SEARCH ON SPEECH ALBAYZIN 2018 EVALUATION
The ALBAYZIN 2018 Search on Speech evaluation is supported by the Spanish Thematic Network on Speech Technology (RTTH) and is organized by Universidad San Pablo-CEU and AuDIaS from Universidad Autónoma de Madrid.
The evaluation involves searching audio content for a list of terms/queries. Two different tasks are defined:
1) SPOKEN TERM DETECTION (STD), where the input to the system is a list of terms, assumed to be unknown when the audio is processed. The system must output a set of occurrences for each term detected in the audio files, along with their timestamps and scores. This is the same task as in the NIST STD 2006 evaluation [2] and the Open Keyword Search evaluations in 2013 [3], 2014 [4], 2015 [5], and 2016 [6].
2) QUERY-BY-EXAMPLE SPOKEN TERM DETECTION (QbE STD), where the input to the system is an acoustic example per query, and no prior knowledge of the correct word/phone transcription of each query is available. As in the STD task, the system must output a set of occurrences for each query detected in the audio files, along with their timestamps. QbE STD is the same task as that proposed in the MediaEval evaluations in 2011, 2012, and 2013 [1].
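As an illustration only (the official submission format is specified in the evaluation plan, and the field names below are hypothetical), a detection in either task can be represented as a term or query identifier, an audio file, a time span, and a confidence score:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One hypothesized occurrence of a term/query in an audio file.

    Illustrative structure only; field names are assumptions, not the
    official ALBAYZIN submission format.
    """
    term: str         # term (STD) or query identifier (QbE STD)
    audio_file: str   # name of the searched audio file
    t_begin: float    # start time of the occurrence, in seconds
    t_end: float      # end time of the occurrence, in seconds
    score: float      # detection confidence

# A system's output is then simply a list of such detections:
hits = [Detection("ejemplo", "talk_03.wav", 12.40, 13.05, 0.87)]
```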
Three different databases will be used in the evaluation:
1) MAVIR database, which comprises a set of talks corresponding to MAVIR workshops held in 2006, 2007, and 2008 (http://www.lllf.uam.es/ESP/CorpusMavir.html). This database has been used in previous editions of this evaluation and is used for comparison purposes.
2) COREMAH database, which comprises a set of conversations covering rejection, compliment, and apology speech acts, produced by non-native speakers with different levels of proficiency in Spanish (http://www.lllf.uam.es/coremah/).
3) TVE database, which comprises a set of Spanish TV (TVE) programs. This database is currently being prepared by the University of Zaragoza, and the data will be ready by June.
PRIMARY EVALUATION METRIC
The Actual Term Weighted Value (ATWV) will be the primary metric for both the STD and QbE STD tasks.
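For reference, ATWV follows the NIST STD 2006 definition: one minus the average, over all terms, of each term's miss probability plus a weight β times its false-alarm probability (β = 999.9 in the NIST setup). A minimal sketch, assuming per-term counts are available and approximating the number of false-alarm trials by the seconds of searched speech (the official scoring is done with the NIST tools, not this code):

```python
BETA = 999.9  # false-alarm weight from the NIST STD 2006 definition

def atwv(per_term_counts, speech_seconds):
    """Term Weighted Value averaged over a term list.

    per_term_counts: iterable of (n_true, n_correct, n_fa) tuples,
        one per term: reference occurrences, correct detections,
        and false alarms at the chosen decision threshold.
    speech_seconds: total duration of the searched audio, used to
        approximate the number of false-alarm trials per term.
    """
    twv_total = 0.0
    n_terms = 0
    for n_true, n_correct, n_fa in per_term_counts:
        p_miss = 1.0 - n_correct / n_true
        p_fa = n_fa / (speech_seconds - n_true)
        twv_total += 1.0 - (p_miss + BETA * p_fa)
        n_terms += 1
    return twv_total / n_terms
```

A perfect system (every occurrence found, no false alarms) scores 1.0; a system that outputs nothing scores 0.0, and false alarms can drive the value below zero because of the large β.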
Interested groups must register for the evaluation before September 24, 2018, by contacting the organizing team at javiertejedornoguerales@gmail. and providing the following information:
Research group (name and acronym)
Institution (university, research center, etc.)
Contact person (name)
More information can be found in the evaluation plan (EvaluationPlanSearchonSpeech).
REFERENCES
[1] Metze, F., Anguera, X., Barnard, E., Davel, M., Gravier, G.: Language independent search in MediaEval's Spoken Web Search task. Computer Speech and Language (2014)
[2] Fiscus, J.G., Ajot, J.G., Garofolo, J.S., Doddington, G.: Results of the 2006 spoken term detection evaluation. In: Proc. of ACM SIGIR, pp. 1-4 (2007)
[3] NIST: NIST Open Keyword Search 2013 Evaluation (OpenKWS13). National Institute of Standards and Technology (NIST), Washington DC, USA, 1 edn. (July 2013), http://www.nist.gov/itl/iad/mig/openkws13.cfm
[4] NIST: NIST Open Keyword Search 2014 Evaluation (OpenKWS14). National Institute of Standards and Technology (NIST), Washington DC, USA, 1 edn. (July 2014), http://www.nist.gov/itl/iad/mig/openkws14.cfm
[5] NIST: NIST Open Keyword Search 2015 Evaluation (OpenKWS15). National Institute of Standards and Technology (NIST), Washington DC, USA, 1 edn. (July 2015), http://www.nist.gov/itl/iad/mig/openkws15.cfm
[6] NIST: NIST Open Keyword Search 2016 Evaluation (OpenKWS16). National Institute of Standards and Technology (NIST), Washington DC, USA, 1 edn. (July 2016), http://www.nist.gov/itl/iad/mig/openkws16.cfm