Zero-resource audio-only spoken term detection based on a combination of template matching techniques
Armando Muscariello (a, 1), Guillaume Gravier (b, 1), Frédéric Bimbot (b, 1)
INTERSPEECH 2011: 12th Annual Conference of the International Speech Communication Association (2011)
Abstract: Spoken term detection is a well-known information retrieval task that seeks to extract contentful information from audio by locating occurrences of known query words of interest. This paper describes a zero-resource approach to this task based on pattern matching of spoken term queries at the acoustic level. The template matching module comprises the cascade of a segmental variant of dynamic time warping and a self-similarity matrix comparison to further improve robustness to speech variability. This solution notably differs from more traditional train-and-test methods that, while shown to be very accurate, rely on the availability of large amounts of linguistic resources. We evaluate our framework on different parameterizations of the speech templates: raw MFCC features, Gaussian posteriorgrams, and French and English phonetic posteriorgrams output by two different state-of-the-art phoneme recognizers.
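To illustrate the kind of acoustic-level matching the abstract refers to, below is a minimal sketch of dynamic time warping between a spoken query and a candidate segment, both represented as frame-level feature sequences (e.g. MFCC vectors or posteriorgrams). It is not the paper's segmental DTW variant or its self-similarity matrix comparison; the cosine frame distance, the path normalization, and the detection threshold are illustrative assumptions.

```python
# Minimal DTW-based template matching sketch (assumptions, not the authors' method):
# a query is detected in a segment if the length-normalized alignment cost
# falls below a threshold.
import numpy as np

def frame_distance(x, y):
    """Cosine distance between two feature frames (assumed choice)."""
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12)

def dtw_distance(query, segment):
    """Length-normalized DTW cost between two (T, D) feature arrays."""
    n, m = len(query), len(segment)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = frame_distance(query[i - 1], segment[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a segment frame
                                 cost[i, j - 1],      # skip a query frame
                                 cost[i - 1, j - 1])  # align the two frames
    return cost[n, m] / (n + m)  # normalize by an upper bound on path length

# Usage with synthetic features (40 and 55 frames of 13-dim MFCC-like vectors);
# the 0.35 threshold is purely illustrative.
query = np.random.rand(40, 13)
segment = np.random.rand(55, 13)
print(dtw_distance(query, segment) < 0.35)
```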
- a – INRIA
- b – CNRS
- 1 : METISS (INRIA - IRISA)
- CNRS : UMR6074 – INRIA – Institut National des Sciences Appliquées (INSA) - Rennes – Université de Rennes 1
- Domain: Computer Science / Signal and Image Processing; Engineering Sciences / Signal and Image Processing
- Keywords: spoken term detection – template matching – unsupervised learning – posterior features
- inria-00597907, version 1
- http://hal.inria.fr/inria-00597907
- oai:hal.inria.fr:inria-00597907
- Contributor: Armando Muscariello
- Submitted on: Monday, August 8, 2011, 13:51:55
- Last modified on: Monday, August 8, 2011, 14:28:15