=== State of the art ===
In early comparisons, TDNN-based phoneme recognizers compared favourably with HMM-based phone models.<ref name="phoneme detection" /><ref name=":3" /> Modern deep TDNN architectures include many more hidden layers and sub-sample or pool connections over broader contexts at higher layers. They achieve up to 50% word error reduction over [[Mixture model|GMM]]-based acoustic models.<ref name=":4">{{cite book |doi=10.21437/Interspeech.2015-647 |doi-access=free |s2cid=8536162 |chapter=A time delay neural network architecture for efficient modeling of long temporal contexts |title=Interspeech 2015 |date=2015 |last1=Peddinti |first1=Vijayaditya |last2=Povey |first2=Daniel |last3=Khudanpur |first3=Sanjeev |pages=3214–3218 }}</ref><ref name=":5">David Snyder, Daniel Garcia-Romero, Daniel Povey, ''[http://danielpovey.com/files/2015_asru_tdnn_ubm.pdf A Time-Delay Deep Neural Network-Based Universal Background Models for Speaker Recognition]'', Proceedings of ASRU 2015.</ref> While the successive layers of a TDNN are intended to learn features of increasing context width, they nonetheless model only local temporal contexts.
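The layer structure described above can be illustrated with a minimal NumPy sketch (not from the cited sources; the function name, context sets, and dimensions are illustrative): each output frame is a weighted, shared-weight combination of a fixed set of time-delayed input frames, and stacking layers with wider or strided contexts broadens the effective receptive field.

```python
import numpy as np

def tdnn_layer(x, w, b, context, stride=1):
    """One illustrative TDNN layer.
    x: (T, D_in) input frames; w: (len(context), D_in, D_out); b: (D_out,).
    'context' lists the time offsets each output frame depends on;
    weights are shared across time (a 1-D convolution over frames)."""
    lo, hi = min(context), max(context)
    T = x.shape[0]
    out = []
    for t in range(-lo, T - hi, stride):
        # Sum weighted contributions from each delayed input frame
        h = b + sum(x[t + c] @ w[i] for i, c in enumerate(context))
        out.append(np.maximum(h, 0.0))  # ReLU nonlinearity
    return np.array(out)

# Two stacked layers: contexts [-2..2] then {-3, 0, 3}, so each frame of
# h2 depends on an 11-frame span of the original input.
rng = np.random.default_rng(0)
x = rng.standard_normal((20, 8))                     # 20 frames, 8 features
h1 = tdnn_layer(x, rng.standard_normal((5, 8, 16)) * 0.1,
                np.zeros(16), context=[-2, -1, 0, 1, 2])
h2 = tdnn_layer(h1, rng.standard_normal((3, 16, 16)) * 0.1,
                np.zeros(16), context=[-3, 0, 3])
```

Sub-sampled contexts such as {-3, 0, 3} in the higher layer mirror how modern deep TDNNs widen context cheaply: rather than splicing every intermediate frame, only a few spaced offsets are connected.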
When longer-range relationships and pattern sequences must be processed, learning states and state sequences becomes important, and TDNNs can be combined with other modelling techniques.<ref name=":6">{{Cite journal |last1=Haffner |first1=Patrick |last2=Waibel |first2=Alex |date=1991 |title=Multi-State Time Delay Networks for Continuous Speech Recognition |url=https://proceedings.neurips.cc/paper_files/paper/1991/hash/069d3bb002acd8d7dd095917f9efe4cb-Abstract.html |website=proceedings.neurips.cc |volume=4 |publisher=NIPS |pages=135–142}}</ref><ref name=":1" /><ref name=":2" /> TDNN architectures have also been adapted to [[Spiking neural network|Spiking Neural Networks]], yielding state-of-the-art results while lending themselves to energy-efficient [[Neuromorphic chip|hardware implementations]].<ref>{{Cite journal |last1=D’Agostino |first1=Simone |last2=Moro |first2=Filippo |last3=Torchet |first3=Tristan |last4=Demirağ |first4=Yiğit |last5=Grenouillet |first5=Laurent |last6=Castellani |first6=Niccolò |last7=Indiveri |first7=Giacomo |last8=Vianello |first8=Elisa |last9=Payvand |first9=Melika |date=2024-04-24 |title=DenRAM: neuromorphic dendritic architecture with RRAM for efficient temporal processing with delays |url=https://www.nature.com/articles/s41467-024-47764-w |journal=Nature Communications |language=en |volume=15 |issue=1 |pages=3446 |doi=10.1038/s41467-024-47764-w |issn=2041-1723|pmc=11043378 }}</ref>
== Applications ==
== References ==
{{reflist}}
* {{Cite journal |last1=Hampshire |first1=John |last2=Waibel |first2=Alex |orig-date=November 30, 1989 |editor-last=Touretzky |editor-first=David |title=Connectionist Architectures for Multi-Speaker Phoneme Recognition |url=http://papers.nips.cc/paper/213-connectionist-architectures-for-multi-speaker-phoneme-recognition |journal=Advances in Neural Information Processing Systems 2 |date=1990 |pages=203–210}}
* {{Cite journal |last1=Waibel |first1=Alex |date=1987 |orig-date=December |title=Phoneme Recognition Using Time-Delay Neural Networks |url=https://www.researchgate.net/publication/391037926 |journal=Meeting of the Institute of Electrical, Information and Communication Engineers (IEICE) |___location=Japan}}
[[Category:Neural network architectures]]
[[Category:1987 in artificial intelligence]]