[1] K. S. Rao, “Acquisition and Incorporation of Prosody Knowledge for Speech Systems in Indian Languages,” Ph.D. Thesis, Indian Institute of Technology Madras, Chennai, May 2005.
[2] L. Mary, K. S. Rao, S. V. Gangashetty and B. Yegnanarayana, “Neural Network Models for Capturing Duration and Intonation Knowledge for Language and Speaker Identification,” International Conference on Cognitive and Neural Systems, Boston, May 2004.
[3] A. S. M. Kumar, S. Rajendran and B. Yegnanarayana, “Intonation Component of Text-to-Speech System for Hindi,” Computer Speech and Language, Vol. 7, No. 3, 1993, pp. 283-301. doi:10.1006/csla.1993.1015
[4] S. Werner and E. Keller, “Prosodic Aspects of Speech,” in Fundamentals of Speech Synthesis and Speech Recognition: Basic Concepts, State of the Art and Future Challenges, E. Keller, Ed., John Wiley, Chichester, 1994, pp. 23-40.
[5] K. K. Kumar, “Duration and Intonation Knowledge for Text-to-Speech Conversion System for Telugu and Hindi,” Master’s Thesis, Indian Institute of Technology Madras, Chennai, May 2002.
[6] S. R. R. Kumar, “Significance of Durational Knowledge for a Text-to-Speech System in an Indian Language,” Master’s Thesis, Indian Institute of Technology Madras, Chennai, March 1990.
[7] O. Sayli, “Duration Analysis and Modeling for Turkish Text-to-Speech Synthesis,” Master’s Thesis, Bogazici University, Istanbul, 2002.
[8] A. Chopde, “ITRANS Indian Language Transliteration Package Version 5.2 Source.” http://www.aczoom.com/itrans/
[9] A. N. Khan, S. V. Gangashetty and S. Rajendran, “Speech Database for Indian Languages—A Preliminary Study,” International Conference on Natural Language Processing, Mumbai, December 2002, pp. 295-301.
[10] A. N. Khan, S. V. Gangashetty and B. Yegnanarayana, “Syllabic Properties of Three Indian Languages: Implications for Speech Recognition and Language Identification,” International Conference on Natural Language Processing, Mysore, December 2003, pp. 125-134.
[11] O. Fujimura, “Syllable as a Unit of Speech Recognition,” IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 23, No. 1, 1975, pp. 82-87. doi:10.1109/TASSP.1975.1162631
[12] D. H. Klatt, “Review of Text-to-Speech Conversion for English,” Journal of the Acoustical Society of America, Vol. 82, No. 3, 1987, pp. 737-793. doi:10.1121/1.395275
[13] S. Haykin, “Neural Networks: A Comprehensive Foundation,” Pearson Education Asia, Inc., New Delhi, 1999.
[14] M. Riedi, “A Neural Network Based Model of Segmental Duration for Speech Synthesis,” Proceedings of European Conference on Speech Communication and Technology, Madrid, September 1995, pp. 599-602.
[15] K. S. Rao and B. Yegnanarayana, “Modeling Syllable Duration in Indian Languages Using Neural Networks,” Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, May 2004, pp. 313-316.
[16] W. N. Campbell, “Predicting Segmental Durations for Accommodation within a Syllable-Level Timing Framework,” Proceedings of European Conference on Speech Communication and Technology, Berlin, Vol. 2, September 1993, pp. 1081-1084.
[17] K. S. Rao and B. Yegnanarayana, “Intonation Modeling for Indian Languages,” Proceedings of International Conference on Spoken Language Processing, Jeju Island, October 2004, pp. 733-736.
[18] M. Vainio and T. Altosaar, “Modeling the Microprosody of Pitch and Loudness for Speech Synthesis with Neural Networks,” Proceedings of International Conference on Spoken Language Processing, Sydney, December 1998.
[19] S. Lee, K. Hirose and N. Minematsu, “Incorporation of Prosodic Modules for Large Vocabulary Continuous Speech Recognition,” Proceedings of ISCA Workshop on Prosody in Speech Recognition and Understanding, New Jersey, 2001, pp. 97-101.
[20] K. Iwano, T. Seki and S. Furui, “Noise Robust Speech Recognition Using F0 Contour Extracted by Hough Transform,” Proceedings of International Conference on Spoken Language Processing, Denver, 2002, pp. 941-944.
[21] L. Mary and B. Yegnanarayana, “Prosodic Features for Speaker Verification,” Proceedings of International Conference on Spoken Language Processing, Pittsburgh, September 2006, pp. 917-920.
[22] L. Mary, “Multi Level Implicit Features for Language and Speaker Recognition,” Ph.D. Thesis, Indian Institute of Technology Madras, Chennai, June 2006.
[23] L. Mary and B. Yegnanarayana, “Consonant-Vowel Based Features for Language Identification,” International Conference on Natural Language Processing, Kanpur, December 2005, pp. 103-106.
[24] L. Mary, K. S. Rao and B. Yegnanarayana, “Neural Network Classifiers for Language Identification Using Phonotactic and Prosodic Features,” Proceedings of International Conference on Intelligent Sensing and Information Processing (ICISIP), Chennai, January 2005, pp. 404-408. doi:10.1109/ICISIP.2005.1529486
[25] S. R. R. Kumar and B. Yegnanarayana, “Significance of Durational Knowledge for Speech Synthesis in Indian Languages,” Proceedings of IEEE Region 10 Conference Convergent Technologies for the Asia-Pacific, Bombay, November 1989, pp. 486-489.
[26] E. D. Sontag, “Feedback Stabilization Using Two Hidden Layer Nets,” IEEE Transactions on Neural Networks, Vol. 3, No. 6, November 1992, pp. 981-990. doi:10.1109/72.165599
[27] B. Yegnanarayana, “Artificial Neural Networks,” Prentice-Hall, New Delhi, India, 1999.