Nima Mesgarani
2020 – today
- 2024
- [c61] Xilin Jiang, Cong Han, Yinghao Aaron Li, Nima Mesgarani: Exploring Self-supervised Contrastive Learning of Spatial Sound Event Representation. ICASSP 2024: 1281-1285
- [i36] Gavin Mischler, Yinghao Aaron Li, Stephan Bickel, Ashesh D. Mehta, Nima Mesgarani: Contextual Feature Extraction Hierarchies Converge in Large Language Models and the Brain. CoRR abs/2401.17671 (2024)
- [i35] Xilin Jiang, Cong Han, Yinghao Aaron Li, Nima Mesgarani: Listen, Chat, and Edit: Text-Guided Soundscape Modification for Enhanced Auditory Experience. CoRR abs/2402.03710 (2024)
- [i34] Xilin Jiang, Cong Han, Nima Mesgarani: Dual-path Mamba: Short and Long-term Bidirectional Selective Structured State Space Models for Speech Separation. CoRR abs/2403.18257 (2024)
- [i33] Siavash Shams, Sukru Samet Dindar, Xilin Jiang, Nima Mesgarani: SSAMBA: Self-Supervised Audio Representation Learning with Mamba State Space Model. CoRR abs/2405.11831 (2024)
- [i32] Xilin Jiang, Yinghao Aaron Li, Adrian Nicolas Florea, Cong Han, Nima Mesgarani: Speech Slytherin: Examining the Performance and Efficiency of Mamba for Speech Separation, Recognition, and Synthesis. CoRR abs/2407.09732 (2024)
- [i31] Cynthia R. Steinhardt, Menoua Keshishian, Nima Mesgarani, Kim Stachenfeld: DeepSpeech models show Human-like Performance and Processing of Cochlear Implant Inputs. CoRR abs/2407.20535 (2024)
- [i30] Yinghao Aaron Li, Xilin Jiang, Jordan Darefsky, Ge Zhu, Nima Mesgarani: Style-Talker: Finetuning Audio Language Model and Style-Based Text-to-Speech Model for Fast Spoken Dialogue Generation. CoRR abs/2408.11849 (2024)
- [i29] Junkai Wu, Xulin Fan, Bo-Ru Lu, Xilin Jiang, Nima Mesgarani, Mark Hasegawa-Johnson, Mari Ostendorf: Just ASR + LLM? A Study on Speech Large Language Models' Ability to Identify and Understand Speaker in Spoken Dialogue. CoRR abs/2409.04927 (2024)
- [i28] Yinghao Aaron Li, Xilin Jiang, Cong Han, Nima Mesgarani: StyleTTS-ZS: Efficient High-Quality Zero-Shot Text-to-Speech Synthesis with Distilled Time-Varying Style Diffusion. CoRR abs/2409.10058 (2024)
- 2023
- [j12] Gavin Mischler, Menoua Keshishian, Stephan Bickel, Ashesh D. Mehta, Nima Mesgarani: Deep neural networks effectively model neural adaptation to changing background noise and suggest nonlinear noise filtering methods in auditory cortex. NeuroImage 266: 119819 (2023)
- [j11] Gavin Mischler, Vinay S. Raghavan, Menoua Keshishian, Nima Mesgarani: naplib-python: Neural acoustic data processing and analysis tools in python. Softw. Impacts 17: 100541 (2023)
- [c60] Cong Han, Vishal Choudhari, Yinghao Aaron Li, Nima Mesgarani: Improved Decoding of Attentional Selection in Multi-Talker Environments with Self-Supervised Learned Speech Representation. EMBC 2023: 1-5
- [c59] Cong Han, Nima Mesgarani: Online Binaural Speech Separation Of Moving Speakers With A Wavesplit Network. ICASSP 2023: 1-5
- [c58] Yinghao Aaron Li, Cong Han, Xilin Jiang, Nima Mesgarani: Phoneme-Level Bert for Enhanced Prosody of Text-To-Speech with Grapheme Predictions. ICASSP 2023: 1-5
- [c57] Xilin Jiang, Yinghao Aaron Li, Nima Mesgarani: DeCoR: Defy Knowledge Forgetting by Predicting Earlier Audio Codes. INTERSPEECH 2023: 2818-2822
- [c56] Yinghao Aaron Li, Cong Han, Vinay S. Raghavan, Gavin Mischler, Nima Mesgarani: StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models. NeurIPS 2023
- [c55] Yinghao Aaron Li, Cong Han, Nima Mesgarani: SLMGAN: Exploiting Speech Language Model Representations for Unsupervised Zero-Shot Voice Conversion in GANs. WASPAA 2023: 1-5
- [i27] Yinghao Aaron Li, Cong Han, Xilin Jiang, Nima Mesgarani: Phoneme-Level BERT for Enhanced Prosody of Text-to-Speech with Grapheme Predictions. CoRR abs/2301.08810 (2023)
- [i26] Cong Han, Vishal Choudhari, Yinghao Aaron Li, Nima Mesgarani: Improved Decoding of Attentional Selection in Multi-Talker Environments with Self-Supervised Learned Speech Representation. CoRR abs/2302.05756 (2023)
- [i25] Cong Han, Nima Mesgarani: Online Binaural Speech Separation of Moving Speakers With a Wavesplit Network. CoRR abs/2303.07458 (2023)
- [i24] Xilin Jiang, Yinghao Aaron Li, Nima Mesgarani: DeCoR: Defy Knowledge Forgetting by Predicting Earlier Audio Codes. CoRR abs/2305.18441 (2023)
- [i23] Yinghao Aaron Li, Cong Han, Vinay S. Raghavan, Gavin Mischler, Nima Mesgarani: StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models. CoRR abs/2306.07691 (2023)
- [i22] Yinghao Aaron Li, Cong Han, Nima Mesgarani: SLMGAN: Exploiting Speech Language Model Representations for Unsupervised Zero-Shot Voice Conversion in GANs. CoRR abs/2307.09435 (2023)
- [i21] Yinghao Aaron Li, Cong Han, Xilin Jiang, Nima Mesgarani: HiFTNet: A Fast High-Quality Neural Vocoder with Harmonic-plus-Noise Filter and Inverse Short Time Fourier Transform. CoRR abs/2309.09493 (2023)
- [i20] Xilin Jiang, Cong Han, Yinghao Aaron Li, Nima Mesgarani: Exploring Self-Supervised Contrastive Learning of Spatial Sound Event Representation. CoRR abs/2309.15938 (2023)
- 2022
- [c54] Yinghao Aaron Li, Cong Han, Nima Mesgarani: Styletts-VC: One-Shot Voice Conversion by Knowledge Transfer From Style-Based TTS Models. SLT 2022: 920-927
- [i19] Yinghao Aaron Li, Cong Han, Nima Mesgarani: StyleTTS: A Style-Based Generative Model for Natural and Diverse Text-to-Speech Synthesis. CoRR abs/2205.15439 (2022)
- [i18] Yinghao Aaron Li, Cong Han, Nima Mesgarani: StyleTTS-VC: One-Shot Voice Conversion by Knowledge Transfer from Style-Based TTS Models. CoRR abs/2212.14227 (2022)
- 2021
- [j10] Giovanni M. Di Liberto, Jingping Nie, Jeremy Yeaton, Bahar Khalighinejad, Shihab A. Shamma, Nima Mesgarani: Neural representation of linguistic feature hierarchy reflects second-language proficiency. NeuroImage 227: 117586 (2021)
- [j9] Bahar Khalighinejad, Prachi Patel, Jose Herrero, Stephan Bickel, Ashesh D. Mehta, Nima Mesgarani: Functional characterization of human Heschl's gyrus in response to natural speech. NeuroImage 235: 118003 (2021)
- [j8] Yi Luo, Cong Han, Nima Mesgarani: Group Communication With Context Codec for Lightweight Source Separation. IEEE ACM Trans. Audio Speech Lang. Process. 29: 1752-1761 (2021)
- [c53] Yi Luo, Zhuo Chen, Cong Han, Chenda Li, Tianyan Zhou, Nima Mesgarani: Rethinking The Separation Layers In Speech Separation Networks. ICASSP 2021: 1-5
- [c52] Yi Luo, Cong Han, Nima Mesgarani: Ultra-Lightweight Speech Separation Via Group Communication. ICASSP 2021: 16-20
- [c51] Chenxing Li, Jiaming Xu, Nima Mesgarani, Bo Xu: Speaker and Direction Inferred Dual-Channel Speech Separation. ICASSP 2021: 5779-5783
- [c50] Yinghao Aaron Li, Ali Zare, Nima Mesgarani: StarGANv2-VC: A Diverse, Unsupervised, Non-Parallel Framework for Natural-Sounding Voice Conversion. Interspeech 2021: 1349-1353
- [c49] Cong Han, Yi Luo, Chenda Li, Tianyan Zhou, Keisuke Kinoshita, Shinji Watanabe, Marc Delcroix, Hakan Erdogan, John R. Hershey, Nima Mesgarani, Zhuo Chen: Continuous Speech Separation Using Speaker Inventory for Long Recording. Interspeech 2021: 3036-3040
- [c48] Yi Luo, Nima Mesgarani: Implicit Filter-and-Sum Network for End-to-End Multi-Channel Speech Separation. Interspeech 2021: 3071-3075
- [c47] Yi Luo, Cong Han, Nima Mesgarani: Empirical Analysis of Generalized Iterative Speech Separation Networks. Interspeech 2021: 3485-3489
- [c46] Cong Han, Yi Luo, Nima Mesgarani: Binaural Speech Separation of Moving Speakers With Preserved Spatial Cues. Interspeech 2021: 3505-3509
- [c45] Menoua Keshishian, Samuel Norman-Haignere, Nima Mesgarani: Understanding Adaptive, Multiscale Temporal Integration In Deep Speech Recognition Systems. NeurIPS 2021: 24455-24467
- [c44] Yi Luo, Cong Han, Nima Mesgarani: Distortion-Controlled Training for end-to-end Reverberant Speech Separation with Auxiliary Autoencoding Loss. SLT 2021: 825-832
- [i17] Chenxing Li, Jiaming Xu, Nima Mesgarani, Bo Xu: Speaker and Direction Inferred Dual-channel Speech Separation. CoRR abs/2102.04056 (2021)
- [i16] Yinghao Aaron Li, Ali Zare, Nima Mesgarani: StarGANv2-VC: A Diverse, Unsupervised, Non-parallel Framework for Natural-Sounding Voice Conversion. CoRR abs/2107.10394 (2021)
- 2020
- [j7] Enea Ceolini, Jens Hjortkjær, Daniel D. E. Wong, James O'Sullivan, Vinay S. Raghavan, Jose Herrero, Ashesh D. Mehta, Shih-Chii Liu, Nima Mesgarani: Brain-informed speech separation (BISS) for enhancement of target speaker in multitalker speech perception. NeuroImage 223: 117282 (2020)
- [c43] Yi Luo, Zhuo Chen, Nima Mesgarani, Takuya Yoshioka: End-to-end Microphone Permutation and Number Invariant Multi-channel Speech Separation. ICASSP 2020: 6394-6398
- [c42] Cong Han, Yi Luo, Nima Mesgarani: Real-Time Binaural Speech Separation with Preserved Spatial Cues. ICASSP 2020: 6404-6408
- [c41] Yi Luo, Nima Mesgarani: Separating Varying Numbers of Sources with Auxiliary Autoencoding Loss. INTERSPEECH 2020: 2622-2626
- [i15] Cong Han, Yi Luo, Nima Mesgarani: Real-time binaural speech separation with preserved spatial cues. CoRR abs/2002.06637 (2020)
- [i14] Yi Luo, Nima Mesgarani: Separating Varying Numbers of Sources with Auxiliary Autoencoding Loss. CoRR abs/2003.12326 (2020)
- [i13] Yi Luo, Cong Han, Nima Mesgarani: Ultra-Lightweight Speech Separation via Group Communication. CoRR abs/2011.08397 (2020)
- [i12] Yi Luo, Zhuo Chen, Cong Han, Chenda Li, Tianyan Zhou, Nima Mesgarani: Rethinking the Separation Layers in Speech Separation Networks. CoRR abs/2011.08400 (2020)
- [i11] Yi Luo, Nima Mesgarani: Implicit Filter-and-sum Network for Multi-channel Speech Separation. CoRR abs/2011.08401 (2020)
- [i10] Yi Luo, Cong Han, Nima Mesgarani: Group Communication with Context Codec for Ultra-Lightweight Source Separation. CoRR abs/2012.07291 (2020)
- [i9] Cong Han, Yi Luo, Chenda Li, Tianyan Zhou, Keisuke Kinoshita, Shinji Watanabe, Marc Delcroix, Hakan Erdogan, John R. Hershey, Nima Mesgarani, Zhuo Chen: Continuous Speech Separation Using Speaker Inventory for Long Multi-talker Recording. CoRR abs/2012.09727 (2020)
2010 – 2019
- 2019
- [j6] Yi Luo, Nima Mesgarani: Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation. IEEE ACM Trans. Audio Speech Lang. Process. 27(8): 1256-1266 (2019)
- [c40] Yi Luo, Cong Han, Nima Mesgarani, Enea Ceolini, Shih-Chii Liu: FaSNet: Low-Latency Adaptive Beamforming for Multi-Microphone Audio Processing. ASRU 2019: 260-267
- [c39] Cong Han, Yi Luo, Nima Mesgarani: Online Deep Attractor Network for Real-time Single-channel Speech Separation. ICASSP 2019: 361-365
- [c38] Yi Luo, Nima Mesgarani: Augmented Time-frequency Mask Estimation in Cluster-based Source Separation Algorithms. ICASSP 2019: 710-714
- [i8] Yi Luo, Enea Ceolini, Cong Han, Shih-Chii Liu, Nima Mesgarani: FaSNet: Low-latency Adaptive Beamforming for Multi-microphone Audio Processing. CoRR abs/1909.13387 (2019)
- [i7] Yi Luo, Zhuo Chen, Nima Mesgarani, Takuya Yoshioka: End-to-end Microphone Permutation and Number Invariant Multi-channel Speech Separation. CoRR abs/1910.14104 (2019)
- 2018
- [j5] Yi Luo, Zhuo Chen, Nima Mesgarani: Speaker-Independent Speech Separation With Deep Attractor Network. IEEE ACM Trans. Audio Speech Lang. Process. 26(4): 787-796 (2018)
- [c37] Yi Luo, Nima Mesgarani: TaSNet: Time-Domain Audio Separation Network for Real-Time, Single-Channel Speech Separation. ICASSP 2018: 696-700
- [c36] Hassan Akbari, Himani Arora, Liangliang Cao, Nima Mesgarani: Lip2Audspec: Speech Reconstruction from Silent Lip Movements Video. ICASSP 2018: 2516-2520
- [c35] Yi Luo, Nima Mesgarani: Real-time Single-channel Dereverberation and Separation with Time-domain Audio Separation Network. INTERSPEECH 2018: 342-346
- [c34] Rajath Kumar, Yi Luo, Nima Mesgarani: Music Source Activity Detection and Separation Using Deep Attractor Network. INTERSPEECH 2018: 347-351
- [c33] Nima Mesgarani: Speech Processing in the Human Brain Meets Deep Learning. INTERSPEECH 2018: 2206
- [i6] Yi Luo, Nima Mesgarani: TasNet: Surpassing Ideal Time-Frequency Masking for Speech Separation. CoRR abs/1809.07454 (2018)
- 2017
- [c32] James O'Sullivan, Zhuo Chen, Sameer A. Sheth, Guy McKhann, Ashesh D. Mehta, Nima Mesgarani: Neural decoding of attentional selection in multi-speaker environments without access to separated sources. EMBC 2017: 1644-1647
- [c31] Yi Luo, Zhuo Chen, John R. Hershey, Jonathan Le Roux, Nima Mesgarani: Deep clustering and conventional networks for music separation: Stronger together. ICASSP 2017: 61-65
- [c30] Zhuo Chen, Yi Luo, Nima Mesgarani: Deep attractor network for single-microphone speaker separation. ICASSP 2017: 246-250
- [c29] Bahar Khalighinejad, Tasha Nagamine, Ashesh D. Mehta, Nima Mesgarani: NAPLib: An open source toolbox for real-time and offline Neural Acoustic Processing. ICASSP 2017: 846-850
- [c28] Tasha Nagamine, Nima Mesgarani: Understanding the Representation and Computation of Multilayer Perceptrons: A Case Study in Speech Recognition. ICML 2017: 2564-2573
- [i5] Zhuo Chen, Yi Luo, Nima Mesgarani: Speaker-independent Speech Separation with Deep Attractor Network. CoRR abs/1707.03634 (2017)
- [i4] Hassan Akbari, Himani Arora, Liangliang Cao, Nima Mesgarani: Lip2AudSpec: Speech reconstruction from silent lip movements video. CoRR abs/1710.09798 (2017)
- [i3] Yi Luo, Nima Mesgarani: TasNet: time-domain audio separation network for real-time, single-channel speech separation. CoRR abs/1711.00541 (2017)
- 2016
- [c27] Okko Räsänen, Tasha Nagamine, Nima Mesgarani: Analyzing distributional learning of phonemic categories in unsupervised deep neural networks. CogSci 2016
- [c26] Bahar Khalighinejad, Laura Kathleen Long, Nima Mesgarani: Designing a hands-on brain computer interface laboratory course. EMBC 2016: 3010-3014
- [c25] Wenhao Zhang, Hanyu Li, Minda Yang, Nima Mesgarani: Synaptic depression in deep neural networks for speech processing. ICASSP 2016: 5865-5869
- [c24] Tasha Nagamine, Michael L. Seltzer, Nima Mesgarani: On the Role of Nonlinear Transformations in Deep Neural Network Acoustic Models. INTERSPEECH 2016: 803-807
- [c23] Tasha Nagamine, Zhuo Chen, Nima Mesgarani: Adaptation of Neural Networks Constrained by Prior Statistics of Node Co-Activations. INTERSPEECH 2016: 1583-1587
- [i2] Yi Luo, Zhuo Chen, John R. Hershey, Jonathan Le Roux, Nima Mesgarani: Deep Clustering and Conventional Networks for Music Separation: Stronger Together. CoRR abs/1611.06265 (2016)
- [i1] Zhuo Chen, Yi Luo, Nima Mesgarani: Deep attractor network for single-microphone speaker separation. CoRR abs/1611.08930 (2016)
- 2015
- [c22] Minda Yang, Sameer A. Sheth, Catherine A. Schevon, Guy M. McKhann II, Nima Mesgarani: Speech reconstruction from human auditory cortex with deep neural networks. INTERSPEECH 2015: 1121-1125
- [c21] Tasha Nagamine, Michael L. Seltzer, Nima Mesgarani: Exploring how deep neural networks form phonemic categories. INTERSPEECH 2015: 1912-1916
- [c20] Nima Mesgarani, Mark D. Plumbley: Keynote addresses: Reverse engineering the neural mechanisms involved in robust speech processing. WASPAA 2015: 5
- 2014
- [c19] Nagaraj Mahajan, Nima Mesgarani, Hynek Hermansky: Principal components of auditory spectro-temporal receptive fields. INTERSPEECH 2014: 1983-1987
- [r1] Nima Mesgarani: Stimulus Reconstruction from Cortical Responses. Encyclopedia of Computational Neuroscience 2014
- 2013
- [c18] Oldrich Plchot, Spyros Matsoukas, Pavel Matejka, Najim Dehak, Jeff Z. Ma, Sandro Cumani, Ondrej Glembek, Hynek Hermansky, Sri Harish Reddy Mallidi, Nima Mesgarani, Richard M. Schwartz, Mehdi Soufifar, Zheng-Hua Tan, Samuel Thomas, Bing Zhang, Xinhui Zhou: Developing a speaker identification system for the DARPA RATS project. ICASSP 2013: 6768-6772
- 2012
- [c17] Daniel Garcia-Romero, Xinhui Zhou, Dmitry N. Zotkin, Balaji Vasan Srinivasan, Yuancheng Luo, Sriram Ganapathy, Samuel Thomas, Sridhar Krishna Nemala, Garimella S. V. S. Sivaram, Majid Mirbagheri, Sri Harish Reddy Mallidi, Thomas Janu, Padmanabhan Rajan, Nima Mesgarani, Mounya Elhilali, Hynek Hermansky, Shihab A. Shamma, Ramani Duraiswami: The UMD-JHU 2011 speaker recognition system. ICASSP 2012: 4229-4232
- [c16] Xinhui Zhou, Daniel Garcia-Romero, Nima Mesgarani, Maureen L. Stone, Carol Y. Espy-Wilson, Shihab A. Shamma: Automatic intelligibility assessment of pathologic speech in head and neck cancer based on auditory-inspired spectro-temporal modulations. INTERSPEECH 2012: 542-545
- [c15] Nima Mesgarani, Edward Chang: Speech and speaker separation in human auditory cortex. INTERSPEECH 2012: 1480-1483
- [c14] Tim Ng, Bing Zhang, Long Nguyen, Spyros Matsoukas, Xinhui Zhou, Nima Mesgarani, Karel Veselý, Pavel Matejka: Developing a Speech Activity Detection System for the DARPA RATS Program. INTERSPEECH 2012: 1969-1972
- [c13] Samuel Thomas, Sri Harish Reddy Mallidi, Thomas Janu, Hynek Hermansky, Nima Mesgarani, Xinhui Zhou, Shihab A. Shamma, Tim Ng, Bing Zhang, Long Nguyen, Spyros Matsoukas: Acoustic and Data-driven Features for Robust Speech Activity Detection. INTERSPEECH 2012: 1985-1988
- 2011
- [c12] Nima Mesgarani, Shihab A. Shamma: Speech processing with a cortical representation of audio. ICASSP 2011: 5872-5875
- [c11] Nima Mesgarani, Samuel Thomas, Hynek Hermansky: Adaptive Stream Fusion in Multistream Recognition of Speech. INTERSPEECH 2011: 2329-2332
- [c10] Hynek Hermansky, Nima Mesgarani, Samuel Thomas: Performance monitoring for robustness in automatic recognition of speech. MLSLP 2011: 31-34
- 2010
- [j4] Nima Mesgarani, Jonathan B. Fritz, Shihab A. Shamma: A computational model of rapid task-related plasticity of auditory cortical receptive fields. J. Comput. Neurosci. 28(1): 19-27 (2010)
- [j3] Garimella S. V. S. Sivaram, Sridhar Krishna Nemala, Nima Mesgarani, Hynek Hermansky: Data-Driven and Feedback Based Spectro-Temporal Features for Speech Recognition. IEEE Signal Process. Lett. 17(11): 957-960 (2010)
- [c9] Majid Mirbagheri, Nima Mesgarani, Shihab A. Shamma: Nonlinear filtering of spectrotemporal modulations in speech enhancement. ICASSP 2010: 5478-5481
- [c8] Nima Mesgarani, Samuel Thomas, Hynek Hermansky: A multistream multiresolution framework for phoneme recognition. INTERSPEECH 2010: 318-321
- [c7] Samuel Thomas, Kailash Patil, Sriram Ganapathy, Nima Mesgarani, Hynek Hermansky: A phoneme recognition framework based on auditory spectro-temporal receptive fields. INTERSPEECH 2010: 2458-2461
- [c6] Shih-Chii Liu, Nima Mesgarani, John G. Harris, Hynek Hermansky: The use of spike-based representations for hardware audition systems. ISCAS 2010: 505-508
2000 – 2009
- 2009
- [c5] Nima Mesgarani, Garimella S. V. S. Sivaram, Sridhar Krishna Nemala, Mounya Elhilali, Hynek Hermansky: Discriminant spectrotemporal features for phoneme recognition. INTERSPEECH 2009: 2983-2986
- 2007
- [j2] Nima Mesgarani, Shihab A. Shamma: Denoising in the Domain of Spectrotemporal Modulations. EURASIP J. Audio Speech Music. Process. 2007 (2007)
- [c4] Nima Mesgarani, Stephen V. David, Shihab A. Shamma: Representation of Phonemes in Primary Auditory Cortex: How the Brain Analyzes Speech. ICASSP (4) 2007: 765-768
- 2006
- [j1] Nima Mesgarani, Malcolm Slaney, Shihab A. Shamma: Discrimination of speech from nonspeech based on multiscale spectro-temporal modulations. IEEE Trans. Speech Audio Process. 14(3): 920-930 (2006)
- [c3] Ryan Rifkin, Nima Mesgarani: Discriminating speech and non-speech with regularized least squares. INTERSPEECH 2006
- 2005
- [c2] Nima Mesgarani, Shihab A. Shamma: Speech Enhancement Based on Filtering the Spectrotemporal Modulations. ICASSP (1) 2005: 1105-1108
- 2004
- [c1] Nima Mesgarani, Shihab A. Shamma, Malcolm Slaney: Speech discrimination based on multiscale spectro-temporal modulations. ICASSP (1) 2004: 601-604
last updated on 2024-10-22 21:19 CEST by the dblp team
all metadata released as open data under CC0 1.0 license