Ju-Chiang Wang
2020 – today

- 2024
- [j6] Yin Zhu, Qiuqiang Kong, Junjie Shi, Shilei Liu, Xuzhou Ye, Ju-Chiang Wang, Hongming Shan, Junping Zhang: End-to-End Paired Ambisonic-Binaural Audio Rendering. IEEE CAA J. Autom. Sinica 11(2): 502-513 (2024)
- [c30] Wei Tsung Lu, Ju-Chiang Wang, Qiuqiang Kong, Yun-Ning Hung: Music Source Separation With Band-Split RoPE Transformer. ICASSP 2024: 481-485
- [c29] Julian D. Parker, Janne Spijkervet, Katerina Kosta, Furkan Yesiler, Boris Kuznetsov, Ju-Chiang Wang, Matt Avent, Jitong Chen, Duc Le: STEMGEN: A Music Generation Model That Listens. ICASSP 2024: 1116-1120
- [i22] Qiqi He, Xuchen Song, Weituo Hao, Ju-Chiang Wang, Wei-Tsung Lu, Wei Li: Music Era Recognition Using Supervised Contrastive Learning and Artist Information. CoRR abs/2407.05368 (2024)
- [i21] Haonan Chen, Jordan B. L. Smith, Janne Spijkervet, Ju-Chiang Wang, Pei Zou, Bochen Li, Qiuqiang Kong, Xingjian Du: SymPAC: Scalable Symbolic Music Generation With Prompts And Constraints. CoRR abs/2409.03055 (2024)
- [i20] Ju-Chiang Wang, Wei-Tsung Lu, Jitong Chen: Mel-RoFormer for Vocal Separation and Vocal Melody Transcription. CoRR abs/2409.04702 (2024)
- [i19] Ye Bai, Haonan Chen, Jitong Chen, Zhuo Chen, Yi Deng, Xiaohong Dong, Lamtharn Hantrakul, Weituo Hao, Qingqing Huang, Zhongyi Huang, Dongya Jia, Feihu La, Duc Le, Bochen Li, Chumin Li, Hui Li, Xingxing Li, Shouda Liu, Wei-Tsung Lu, Yiqing Lu, Andrew Shaw, Janne Spijkervet, Yakun Sun, Bo Wang, Ju-Chiang Wang, Yuping Wang, Yuxuan Wang, Ling Xu, Yifeng Yang, Chao Yao, Shuo Zhang, Yang Zhang, Yilin Zhang, Hang Zhao, Ziyi Zhao, Dejian Zhong, Shicen Zhou, Pei Zou: Seed-Music: A Unified Framework for High Quality and Controlled Music Generation. CoRR abs/2409.09214 (2024)
- 2023
- [c28] Mojtaba Heydari, Ju-Chiang Wang, Zhiyao Duan: SingNet: A Real-Time Singing Voice Beat and Downbeat Tracking System. ICASSP 2023: 1-5
- [c27] Wei Tsung Lu, Ju-Chiang Wang, Yun-Ning Hung: Multitrack Music Transcription with a Time-Frequency Perceiver. ICASSP 2023: 1-5
- [i18] Daiyu Zhang, Ju-Chiang Wang, Katerina Kosta, Jordan B. L. Smith, Shicen Zhou: Modeling the Rhythm from Lyrics for Melody Generation of Pop Song. CoRR abs/2301.01361 (2023)
- [i17] Kin Wai Cheuk, Keunwoo Choi, Qiuqiang Kong, Bochen Li, Minz Won, Ju-Chiang Wang, Yun-Ning Hung, Dorien Herremans: Jointist: Simultaneous Improvement of Multi-instrument Transcription and Music Source Separation via Joint Training. CoRR abs/2302.00286 (2023)
- [i16] Wei Tsung Lu, Ju-Chiang Wang, Yun-Ning Hung: Multitrack Music Transcription with a Time-Frequency Perceiver. CoRR abs/2306.10785 (2023)
- [i15] Wei Tsung Lu, Ju-Chiang Wang, Qiuqiang Kong, Yun-Ning Hung: Music Source Separation with Band-Split RoPE Transformer. CoRR abs/2309.02612 (2023)
- [i14] Yun-Ning Hung, Ju-Chiang Wang, Minz Won, Duc Le: Scaling Up Music Information Retrieval Training with Semi-Supervised Learning. CoRR abs/2310.01353 (2023)
- [i13] Ju-Chiang Wang, Wei Tsung Lu, Minz Won: Mel-Band RoFormer for Music Source Separation. CoRR abs/2310.01809 (2023)
- [i12] Julian D. Parker, Janne Spijkervet, Katerina Kosta, Furkan Yesiler, Boris Kuznetsov, Ju-Chiang Wang, Matt Avent, Jitong Chen, Duc Le: StemGen: A music generation model that listens. CoRR abs/2312.08723 (2023)
- 2022
- [c26] Yun-Ning Hung, Ju-Chiang Wang, Xuchen Song, Wei Tsung Lu, Minz Won: Modeling Beats and Downbeats with a Time-Frequency Transformer. ICASSP 2022: 401-405
- [c25] Ju-Chiang Wang, Yun-Ning Hung, Jordan B. L. Smith: To Catch A Chorus, Verse, Intro, or Anything Else: Analyzing a Song with Structural Functions. ICASSP 2022: 416-420
- [c24] Daiyu Zhang, Ju-Chiang Wang, Katerina Kosta, Jordan B. L. Smith, Shicen Zhou: Modeling the Rhythm from Lyrics for Melody Generation of Pop Songs. ISMIR 2022: 141-148
- [i11] Ju-Chiang Wang, Yun-Ning Hung, Jordan B. L. Smith: To Catch a Chorus, Verse, Intro, or Anything Else: Analyzing a Song with Structural Functions. CoRR abs/2205.14700 (2022)
- [i10] Yun-Ning Hung, Ju-Chiang Wang, Xuchen Song, Wei Tsung Lu, Minz Won: Modeling Beats and Downbeats with a Time-Frequency Transformer. CoRR abs/2205.14701 (2022)
- [i9] Kin Wai Cheuk, Keunwoo Choi, Qiuqiang Kong, Bochen Li, Minz Won, Amy Hung, Ju-Chiang Wang, Dorien Herremans: Jointist: Joint Learning for Multi-instrument Transcription and Its Applications. CoRR abs/2206.10805 (2022)
- [i8] Yin Zhu, Qiuqiang Kong, Junjie Shi, Shilei Liu, Xuzhou Ye, Ju-Chiang Wang, Junping Zhang: Binaural Rendering of Ambisonic Signals by Neural Networks. CoRR abs/2211.02301 (2022)
- [i7] Ju-Chiang Wang, Jordan B. L. Smith, Yun-Ning Hung: MuSFA: Improving Music Structural Function Analysis with Partially Labeled Data. CoRR abs/2211.15787 (2022)
- 2021
- [c23] Jiawen Huang, Ju-Chiang Wang, Jordan B. L. Smith, Xuchen Song, Yuxuan Wang: Modeling the Compatibility of Stem Tracks to Generate Music Mashups. AAAI 2021: 187-195
- [c22] Ju-Chiang Wang, Jordan B. L. Smith, Jitong Chen, Xuchen Song, Yuxuan Wang: Supervised Chorus Detection for Popular Music Using Convolutional Neural Network and Multi-Task Learning. ICASSP 2021: 566-570
- [c21] Wei Tsung Lu, Ju-Chiang Wang, Minz Won, Keunwoo Choi, Xuchen Song: SpecTNT: A Time-Frequency Transformer for Music Audio. ISMIR 2021: 396-403
- [c20] Ju-Chiang Wang, Jordan B. L. Smith, Wei Tsung Lu, Xuchen Song: Supervised Metric Learning for Music Structure Features. ISMIR 2021: 730-737
- [i6] Jiawen Huang, Ju-Chiang Wang, Jordan B. L. Smith, Xuchen Song, Yuxuan Wang: Modeling the Compatibility of Stem Tracks to Generate Music Mashups. CoRR abs/2103.14208 (2021)
- [i5] Ju-Chiang Wang, Jordan B. L. Smith, Jitong Chen, Xuchen Song, Yuxuan Wang: Supervised Chorus Detection for Popular Music Using Convolutional Neural Network and Multi-Task Learning. CoRR abs/2103.14253 (2021)
- [i4] Ju-Chiang Wang, Jordan B. L. Smith, Wei Tsung Lu, Xuchen Song: Supervised Metric Learning for Music Structure Feature. CoRR abs/2110.09000 (2021)
- [i3] Wei Tsung Lu, Ju-Chiang Wang, Minz Won, Keunwoo Choi, Xuchen Song: SpecTNT: A Time-Frequency Transformer for Music Audio. CoRR abs/2110.09127 (2021)
2010 – 2019
- 2018
- [j5] Yu-Hao Chin, Jia-Ching Wang, Ju-Chiang Wang, Yi-Hsuan Yang: Predicting the Probability Density Function of Music Emotion Using Emotion Space Mapping. IEEE Trans. Affect. Comput. 9(4): 541-549 (2018)
- 2017
- [j4] Yu-An Chen, Ju-Chiang Wang, Yi-Hsuan Yang, Homer H. Chen: Component Tying for Mixture Model Adaptation in Personalization of Music Emotion Recognition. IEEE ACM Trans. Audio Speech Lang. Process. 25(7): 1409-1420 (2017)
- [p2] Ju-Chiang Wang, Yi-Hsuan Yang, Hsin-Min Wang: Affective Music Information Retrieval. Emotions and Personality in Personalized Services 2017: 227-261
- 2016
- [p1] Yi-Hsuan Yang, Ju-Chiang Wang, Yu-An Chen, Homer H. Chen: Model Adaptation for Personalized Music Emotion Recognition. Handbook of Pattern Recognition and Computer Vision 2016: 175-193
- 2015
- [j3] Ju-Chiang Wang, Yi-Hsuan Yang, Hsin-Min Wang, Shyh-Kang Jeng: Modeling the Affective Content of Music with a Gaussian Mixture Model. IEEE Trans. Affect. Comput. 6(1): 56-68 (2015)
- [c19] Yu-An Chen, Yi-Hsuan Yang, Ju-Chiang Wang, Homer H. Chen: The AMG1608 Dataset for Music Emotion Recognition. ICASSP 2015: 693-697
- [c18] Ju-Chiang Wang, Hsin-Min Wang, Gert R. G. Lanckriet: A Histogram Density Modeling Approach to Music Emotion Recognition. ICASSP 2015: 698-702
- [i2] Ju-Chiang Wang, Hung-Yan Gu, Hsin-Min Wang: Mandarin Singing Voice Synthesis Based on Harmonic Plus Noise Model and Singing Expression Analysis. CoRR abs/1502.04300 (2015)
- [i1] Ju-Chiang Wang, Yi-Hsuan Yang, Hsin-Min Wang: Affective Music Information Retrieval. CoRR abs/1502.05131 (2015)
- 2014
- [j2] Li Su, Chin-Chia Michael Yeh, Jen-Yu Liu, Ju-Chiang Wang, Yi-Hsuan Yang: A Systematic Evaluation of the Bag-of-Frames Representation for Music Information Retrieval. IEEE Trans. Multim. 16(5): 1188-1200 (2014)
- [c17] Chin-Chia Michael Yeh, Ju-Chiang Wang, Yi-Hsuan Yang, Hsin-Min Wang: Improving Music Auto-Tagging by Intra-Song Instance Bagging. ICASSP 2014: 2139-2143
- [c16] Yu-An Chen, Ju-Chiang Wang, Yi-Hsuan Yang, Homer H. Chen: Linear Regression-Based Adaptation of Music Emotion Recognition Models for Personalization. ICASSP 2014: 2149-2153
- [c15] Shuo-Yang Wang, Ju-Chiang Wang, Yi-Hsuan Yang, Hsin-Min Wang: Towards Time-Varying Music Auto-Tagging Based on CAL500 Expansion. ICME 2014: 1-6
- [c14] Ju-Chiang Wang, Ming-Chi Yen, Yi-Hsuan Yang, Hsin-Min Wang: Automatic Set List Identification and Song Segmentation for Full-Length Concert Videos. ISMIR 2014: 239-244
- [c13] Shenggao Zhu, Jingli Cai, Jiangang Zhang, Zhonghua Li, Ju-Chiang Wang, Ye Wang: Bridging the User Intention Gap: An Intelligent and Interactive Multidimensional Music Search Engine. WISMM 2014: 59-64
- 2013
- [c12] Zhonghua Li, Ju-Chiang Wang, Jingli Cai, Zhiyan Duan, Hsin-Min Wang, Ye Wang: Non-Reference Audio Quality Assessment for Online Live Music Recordings. ACM Multimedia 2013: 63-72
- 2012
- [c11] Ju-Chiang Wang, Yi-Hsuan Yang, Hsin-Min Wang, Shyh-Kang Jeng: Personalized Music Emotion Recognition via Model Adaptation. APSIPA 2012: 1-7
- [c10] Ju-Chiang Wang, Hsin-Min Wang, Shyh-Kang Jeng: Playing with Tagging: A Real-Time Tagging Music Player. ICASSP 2012: 77-80
- [c9] Ju-Chiang Wang, Yi-Hsuan Yang, Kaichun Chang, Hsin-Min Wang, Shyh-Kang Jeng: Exploring the Relationship between Categorical and Dimensional Emotion Semantics of Music. MIRUM 2012: 63-68
- [c8] Ju-Chiang Wang, Yi-Hsuan Yang, Hsin-Min Wang, Shyh-Kang Jeng: The Acoustic Emotion Gaussians Model for Emotion-Based Music Annotation and Retrieval. ACM Multimedia 2012: 89-98
- [c7] Ju-Chiang Wang, Yi-Hsuan Yang, I-Hong Jhuo, Yen-Yu Lin, Hsin-Min Wang: The Acoustic-Visual Emotion Gaussians Model for Automatic Generation of Music Video. ACM Multimedia 2012: 1379-1380
- 2011
- [j1] Hung-Yi Lo, Ju-Chiang Wang, Hsin-Min Wang, Shou-De Lin: Cost-Sensitive Multi-Label Learning for Audio Tag Annotation and Retrieval. IEEE Trans. Multim. 13(3): 518-529 (2011)
- [c6] Hung-Yi Lo, Ju-Chiang Wang, Hsin-Min Wang, Shou-De Lin: Cost-Sensitive Stacking for Audio Tag Annotation and Retrieval. ICASSP 2011: 2308-2311
- [c5] Ju-Chiang Wang, Meng-Sung Wu, Hsin-Min Wang, Shyh-Kang Jeng: Query by Multi-Tags with Multi-Level Preferences for Content-Based Music Retrieval. ICME 2011: 1-6
- [c4] Ju-Chiang Wang, Hung-Shin Lee, Hsin-Min Wang, Shyh-Kang Jeng: Learning the Similarity of Audio Music in Bag-of-Frames Representation from Tagged Music Data. ISMIR 2011: 85-90
- [c3] Ju-Chiang Wang, Yu-Chin Shih, Meng-Sung Wu, Hsin-Min Wang, Shyh-Kang Jeng: Colorizing Tags in Tag Cloud: A Novel Query-by-Tag Music Search System. ACM Multimedia 2011: 293-302
- 2010
- [c2] Chih-Yi Chiu, Dimitrios Bountouridis, Ju-Chiang Wang, Hsin-Min Wang: Background Music Identification through Content Filtering and Min-Hash Matching. ICASSP 2010: 2414-2417
- [c1] Hung-Yi Lo, Ju-Chiang Wang, Hsin-Min Wang: Homogeneous Segmentation and Classifier Ensemble for Audio Tag Annotation and Retrieval. ICME 2010: 304-309