Zhenyu Zhang 0015
Person information
- affiliation: University of Texas at Austin, TX, USA
- affiliation: University of Science and Technology of China
Other persons with the same name
- Zhenyu Zhang — disambiguation page
- Zhenyu Zhang 0001 — Chongqing University, College of Communication Engineering, China
- Zhenyu Zhang 0002 — Chongqing University, State Key Laboratory of Coal Mine Disaster Dynamics and Control, China
- Zhenyu Zhang 0003 — Xiangtan University, Department of Mathematics and Computational Science, China
- Zhenyu Zhang 0004 — Chinese Academy of Sciences, Institute of Software, State Key Laboratory of Computer Science, Beijing, China (and 2 more)
- Zhenyu Zhang 0005 — Nanjing University of Science and Technology, School of Computer Science and Engineering, China
- Zhenyu Zhang 0006 — Baidu Inc., Beijing, China (and 1 more)
- Zhenyu Zhang 0007 — University of Science and Technology Beijing, Beijing, China
- Zhenyu Zhang 0008 — Zhejiang University, Hangzhou Shi, China
- Zhenyu Zhang 0009 — San Diego State University, Department of Computer Science, CA, USA (and 1 more)
- Zhenyu Zhang 0010 — Tongji University, School of Economics and Management, Shanghai, China
- Zhenyu Zhang 0011 — Xinjiang University, Xinjiang, Urumqi, China
- Zhenyu Zhang 0012 — Harbin Engineering University, Harbin, China
- Zhenyu Zhang 0013 — Shanghai University, School of Computer Engineering and Science, China
- Zhenyu Zhang 0014 — Beihang University, School of Electronic and Information Engineering, China (and 1 more)
2020 – today
- 2024
- [c29] Zhen Tan, Tianlong Chen, Zhenyu Zhang, Huan Liu: Sparsity-Guided Holistic Explanation for LLMs with Interpretable Inference-Time Intervention. AAAI 2024: 21619-21627
- [c28] Pingzhi Li, Zhenyu Zhang, Prateek Yadav, Yi-Lin Sung, Yu Cheng, Mohit Bansal, Tianlong Chen: Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy. ICLR 2024
- [c27] Yuandong Tian, Yiping Wang, Zhenyu Zhang, Beidi Chen, Simon Shaolei Du: JoMA: Demystifying Multilayer Transformers via Joint Dynamics of MLP and Attention. ICLR 2024
- [c26] Yuxin Zhang, Yuxuan Du, Gen Luo, Yunshan Zhong, Zhenyu Zhang, Shiwei Liu, Rongrong Ji: CaM: Cache Merging for Memory-efficient LLMs Inference. ICML 2024
- [c25] Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, Yiling Jia, Gen Li, Ajay Kumar Jaiswal, Mykola Pechenizkiy, Yi Liang, Michael Bendersky, Zhangyang Wang, Shiwei Liu: Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity. ICML 2024
- [c24] Harry Dong, Xinyu Yang, Zhenyu Zhang, Zhangyang Wang, Yuejie Chi, Beidi Chen: Get More with LESS: Synthesizing Recurrence with KV Cache Compression for Efficient LLM Inference. ICML 2024
- [c23] Zhangheng Li, Shiwei Liu, Tianlong Chen, Ajay Kumar Jaiswal, Zhenyu Zhang, Dilin Wang, Raghuraman Krishnamoorthi, Shiyu Chang, Zhangyang Wang: Sparse Cocktail: Every Sparse Pattern Every Sparse Ratio All At Once. ICML 2024
- [c22] Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, Yuandong Tian: GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection. ICML 2024
- [c21] Zhenyu Zhang, Shiwei Liu, Runjin Chen, Bhavya Kailkhura, Beidi Chen, Atlas Wang: Q-Hitter: A Better Token Oracle for Efficient LLM Inference via Sparse-Quantized KV Cache. MLSys 2024
- [i25] Tianlong Chen, Zhenyu Zhang, Hanrui Wang, Jiaqi Gu, Zirui Li, David Z. Pan, Frederic T. Chong, Song Han, Zhangyang Wang: QuantumSEA: In-Time Sparse Exploration for Noise Adaptive Quantum Circuits. CoRR abs/2401.05571 (2024)
- [i24] Harry Dong, Xinyu Yang, Zhenyu Zhang, Zhangyang Wang, Yuejie Chi, Beidi Chen: Get More with LESS: Synthesizing Recurrence with KV Cache Compression for Efficient LLM Inference. CoRR abs/2402.09398 (2024)
- [i23] Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, Yuandong Tian: GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection. CoRR abs/2403.03507 (2024)
- [i22] Zhenyu Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, Zhangyang Wang: Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding. CoRR abs/2403.04797 (2024)
- [i21] Zhenyu Zhang, Ajay Jaiswal, Lu Yin, Shiwei Liu, Jiawei Zhao, Yuandong Tian, Zhangyang Wang: Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. CoRR abs/2407.08296 (2024)
- [i20] Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu, Jiawei Zhao, Yuandong Tian, Zhangyang Wang: From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. CoRR abs/2407.11239 (2024)
- 2023
- [c20] Zhangheng Li, Yu Gong, Zhenyu Zhang, Xingyun Xue, Tianlong Chen, Yi Liang, Bo Yuan, Zhangyang Wang: Accelerable Lottery Tickets with the Mixed-Precision Quantization. CVPR Workshops 2023: 4604-4612
- [c19] Tianlong Chen, Zhenyu Zhang, Ajay Kumar Jaiswal, Shiwei Liu, Zhangyang Wang: Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers. ICLR 2023
- [c18] Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Kumar Jaiswal, Zhangyang Wang: Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! ICLR 2023
- [c17] Ruisi Cai, Zhenyu Zhang, Zhangyang Wang: Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights? ICML 2023: 3495-3506
- [c16] Tianjin Huang, Lu Yin, Zhenyu Zhang, Li Shen, Meng Fang, Mykola Pechenizkiy, Zhangyang Wang, Shiwei Liu: Are Large Kernels Better Teachers than Transformers for ConvNets? ICML 2023: 14023-14038
- [c15] Zhenyu Zhang, Ying Sheng, Tianyi Zhou, Tianlong Chen, Lianmin Zheng, Ruisi Cai, Zhao Song, Yuandong Tian, Christopher Ré, Clark W. Barrett, Zhangyang Wang, Beidi Chen: H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. NeurIPS 2023
- [c14] Tianlong Chen, Zhenyu Zhang, Hanrui Wang, Jiaqi Gu, Zirui Li, David Z. Pan, Frederic T. Chong, Song Han, Zhangyang Wang: QuantumSEA: In-Time Sparse Exploration for Noise Adaptive Quantum Circuits. QCE 2023: 51-62
- [i19] Ruisi Cai, Zhenyu Zhang, Zhangyang Wang: Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights? CoRR abs/2302.12480 (2023)
- [i18] Tianlong Chen, Zhenyu Zhang, Ajay Jaiswal, Shiwei Liu, Zhangyang Wang: Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers. CoRR abs/2303.01610 (2023)
- [i17] Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Jaiswal, Zhangyang Wang: Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! CoRR abs/2303.02141 (2023)
- [i16] Tianjin Huang, Lu Yin, Zhenyu Zhang, Li Shen, Meng Fang, Mykola Pechenizkiy, Zhangyang Wang, Shiwei Liu: Are Large Kernels Better Teachers than Transformers for ConvNets? CoRR abs/2305.19412 (2023)
- [i15] Zhenyu Zhang, Ying Sheng, Tianyi Zhou, Tianlong Chen, Lianmin Zheng, Ruisi Cai, Zhao Song, Yuandong Tian, Christopher Ré, Clark W. Barrett, Zhangyang Wang, Beidi Chen: H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. CoRR abs/2306.14048 (2023)
- [i14] Yuandong Tian, Yiping Wang, Zhenyu Zhang, Beidi Chen, Simon S. Du: JoMA: Demystifying Multilayer Transformers via JOint Dynamics of MLP and Attention. CoRR abs/2310.00535 (2023)
- [i13] Pingzhi Li, Zhenyu Zhang, Prateek Yadav, Yi-Lin Sung, Yu Cheng, Mohit Bansal, Tianlong Chen: Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy. CoRR abs/2310.01334 (2023)
- [i12] Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, Yiling Jia, Mykola Pechenizkiy, Yi Liang, Zhangyang Wang, Shiwei Liu: Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity. CoRR abs/2310.05175 (2023)
- [i11] Zhen Tan, Tianlong Chen, Zhenyu Zhang, Huan Liu: Sparsity-Guided Holistic Explanation for LLMs with Interpretable Inference-Time Intervention. CoRR abs/2312.15033 (2023)
- 2022
- [j1] Tianlong Chen, Zhenyu Zhang, Jun Wu, Randy Huang, Sijia Liu, Shiyu Chang, Zhangyang Wang: Can You Win Everything with A Lottery Ticket? Trans. Mach. Learn. Res. 2022 (2022)
- [c13] Tianlong Chen, Zhenyu Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu, Zhangyang Wang: Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free. CVPR 2022: 588-599
- [c12] Tianlong Chen, Zhenyu Zhang, Yu Cheng, Ahmed Awadallah, Zhangyang Wang: The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy. CVPR 2022: 12010-12020
- [c11] Tianlong Chen, Zhenyu Zhang, Pengjun Wang, Santosh Balachandra, Haoyu Ma, Zehao Wang, Zhangyang Wang: Sparsity Winning Twice: Better Robust Generalization from More Efficient Training. ICLR 2022
- [c10] Tianlong Chen, Zhenyu Zhang, Sijia Liu, Yang Zhang, Shiyu Chang, Zhangyang Wang: Data-Efficient Double-Win Lottery Tickets from Robust Pre-training. ICML 2022: 3747-3759
- [c9] Tianlong Chen, Huan Zhang, Zhenyu Zhang, Shiyu Chang, Sijia Liu, Pin-Yu Chen, Zhangyang Wang: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness. ICML 2022: 3760-3772
- [c8] Ruisi Cai, Zhenyu Zhang, Tianlong Chen, Xiaohan Chen, Zhangyang Wang: Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets. NeurIPS 2022
- [c7] Mukund Varma T., Xuxi Chen, Zhenyu Zhang, Tianlong Chen, Subhashini Venugopalan, Zhangyang Wang: Sparse Winning Tickets are Data-Efficient Image Recognizers. NeurIPS 2022
- [i10] Tianlong Chen, Zhenyu Zhang, Pengjun Wang, Santosh Balachandra, Haoyu Ma, Zehao Wang, Zhangyang Wang: Sparsity Winning Twice: Better Robust Generalization from More Efficient Training. CoRR abs/2202.09844 (2022)
- [i9] Tianlong Chen, Zhenyu Zhang, Yu Cheng, Ahmed Awadallah, Zhangyang Wang: The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy. CoRR abs/2203.06345 (2022)
- [i8] Tianlong Chen, Zhenyu Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu, Zhangyang Wang: Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free. CoRR abs/2205.11819 (2022)
- [i7] Tianlong Chen, Zhenyu Zhang, Sijia Liu, Yang Zhang, Shiyu Chang, Zhangyang Wang: Data-Efficient Double-Win Lottery Tickets from Robust Pre-training. CoRR abs/2206.04762 (2022)
- [i6] Tianlong Chen, Huan Zhang, Zhenyu Zhang, Shiyu Chang, Sijia Liu, Pin-Yu Chen, Zhangyang Wang: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness. CoRR abs/2206.07839 (2022)
- [i5] Kaixiong Zhou, Zhenyu Zhang, Shengyuan Chen, Tianlong Chen, Xiao Huang, Zhangyang Wang, Xia Hu: QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks. CoRR abs/2211.07379 (2022)
- 2021
- [c6] Tianlong Chen, Zhenyu Zhang, Xu Ouyang, Zechun Liu, Zhiqiang Shen, Zhangyang Wang: "BNN - BN = ?": Training Binary Neural Networks Without Batch Normalization. CVPR Workshops 2021: 4619-4629
- [c5] Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, Zhangyang Wang: Robust Overfitting may be mitigated by properly learned smoothening. ICLR 2021
- [c4] Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, Zhangyang Wang: Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning. ICLR 2021
- [c3] Xuxi Chen, Zhenyu Zhang, Yongduo Sui, Tianlong Chen: GANs Can Play Lottery Tickets Too. ICLR 2021
- [c2] Zhenyu Zhang, Xuxi Chen, Tianlong Chen, Zhangyang Wang: Efficient Lottery Ticket Finding: Less Data is More. ICML 2021: 12380-12390
- [c1] Xuxi Chen, Tianlong Chen, Zhenyu Zhang, Zhangyang Wang: You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership. NeurIPS 2021: 1780-1791
- [i4] Tianlong Chen, Zhenyu Zhang, Xu Ouyang, Zechun Liu, Zhiqiang Shen, Zhangyang Wang: "BNN - BN = ?": Training Binary Neural Networks without Batch Normalization. CoRR abs/2104.08215 (2021)
- [i3] Xuxi Chen, Zhenyu Zhang, Yongduo Sui, Tianlong Chen: GANs Can Play Lottery Tickets Too. CoRR abs/2106.00134 (2021)
- [i2] Zhenyu Zhang, Xuxi Chen, Tianlong Chen, Zhangyang Wang: Efficient Lottery Ticket Finding: Less Data is More. CoRR abs/2106.03225 (2021)
- [i1] Xuxi Chen, Tianlong Chen, Zhenyu Zhang, Zhangyang Wang: You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership. CoRR abs/2111.00162 (2021)
last updated on 2024-11-14 20:57 CET by the dblp team
all metadata released as open data under CC0 1.0 license