Wenhui Wang 0003
Person information
- affiliation: Microsoft Research, Beijing, China
Other persons with the same name
- Wenhui Wang — disambiguation page
- Wenhui Wang 0001 — University of Canterbury, Christchurch, New Zealand
- Wenhui Wang 0002 — Earth Resource Technology, Inc., College Park, MD, USA (and 2 more)
- Wenhui Wang 0004 — Beijing Jiaotong University, Beijing, China
2020 – today
- 2024
- [j3] Hangbo Bao, Li Dong, Wenhui Wang, Nan Yang, Songhao Piao, Furu Wei: Fine-tuning pretrained transformer encoders for sequence-to-sequence learning. Int. J. Mach. Learn. Cybern. 15(5): 1711-1728 (2024)
- [c23] Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Qixiang Ye, Furu Wei: Grounding Multimodal Large Language Models to the World. ICLR 2024
- [i27] Shuming Ma, Hongyu Wang, Lingxiao Ma, Lei Wang, Wenhui Wang, Shaohan Huang, Li Dong, Ruiping Wang, Jilong Xue, Furu Wei: The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits. CoRR abs/2402.17764 (2024)
- [i26] Xun Wu, Shaohan Huang, Wenhui Wang, Furu Wei: Multi-Head Mixture-of-Experts. CoRR abs/2404.15045 (2024)
- [i25] Yutao Sun, Li Dong, Yi Zhu, Shaohan Huang, Wenhui Wang, Shuming Ma, Quanlu Zhang, Jianyong Wang, Furu Wei: You Only Cache Once: Decoder-Decoder Architectures for Language Models. CoRR abs/2405.05254 (2024)
- 2023
- [c22] Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, Furu Wei: Image as a Foreign Language: BEiT Pretraining for Vision and Vision-Language Tasks. CVPR 2023: 19175-19186
- [c21] Hongyu Wang, Shuming Ma, Shaohan Huang, Li Dong, Wenhui Wang, Zhiliang Peng, Yu Wu, Payal Bajaj, Saksham Singhal, Alon Benhaim, Barun Patra, Zhun Liu, Vishrav Chaudhary, Xia Song, Furu Wei: Magneto: A Foundation Transformer. ICML 2023: 36077-36092
- [c20] Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Barun Patra, Qiang Liu, Kriti Aggarwal, Zewen Chi, Nils Johan Bertil Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, Furu Wei: Language Is Not All You Need: Aligning Perception with Language Models. NeurIPS 2023
- [i24] Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Barun Patra, Qiang Liu, Kriti Aggarwal, Zewen Chi, Johan Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, Furu Wei: Language Is Not All You Need: Aligning Perception with Language Models. CoRR abs/2302.14045 (2023)
- [i23] Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei: Kosmos-2: Grounding Multimodal Large Language Models to the World. CoRR abs/2306.14824 (2023)
- [i22] Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei: LongNet: Scaling Transformers to 1,000,000,000 Tokens. CoRR abs/2307.02486 (2023)
- [i21] Tengchao Lv, Yupan Huang, Jingye Chen, Lei Cui, Shuming Ma, Yaoyao Chang, Shaohan Huang, Wenhui Wang, Li Dong, Weiyao Luo, Shaoxiang Wu, Guoxin Wang, Cha Zhang, Furu Wei: Kosmos-2.5: A Multimodal Literate Model. CoRR abs/2309.11419 (2023)
- [i20] Wenhui Wang, Shuming Ma, Hanwen Xu, Naoto Usuyama, Jiayu Ding, Hoifung Poon, Furu Wei: When an Image is Worth 1,024 x 1,024 Words: A Case Study in Computational Pathology. CoRR abs/2312.03558 (2023)
- 2022
- [c19] Zekun Wang, Wenhui Wang, Haichao Zhu, Ming Liu, Bing Qin, Furu Wei: Distilled Dual-Encoder Model for Vision-Language Understanding. EMNLP 2022: 8901-8913
- [c18] Hangbo Bao, Wenhui Wang, Li Dong, Qiang Liu, Owais Khan Mohammed, Kriti Aggarwal, Subhojit Som, Songhao Piao, Furu Wei: VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts. NeurIPS 2022
- [c17] Dongkuan Xu, Subhabrata Mukherjee, Xiaodong Liu, Debadeepta Dey, Wenhui Wang, Xiang Zhang, Ahmed Hassan Awadallah, Jianfeng Gao: Few-shot Task-agnostic Neural Architecture Search for Distilling Large Language Models. NeurIPS 2022
- [i19] Dongkuan Xu, Subhabrata Mukherjee, Xiaodong Liu, Debadeepta Dey, Wenhui Wang, Xiang Zhang, Ahmed Hassan Awadallah, Jianfeng Gao: AutoDistil: Few-shot Task-agnostic Neural Architecture Search for Distilling Large Language Models. CoRR abs/2201.12507 (2022)
- [i18] Hangbo Bao, Wenhui Wang, Li Dong, Furu Wei: VL-BEiT: Generative Vision-Language Pretraining. CoRR abs/2206.01127 (2022)
- [i17] Yaru Hao, Haoyu Song, Li Dong, Shaohan Huang, Zewen Chi, Wenhui Wang, Shuming Ma, Furu Wei: Language Models are General-Purpose Interfaces. CoRR abs/2206.06336 (2022)
- [i16] Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, Furu Wei: Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks. CoRR abs/2208.10442 (2022)
- [i15] Hongyu Wang, Shuming Ma, Shaohan Huang, Li Dong, Wenhui Wang, Zhiliang Peng, Yu Wu, Payal Bajaj, Saksham Singhal, Alon Benhaim, Barun Patra, Zhun Liu, Vishrav Chaudhary, Xia Song, Furu Wei: Foundation Transformers. CoRR abs/2210.06423 (2022)
- [i14] Shuming Ma, Hongyu Wang, Shaohan Huang, Wenhui Wang, Zewen Chi, Li Dong, Alon Benhaim, Barun Patra, Vishrav Chaudhary, Xia Song, Furu Wei: TorchScale: Transformers at Scale. CoRR abs/2211.13184 (2022)
- 2021
- [c16] Yunzhi Yao, Shaohan Huang, Wenhui Wang, Li Dong, Furu Wei: Adapt-and-Distill: Developing Small, Fast and Effective Pretrained Language Models for Domains. ACL/IJCNLP (Findings) 2021: 460-470
- [c15] Wenhui Wang, Hangbo Bao, Shaohan Huang, Li Dong, Furu Wei: MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers. ACL/IJCNLP (Findings) 2021: 2140-2151
- [c14] Bo Zheng, Li Dong, Shaohan Huang, Wenhui Wang, Zewen Chi, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, Furu Wei: Consistency Regularization for Cross-Lingual Fine-Tuning. ACL/IJCNLP (1) 2021: 3403-3417
- [c13] Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, Ming Zhou: InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training. NAACL-HLT 2021: 3576-3588
- [i13] Bo Zheng, Li Dong, Shaohan Huang, Wenhui Wang, Zewen Chi, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, Furu Wei: Consistency Regularization for Cross-Lingual Fine-Tuning. CoRR abs/2106.08226 (2021)
- [i12] Yunzhi Yao, Shaohan Huang, Wenhui Wang, Li Dong, Furu Wei: Adapt-and-Distill: Developing Small, Fast and Effective Pretrained Language Models for Domains. CoRR abs/2106.13474 (2021)
- [i11] Hangbo Bao, Li Dong, Wenhui Wang, Nan Yang, Furu Wei: s2s-ft: Fine-Tuning Pretrained Transformer Encoders for Sequence-to-Sequence Learning. CoRR abs/2110.13640 (2021)
- [i10] Wenhui Wang, Hangbo Bao, Li Dong, Furu Wei: VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts. CoRR abs/2111.02358 (2021)
- [i9] Zekun Wang, Wenhui Wang, Haichao Zhu, Ming Liu, Bing Qin, Furu Wei: Distilled Dual-Encoder Model for Vision-Language Understanding. CoRR abs/2112.08723 (2021)
- 2020
- [c12] Zewen Chi, Li Dong, Furu Wei, Wenhui Wang, Xian-Ling Mao, Heyan Huang: Cross-Lingual Natural Language Generation via Pre-Training. AAAI 2020: 7570-7577
- [c11] Zhongli Li, Wenhui Wang, Li Dong, Furu Wei, Ke Xu: Harvesting and Refining Question-Answer Pairs for Unsupervised QA. ACL 2020: 6719-6728
- [c10] Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Jianfeng Gao, Songhao Piao, Ming Zhou, Hsiao-Wuen Hon: UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training. ICML 2020: 642-652
- [c9] Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, Ming Zhou: MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers. NeurIPS 2020
- [i8] Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, Ming Zhou: MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers. CoRR abs/2002.10957 (2020)
- [i7] Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Songhao Piao, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon: UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training. CoRR abs/2002.12804 (2020)
- [i6] Zhongli Li, Wenhui Wang, Li Dong, Furu Wei, Ke Xu: Harvesting and Refining Question-Answer Pairs for Unsupervised QA. CoRR abs/2005.02925 (2020)
- [i5] Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, Ming Zhou: InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training. CoRR abs/2007.07834 (2020)
- [i4] Wenhui Wang, Hangbo Bao, Shaohan Huang, Li Dong, Furu Wei: MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers. CoRR abs/2012.15828 (2020)
2010 – 2019
- 2019
- [c8] Haichao Zhu, Li Dong, Furu Wei, Wenhui Wang, Bing Qin, Ting Liu: Learning to Ask Unanswerable Questions for Machine Reading Comprehension. ACL (1) 2019: 4238-4248
- [c7] Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Lei Cui, Songhao Piao, Ming Zhou: Inspecting Unification of Encoding and Matching with Transformer: A Case Study of Machine Reading Comprehension. MRQA@EMNLP 2019: 14-18
- [c6] Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon: Unified Language Model Pre-training for Natural Language Understanding and Generation. NeurIPS 2019: 13042-13054
- [i3] Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon: Unified Language Model Pre-training for Natural Language Understanding and Generation. CoRR abs/1905.03197 (2019)
- [i2] Haichao Zhu, Li Dong, Furu Wei, Wenhui Wang, Bing Qin, Ting Liu: Learning to Ask Unanswerable Questions for Machine Reading Comprehension. CoRR abs/1906.06045 (2019)
- [i1] Zewen Chi, Li Dong, Furu Wei, Wenhui Wang, Xianling Mao, Heyan Huang: Cross-Lingual Natural Language Generation via Pre-Training. CoRR abs/1909.10481 (2019)
- 2018
- [c5] Wenhui Wang, Baobao Chang, Mairgup Mansur: Improved Dependency Parsing using Implicit Word Connections Learned from Unlabeled Data. EMNLP 2018: 2857-2863
- [c4] Chuanqi Tan, Furu Wei, Wenhui Wang, Weifeng Lv, Ming Zhou: Multiway Attention Networks for Modeling Sentence Pairs. IJCAI 2018: 4411-4417
- 2017
- [c3] Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, Ming Zhou: Gated Self-Matching Networks for Reading Comprehension and Question Answering. ACL (1) 2017: 189-198
- 2016
- [c2] Wenhui Wang, Baobao Chang: Graph-based Dependency Parsing with Bidirectional LSTM. ACL (1) 2016
- [c1] Wenhui Wang, Baobao Chang: Improved Graph-Based Dependency Parsing via Hierarchical LSTM Networks. CCL 2016: 25-32
- 2014
- [j2] Wenhui Wang, Sen Yang, Xiang Zhang, Jing Li: Drug repositioning by integrating target information through a heterogeneous network model. Bioinform. 30(20): 2923-2930 (2014)
- 2013
- [j1] Wenhui Wang, Xiaolin Yin, Yoon Soo Pyon, Matthew Hayes, Jing Li: Rare variant discovery and calling by sequencing pooled samples with overlaps. Bioinform. 29(1): 29-38 (2013)
last updated on 2024-10-23 20:36 CEST by the dblp team