Noah A. Smith
Person information

- affiliation: University of Washington, Seattle, WA, USA
- affiliation: Allen Institute for AI, Seattle, WA, USA
- affiliation: Carnegie Mellon University, Pittsburgh, USA
2020 – today
2023
- [i158] Haoxin Li, Phillip Keung, Daniel Cheng, Jungo Kasai, Noah A. Smith: NarrowBERT: Accelerating Masked Language Model Pretraining and Inference. CoRR abs/2301.04761 (2023)
- [i157] Yushi Hu, Benlin Liu, Jungo Kasai, Yizhong Wang, Mari Ostendorf, Ranjay Krishna, Noah A. Smith: TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering. CoRR abs/2303.11897 (2023)
- [i156] Suchin Gururangan, Margaret Li, Mike Lewis, Weijia Shi, Tim Althoff, Noah A. Smith, Luke Zettlemoyer: Scaling Expert Language Models with Unsupervised Domain Discovery. CoRR abs/2303.14177 (2023)
- [i155] Alisa Liu, Zhaofeng Wu, Julian Michael, Alane Suhr, Peter West, Alexander Koller, Swabha Swayamdipta, Noah A. Smith, Yejin Choi: We're Afraid Language Models Aren't Modeling Ambiguity. CoRR abs/2304.14399 (2023)
- [i154] Jiacheng Liu, Wenya Wang, Dianzhuo Wang, Noah A. Smith, Yejin Choi, Hannaneh Hajishirzi: Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements. CoRR abs/2305.03695 (2023)

2022
- [j31] William Merrill, Ashish Sabharwal, Noah A. Smith: Saturated Transformers are Constant-Depth Threshold Circuits. Trans. Assoc. Comput. Linguistics 10: 843-856 (2022)
- [c250] Yao Dou, Maxwell Forbes, Rik Koncel-Kedziorski, Noah A. Smith, Yejin Choi: Is GPT-3 Text Indistinguishable from Human Text? Scarecrow: A Framework for Scrutinizing Machine Text. ACL (1) 2022: 7250-7274
- [c249] Hao Peng, Jungo Kasai, Nikolaos Pappas, Dani Yogatama, Zhaofeng Wu, Lingpeng Kong, Roy Schwartz, Noah A. Smith: ABC: Attention with Bounded-memory Control. ACL (1) 2022: 7469-7483
- [c248] Tal August, Katharina Reinecke, Noah A. Smith: Generating Scientific Definitions with Controllable Complexity. ACL (1) 2022: 8298-8317
- [c247] Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, Tao Yu: UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models. EMNLP 2022: 602-631
- [c246] Michael Hassid, Hao Peng, Daniel Rotem, Jungo Kasai, Ivan Montero, Noah A. Smith, Roy Schwartz: How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers. EMNLP (Findings) 2022: 1403-1416
- [c245] Suchin Gururangan, Dallas Card, Sarah K. Dreier, Emily K. Gade, Leroy Z. Wang, Zeyu Wang, Luke Zettlemoyer, Noah A. Smith: Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection. EMNLP 2022: 2562-2580
- [c244] Yushi Hu, Chia-Hsuan Lee, Tianbao Xie, Tao Yu, Noah A. Smith, Mari Ostendorf: In-Context Learning for Few-Shot Dialogue State Tracking. EMNLP (Findings) 2022: 2627-2643
- [c243] Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Hao Peng, Ximing Lu, Dragomir Radev, Yejin Choi, Noah A. Smith: Twist Decoding: Diverse Generators Guide Each Other. EMNLP 2022: 4909-4923
- [c242] Bo-Ru Lu, Yushi Hu, Hao Cheng, Noah A. Smith, Mari Ostendorf: Unsupervised Learning of Hierarchical Conversation Structure. EMNLP (Findings) 2022: 5657-5670
- [c241] Alisa Liu, Swabha Swayamdipta, Noah A. Smith, Yejin Choi: WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation. EMNLP (Findings) 2022: 6826-6847
- [c240] Zhaofeng Wu, Hao Peng, Nikolaos Pappas, Noah A. Smith: Modeling Context With Linear Attention for Scalable Document-Level Translation. EMNLP (Findings) 2022: 6931-6939
- [c239] Daniel Khashabi, Gabriel Stanovsky, Jonathan Bragg, Nicholas Lourie, Jungo Kasai, Yejin Choi, Noah A. Smith, Daniel S. Weld: GENIE: Toward Reproducible and Standardized Human Evaluation for Text Generation. EMNLP 2022: 11444-11458
- [c238] Jesse Dodge, Taylor Prewitt, Remi Tachet des Combes, Erika Odmark, Roy Schwartz, Emma Strubell, Alexandra Sasha Luccioni, Noah A. Smith, Nicole DeCario, Will Buchanan: Measuring the Carbon Intensity of AI in Cloud Instances. FAccT 2022: 1877-1894
- [c237] Ofir Press, Noah A. Smith, Mike Lewis: Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. ICLR 2022
- [c236] Daniel Edmiston, Phillip Keung, Noah A. Smith: Domain Mismatch Doesn't Always Prevent Cross-lingual Transfer Learning. LREC 2022: 892-899
- [c235] Daniel Cheng, Kyle Yan, Phillip Keung, Noah A. Smith: The Engage Corpus: A Social Media Dataset for Text-Based Recommender Systems. LREC 2022: 1885-1889
- [c234] Ximing Lu, Sean Welleck, Peter West, Liwei Jiang, Jungo Kasai, Daniel Khashabi, Ronan Le Bras, Lianhui Qin, Youngjae Yu, Rowan Zellers, Noah A. Smith, Yejin Choi: NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics. NAACL-HLT 2022: 780-799
- [c233] Jungo Kasai, Keisuke Sakaguchi, Lavinia Dunagan, Jacob Morrison, Ronan Le Bras, Yejin Choi, Noah A. Smith: Transparent Human Evaluation for Image Captioning. NAACL-HLT 2022: 3464-3478
- [c232] Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Lavinia Dunagan, Jacob Morrison, Alexander R. Fabbri, Yejin Choi, Noah A. Smith: Bidimensional Leaderboards: Generate and Evaluate Language Hand in Hand. NAACL-HLT 2022: 3540-3557
- [c231] Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer: DEMix Layers: Disentangling Domains for Modular Language Modeling. NAACL-HLT 2022: 5557-5576
- [c230] Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, Noah A. Smith: Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection. NAACL-HLT 2022: 5884-5906
- [c229] Kelvin Luu, Daniel Khashabi, Suchin Gururangan, Karishma Mandyam, Noah A. Smith: Time Waits for No One! Analysis and Challenges of Temporal Misalignment. NAACL-HLT 2022: 5944-5958
- [i153] Maarten Sap, Anna Jafarpour, Yejin Choi, Noah A. Smith, James W. Pennebaker, Eric Horvitz: Computational Lens on Cognition: Study Of Autobiographical Versus Imagined Stories With Large-Scale Language Models. CoRR abs/2201.02662 (2022)
- [i152] Alisa Liu, Swabha Swayamdipta, Noah A. Smith, Yejin Choi: WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation. CoRR abs/2201.05955 (2022)
- [i151] Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir R. Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, Tao Yu: UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models. CoRR abs/2201.05966 (2022)
- [i150] Suchin Gururangan, Dallas Card, Sarah K. Dreier, Emily K. Gade, Leroy Z. Wang, Zeyu Wang, Luke Zettlemoyer, Noah A. Smith: Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection. CoRR abs/2201.10474 (2022)
- [i149] Yushi Hu, Chia-Hsuan Lee, Tianbao Xie, Tao Yu, Noah A. Smith, Mari Ostendorf: In-Context Learning for Few-Shot Dialogue State Tracking. CoRR abs/2203.08568 (2022)
- [i148] Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Dragomir R. Radev, Yejin Choi, Noah A. Smith: Beam Decoding with Controlled Patience. CoRR abs/2204.05424 (2022)
- [i147] Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddhartha Mishra, Sujan Reddy A, Sumanta Patro, Tanay Dixit, Xudong Shen, Chitta Baral, Yejin Choi, Hannaneh Hajishirzi, Noah A. Smith, Daniel Khashabi: Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks. CoRR abs/2204.07705 (2022)
- [i146] Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Hao Peng, Ximing Lu, Dragomir R. Radev, Yejin Choi, Noah A. Smith: Twist Decoding: Diverse Generators Guide Each Other. CoRR abs/2205.09273 (2022)
- [i145] Bo-Ru Lu, Yushi Hu, Hao Cheng, Noah A. Smith, Mari Ostendorf: Unsupervised Learning of Hierarchical Conversation Structure. CoRR abs/2205.12244 (2022)
- [i144] Jesse Dodge, Taylor Prewitt, Remi Tachet des Combes, Erika Odmark, Roy Schwartz, Emma Strubell, Alexandra Sasha Luccioni, Noah A. Smith, Nicole DeCario, Will Buchanan: Measuring the Carbon Intensity of AI in Cloud Instances. CoRR abs/2206.05229 (2022)
- [i143] Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir R. Radev, Noah A. Smith, Yejin Choi, Kentaro Inui: RealTime QA: What's the Answer Right Now? CoRR abs/2207.13332 (2022)
- [i142] Margaret Li, Suchin Gururangan, Tim Dettmers, Mike Lewis, Tim Althoff, Noah A. Smith, Luke Zettlemoyer: Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models. CoRR abs/2208.03306 (2022)
- [i141] Wenya Wang, Vivek Srikumar, Hanna Hajishirzi, Noah A. Smith: Elaboration-Generating Commonsense Question Answering at Scale. CoRR abs/2209.01232 (2022)
- [i140] Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu: Selective Annotation Makes Language Models Better Few-Shot Learners. CoRR abs/2209.01975 (2022)
- [i139] Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu: Binding Language Models in Symbolic Languages. CoRR abs/2210.02875 (2022)
- [i138] Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, Mike Lewis: Measuring and Narrowing the Compositionality Gap in Language Models. CoRR abs/2210.03350 (2022)
- [i137] Zhaofeng Wu, William Merrill, Hao Peng, Iz Beltagy, Noah A. Smith: Transparency Helps Reveal When Language Models Learn Meaning. CoRR abs/2210.07468 (2022)
- [i136] Zhaofeng Wu, Hao Peng, Nikolaos Pappas, Noah A. Smith: Modeling Context With Linear Attention for Scalable Document-Level Translation. CoRR abs/2210.08431 (2022)
- [i135] Michael Hassid, Hao Peng, Daniel Rotem, Jungo Kasai, Ivan Montero, Noah A. Smith, Roy Schwartz: How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers. CoRR abs/2211.03495 (2022)
- [i134] Yushi Hu, Hang Hua, Zhengyuan Yang, Weijia Shi, Noah A. Smith, Jiebo Luo: PromptCap: Prompt-Guided Task-Aware Image Captioning. CoRR abs/2211.09699 (2022)
- [i133] Daniel Edmiston, Phillip Keung, Noah A. Smith: Domain Mismatch Doesn't Always Prevent Cross-Lingual Transfer Learning. CoRR abs/2211.16671 (2022)
- [i132] Hamish Ivison, Noah A. Smith, Hannaneh Hajishirzi, Pradeep Dasigi: Data-Efficient Finetuning Using Cross-Task Nearest Neighbors. CoRR abs/2212.00196 (2022)
- [i131] Hila Gonen, Srini Iyer, Terra Blevins, Noah A. Smith, Luke Zettlemoyer: Demystifying Prompts in Language Models via Perplexity Estimation. CoRR abs/2212.04037 (2022)
- [i130] Hongjin Su, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A. Smith, Luke Zettlemoyer, Tao Yu: One Embedder, Any Task: Instruction-Finetuned Text Embeddings. CoRR abs/2212.09741 (2022)
- [i129] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, Hannaneh Hajishirzi: Self-Instruct: Aligning Language Model with Self Generated Instructions. CoRR abs/2212.10560 (2022)

2021
- [j30] Zhaofeng Wu, Hao Peng, Noah A. Smith: Infusing Finetuning with Semantic Dependencies. Trans. Assoc. Comput. Linguistics 9: 226-242 (2021)
- [j29] William Merrill, Yoav Goldberg, Roy Schwartz, Noah A. Smith: Provable Limitations of Acquiring Meaning from Ungrounded Form: What Will Future Language Models Understand? Trans. Assoc. Comput. Linguistics 9: 1047-1060 (2021)
- [c228] Alexander Miserlis Hoyle, Ana Marasovic, Noah A. Smith: Promoting Graph Awareness in Linearized Graph-to-Text Generation. ACL/IJCNLP (Findings) 2021: 944-956
- [c227] Kelvin Luu, Xinyi Wu, Rik Koncel-Kedziorski, Kyle Lo, Isabel Cachola, Noah A. Smith: Explaining Relationships Between Scientific Documents. ACL/IJCNLP (1) 2021: 2130-2144
- [c226] Ofir Press, Noah A. Smith, Mike Lewis: Shortformer: Better Language Modeling using Shorter Inputs. ACL/IJCNLP (1) 2021: 5493-5505
- [c225] Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, Yejin Choi: DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts. ACL/IJCNLP (1) 2021: 6691-6706
- [c224] Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, Noah A. Smith: All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text. ACL/IJCNLP (1) 2021: 7282-7296
- [c223] Rahul Nadkarni, David Wadden, Iz Beltagy, Noah A. Smith, Hannaneh Hajishirzi, Tom Hope: Scientific Language Models for Biomedical Knowledge Base Completion: An Empirical Study. AKBC 2021
- [c222] Xuhui Zhou, Maarten Sap, Swabha Swayamdipta, Yejin Choi, Noah A. Smith: Challenges in Automated Debiasing for Toxic Language Detection. EACL 2021: 3143-3155
- [c221] Zeyu Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi, Noah A. Smith: Probing Across Time: What Does RoBERTa Know and When? EMNLP (Findings) 2021: 820-842
- [c220] William Merrill, Vivek Ramanujan, Yoav Goldberg, Roy Schwartz, Noah A. Smith: Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent. EMNLP (1) 2021: 1766-1781
- [c219] Matt Gardner, William Merrill, Jesse Dodge, Matthew E. Peters, Alexis Ross, Sameer Singh, Noah A. Smith: Competency Problems: On Finding and Removing Artifacts in Language Data. EMNLP (1) 2021: 1801-1813
- [c218] Ivan Montero, Nikolaos Pappas, Noah A. Smith: Sentence Bottleneck Autoencoders from Transformer Language Models. EMNLP (1) 2021: 1822-1831
- [c217] Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, Noah A. Smith: Expected Validation Performance and Estimation of a Random Variable's Maximum. EMNLP (Findings) 2021: 4066-4073
- [c216] Sarah Wiegreffe, Ana Marasovic, Noah A. Smith: Measuring Association Between Labels and Free-Text Rationales. EMNLP (1) 2021: 10266-10284
- [c215] Jungo Kasai, Hao Peng, Yizhe Zhang, Dani Yogatama, Gabriel Ilharco, Nikolaos Pappas, Yi Mao, Weizhu Chen, Noah A. Smith: Finetuning Pretrained Transformers into RNNs. EMNLP (1) 2021: 10630-10643
- [c214] Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, Noah A. Smith: Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Translation. ICLR 2021
- [c213] Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith, Lingpeng Kong: Random Feature Attention. ICLR 2021
- [c212] Elizabeth Clark, Noah A. Smith: Choose Your Own Adventure: Paired Suggestions in Collaborative Writing for Evaluating Story Generation Models. NAACL-HLT 2021: 3566-3575
- [c211] Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A. Smith, Matt Gardner: A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers. NAACL-HLT 2021: 4599-4610
- [i128] Daniel Khashabi, Gabriel Stanovsky, Jonathan Bragg, Nicholas Lourie, Jungo Kasai, Yejin Choi, Noah A. Smith, Daniel S. Weld: GENIE: A Leaderboard for Human-in-the-Loop Evaluation of Text Generation. CoRR abs/2101.06561 (2021)
- [i127] Xuhui Zhou, Maarten Sap, Swabha Swayamdipta, Noah A. Smith, Yejin Choi: Challenges in Automated Debiasing for Toxic Language Detection. CoRR abs/2102.00086 (2021)
- [i126] Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith, Lingpeng Kong: Random Feature Attention. CoRR abs/2103.02143 (2021)
- [i125] Jungo Kasai, Hao Peng, Yizhe Zhang, Dani Yogatama, Gabriel Ilharco, Nikolaos Pappas, Yi Mao, Weizhu Chen, Noah A. Smith: Finetuning Pretrained Transformers into RNNs. CoRR abs/2103.13076 (2021)
- [i124] Leo Z. Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi, Noah A. Smith: Probing Across Time: What Does RoBERTa Know and When? CoRR abs/2104.07885 (2021)
- [i123] Matt Gardner, William Merrill, Jesse Dodge, Matthew E. Peters, Alexis Ross, Sameer Singh, Noah A. Smith: Competency Problems: On Finding and Removing Artifacts in Language Data. CoRR abs/2104.08646 (2021)
- [i122] Rik Koncel-Kedziorski, Noah A. Smith: Go Forth and Prosper: Language Modeling with Ancient Textual History. CoRR abs/2104.08742 (2021)
- [i121] William Merrill, Yoav Goldberg, Roy Schwartz, Noah A. Smith: Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand? CoRR abs/2104.10809 (2021)
- [i120] Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A. Smith, Matt Gardner: A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers. CoRR abs/2105.03011 (2021)
- [i119] Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, Yejin Choi: On-the-Fly Controlled Text Generation with Experts and Anti-Experts. CoRR abs/2105.03023 (2021)
- [i118] Ethan C. Chau, Noah A. Smith: Specializing Multilingual Language Models: An Empirical Study. CoRR abs/2106.09063 (2021)
- [i117] Rahul Nadkarni, David Wadden, Iz Beltagy, Noah A. Smith, Hannaneh Hajishirzi, Tom Hope: Scientific Language Models for Biomedical Knowledge Base Completion: An Empirical Study. CoRR abs/2106.09700 (2021)
- [i116] William Merrill, Yoav Goldberg, Roy Schwartz, Noah A. Smith: On the Power of Saturated Transformers: A View from Circuit Complexity. CoRR abs/2106.16213 (2021)
- [i115] Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, Noah A. Smith: All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text. CoRR abs/2107.00061 (2021)
- [i114] Yao Dou, Maxwell Forbes, Rik Koncel-Kedziorski, Noah A. Smith, Yejin Choi: Scarecrow: A Framework for Scrutinizing Machine Text. CoRR abs/2107.01294 (2021)
- [i113] Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer: DEMix Layers: Disentangling Domains for Modular Language Modeling. CoRR abs/2108.05036 (2021)
- [i112] Ofir Press, Noah A. Smith, Mike Lewis: Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. CoRR abs/2108.12409 (2021)
- [i111] Ivan Montero, Nikolaos Pappas, Noah A. Smith: Sentence Bottleneck Autoencoders from Transformer Language Models. CoRR abs/2109.00055 (2021)
- [i110] Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, Noah A. Smith: Expected Validation Performance and Estimation of a Random Variable's Maximum. CoRR abs/2110.00613 (2021)
- [i109] Hao Peng, Jungo Kasai, Nikolaos Pappas, Dani Yogatama, Zhaofeng Wu, Lingpeng Kong, Roy Schwartz, Noah A. Smith: ABC: Attention with Bounded-memory Control. CoRR abs/2110.02488 (2021)
- [i108] Kelvin Luu, Daniel Khashabi, Suchin Gururangan, Karishma Mandyam, Noah A. Smith: Time Waits for No One! Analysis and Challenges of Temporal Misalignment. CoRR abs/2111.07408 (2021)
- [i107] Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, Noah A. Smith: Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection. CoRR abs/2111.07997 (2021)
- [i106] Jungo Kasai, Keisuke Sakaguchi, Lavinia Dunagan, Jacob Morrison, Ronan Le Bras, Yejin Choi, Noah A. Smith: Transparent Human Evaluation for Image Captioning. CoRR abs/2111.08940 (2021)
- [i105] Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Lavinia Dunagan, Jacob Morrison, Alexander R. Fabbri, Yejin Choi, Noah A. Smith: Bidimensional Leaderboards: Generate and Evaluate Language Hand in Hand. CoRR abs/2112.04139 (2021)
- [i104] Ximing Lu, Sean Welleck, Peter West, Liwei Jiang, Jungo Kasai, Daniel Khashabi, Ronan Le Bras, Lianhui Qin, Youngjae Yu, Rowan Zellers, Noah A. Smith, Yejin Choi: NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics. CoRR abs/2112.08726 (2021)

2020
- [j28] Noah A. Smith: Contextual word representations: putting words into computers. Commun. ACM 63(6): 66-74 (2020)
- [j27] Roy Schwartz, Jesse Dodge, Noah A. Smith, Oren Etzioni: Green AI. Commun. ACM 63(12): 54-63 (2020)
- [j26] Marta R. Costa-jussà, Cristina España-Bonet, Pascale Fung, Noah A. Smith: Multilingual and Interlingual Semantic Representations for Natural Language Processing: A Brief Introduction. Comput. Linguistics 46(2): 249-255 (2020)
- [j25] Dallas Card, Noah A. Smith: On Consequentialism and Fairness. Frontiers Artif. Intell. 3: 34 (2020)
- [j24] Phillip Keung, Julian Salazar, Yichao Lu, Noah A. Smith: Unsupervised Bitext Mining and Translation via Self-trained Contextual Embeddings. Trans. Assoc. Comput. Linguistics 8: 828-841 (2020)
- [c210] Tal August, Maarten Sap, Elizabeth Clark, Katharina Reinecke, Noah A. Smith: Exploring the Effect of Author and Reader Identity in Online Story Writing: the STORIESINTHEWILD Corpus. NUSE@ACL 2020: 46-54
- [c209] William Merrill, Gail Weiss, Yoav Goldberg, Roy Schwartz, Noah A. Smith, Eran Yahav: A Formal Hierarchy of RNN Architectures. ACL 2020: 443-459
- [c208] Maarten Sap, Eric Horvitz, Yejin Choi, Noah A. Smith, James W. Pennebaker: Recollection versus Imagination: Exploring Human Memory and Cognition via Neural Language Models. ACL 2020: 1970-1978
- [c207] Ofir Press, Noah A. Smith, Omer Levy: Improving Transformer Models by Reordering their Sublayers. ACL 2020: 2996-3005
- [c206] Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, Yejin Choi: Social Bias Frames: Reasoning about Social and Power Implications of Language. ACL 2020: 5477-5490
- [c205] Hao Peng, Roy Schwartz, Dianqi Li, Noah A. Smith: A Mixture of h - 1 Heads is Better than h Heads. ACL 2020: 6566-6577
- [c204] Roy Schwartz, Gabriel Stanovsky, Swabha Swayamdipta, Jesse Dodge, Noah A. Smith: The Right Tool for the Job: Matching Model and Instance Complexities. ACL 2020: 6640-6651
- [c203] Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, Noah A. Smith: Don't Stop Pretraining: Adapt Language Models to Domains and Tasks. ACL 2020: 8342-8360
- [c202] Tal August, Dallas Card, Gary Hsieh, Noah A. Smith, Katharina Reinecke: Explain like I am a Scientist: The Linguistic Barriers of Entry to r/science. CHI 2020: 1-12
- [c201] Nikolaos Pappas, Phoebe Mulcaire, Noah A. Smith: Grounded Compositional Outputs for Adaptive Language Modeling. EMNLP (1) 2020: 1252-1267
- [c200] Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace,