2nd SustaiNLP@EMNLP 2021: Virtual
Nafise Sadat Moosavi, Iryna Gurevych, Angela Fan, Thomas Wolf, Yufang Hou, Ana Marasovic, Sujith Ravi (eds.):
Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing, SustaiNLP@EMNLP 2021, Virtual, November 10, 2021. Association for Computational Linguistics 2021, ISBN 978-1-955917-01-8

- Zachary Zhou, Jeffery Kline, Devin Conathan, Glenn Fung: Low Resource Quadratic Forms for Knowledge Graph Embeddings. 1-10
- Nesrine Bannour, Sahar Ghannay, Aurélie Névéol, Anne-Laure Ligozat: Evaluating the carbon footprint of NLP methods: a survey and analysis of existing tools. 11-21
- Saleh Soltan, Haidar Khan, Wael Hamza: Limitations of Knowledge Distillation for Zero-shot Transfer Learning. 22-31
- Sungho Jeon, Michael Strube: Countering the Influence of Essay Length in Neural Essay Scoring. 32-38
- Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonathan Berant: Memory-efficient Transformers via Top-k Attention. 39-52
- Yi Liu, Guoan Zhang, Puning Yu, Jianlin Su, Shengfeng Pan: BioCopy: A Plug-And-Play Span Copy Mechanism in Seq2Seq Models. 53-57
- Georgios Sidiropoulos, Nikos Voskarides, Svitlana Vakulenko, Evangelos Kanoulas: Combining Lexical and Dense Retrieval for Computationally Efficient Multi-hop Question Answering. 58-63
- Yue Zhang, Chengcheng Hu, Yuqi Liu, Hui Fang, Jimmy Lin: Learning to Rank in the Age of Muppets: Effectiveness-Efficiency Tradeoffs in Multi-Stage Ranking. 64-73
- Maria Glenski, William Sealy, Kate Miller, Dustin Arendt: Improving Synonym Recommendation Using Sentence Context. 74-78
- Gengyu Wang, Xiaochen Hou, Diyi Yang, Kathleen R. McKeown, Jing Huang: Semantic Categorization of Social Knowledge for Commonsense Question Answering. 79-85
- Lovre Torbarina, Velimir Mihelcic, Bruno Sarlija, Lukasz Roguski, Zeljko Kraljevic: Speeding Up Transformer Training By Using Dataset Subsampling - An Exploratory Analysis. 86-95
- Lucas Høyberg Puvis de Chavannes, Mads Guldborg Kjeldgaard Kongsbak, Timmie Rantzau, Leon Derczynski: Hyperparameter Power Impact in Transformer Language Model Training. 96-118
- Haoyu He, Xingjian Shi, Jonas Mueller, Sheng Zha, Mu Li, George Karypis: Distiller: A Systematic Study of Model Distillation Methods in Natural Language Processing. 119-133
- Zilun Peng, Akshay Budhkar, Ilana Tuil, Jason Levy, Parinaz Sobhani, Raphael Cohen, Jumana Nassour: Shrinking Bigfoot: Reducing wav2vec 2.0 footprint. 134-141
- Ameeta Agrawal, Suresh Singh, Lauren Schneider, Michael Samuels: On the Role of Corpus Ordering in Language Modeling. 142-154
- Vin Sachidananda, Jason S. Kessler, Yi'an Lai: Efficient Domain Adaptation of Language Models via Adaptive Tokenization. 155-165
- Ankur Gupta, Vivek Gupta: Unsupervised Contextualized Document Representation. 166-173
