


4th BlackboxNLP@EMNLP 2021: Punta Cana, Dominican Republic
- Jasmijn Bastings, Yonatan Belinkov, Emmanuel Dupoux, Mario Giulianelli, Dieuwke Hupkes, Yuval Pinter, Hassan Sajjad: Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP 2021, Punta Cana, Dominican Republic, November 11, 2021. Association for Computational Linguistics 2021, ISBN 978-1-955917-06-3
- Grusha Prasad, Yixin Nie, Mohit Bansal, Robin Jia, Douwe Kiela, Adina Williams: To what extent do human explanations of model behavior align with actual model behavior? 1-14
- Jenny Kunz, Marco Kuhlmann: Test Harder than You Train: Probing with Extrapolation Splits. 15-25
- Hendrik Schuff, Hsiu-Yu Yang, Heike Adel, Ngoc Thang Vu: Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings. 26-41
- Laura Aina, Tal Linzen: The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation. 42-57
- Jannis Vamvas, Rico Sennrich: On the Limits of Minimal Pairs in Contrastive Evaluation. 58-68
- Zhouhang Xie, Jonathan Brophy, Adam Noack, Wencong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Zayd Hammoudeh, Daniel Lowd, Sameer Singh: What Models Know About Their Attackers: Deriving Attacker Information From Latent Representations. 69-78
- Marianna Apidianaki, Aina Garí Soler: ALL Dolphins Are Intelligent and SOME Are Friendly: Probing BERT for Nouns' Semantic Properties and their Prototypicality. 79-94
- Tessa Masis, Carolyn Jane Anderson: ProSPer: Probing Human and Neural Network Language Model Understanding of Spatial Perspective. 95-135
- Rahma Chaabouni, Roberto Dessì, Eugene Kharitonov: Can Transformers Jump Around Right in Natural Language? Assessing Performance Transfer from SCAN. 136-148
- Tobias Norlund, Lovisa Hagström, Richard Johansson: Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it? 149-162
- Bertrand Higy, Lieke Gelderloos, Afra Alishahi, Grzegorz Chrupala: Discrete representations in neural models of spoken language. 163-176
- Adly Templeton: Word Equations: Inherently Interpretable Sparse Word Embeddings through Sparse Coding. 177-191
- Paolo Pedinotti, Eliana Di Palma, Ludovica Cerini, Alessandro Lenci: A howling success or a working sea? Testing what BERT knows about metaphors. 192-204
- Minghan Wang, Jiaxin Guo, Yuxia Wang, Yimeng Chen, Chang Su, Hengchao Shang, Min Zhang, Shimin Tao, Hao Yang: How Length Prediction Influence the Performance of Non-Autoregressive Translation? 205-213
- Marc Tanti, Lonneke van der Plas, Claudia Borg, Albert Gatt: On the Language-specificity of Multilingual BERT and the Impact of Fine-tuning. 214-227
- Ting-Rui Chiang, Yun-Nung Chen: Relating Neural Text Degeneration to Exposure Bias. 228-239
- Robert Schwarzenberg, Nils Feldhus, Sebastian Möller: Efficient Explanations from Empirical Explainers. 240-249
- Qinxuan Wu, Allyson Ettinger: Variation and generality in encoding of syntactic anomaly information in sentence embeddings. 250-264
- Rohan Kumar Yadav, Lei Jiao, Ole-Christoffer Granmo, Morten Goodwin: Enhancing Interpretable Clauses Semantically using Pretrained Word Representation. 265-274
- Michael Hanna, David Marecek: Analyzing BERT's Knowledge of Hypernymy via Prompting. 275-282
- Federico Fancellu, Lan Xiao, Allan D. Jepson, Afsaneh Fazly: An in-depth look at Euclidean disk embeddings for structure preserving parsing. 283-295
- Arka Talukdar, Monika Dagar, Prachi Gupta, Varun Menon: Training Dynamic based data filtering may not work for NLP datasets. 296-302
- Lis Kanashiro Pereira, Yuki Taya, Ichiro Kobayashi: Multi-Layer Random Perturbation Training for improving Model Generalization Efficiently. 303-310
- Guillaume Wisniewski, Lichao Zhu, Nicolas Ballier, François Yvon: Screening Gender Transfer in Neural Machine Translation. 311-321
- Ayush Kumar, Mukuntha Narayanan Sundararaman, Jithendra Vepa: What BERT Based Language Model Learns in Spoken Transcripts: An Empirical Study. 322-336
- Hitomi Yanaka, Koji Mineshima: Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference. 337-349
- Radina Dobreva, Frank Keller: Investigating Negation in Pre-trained Vision-and-language Models. 350-362
- Nikolay Bogoychev: Not all parameters are born equal: Attention is mostly what you need. 363-374
- Mohsen Fayyaz, Ehsan Aghazadeh, Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar: Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations. 375-388
- Maria Ryskina, Kevin Knight: Learning Mathematical Properties of Integers. 389-395
- Shivin Thukral, Kunal Kukreja, Christian Kavouras: Probing Language Models for Understanding of Temporal Expressions. 396-406
- Badr Abdullah, Iuliia Zaitova, Tania Avgustinova, Bernd Möbius, Dietrich Klakow: How Familiar Does That Sound? Cross-Lingual Representational Similarity Analysis of Acoustic Word Embeddings. 407-419
- Sanchit Sinha, Hanjie Chen, Arshdeep Sekhon, Yangfeng Ji, Yanjun Qi: Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing. 420-434
- Samuel Stevens, Yu Su: An Investigation of Language Model Interpretability via Sentence Editing. 435-446
- Parsa Bagherzadeh, Sabine Bergler: Interacting Knowledge Sources, Inspection and Analysis: Case-studies on Biomedical text processing. 447-456
- Anahita Samadi, Debapriya Banerjee, Shirin Nilizadeh: Attacks against Ranking Algorithms with Text Embeddings: A Case Study on Recruitment Algorithms. 457-467
- Ionut-Teodor Sorodoc, Gemma Boleda, Marco Baroni: Controlled tasks for model analysis: Retrieving discrete information from sequences. 468-478
- Héctor Vázquez Martínez: The Acceptability Delta Criterion: Testing Knowledge of Language using the Gradience of Sentence Acceptability. 479-495
- Zhiying Jiang, Raphael Tang, Ji Xin, Jimmy Lin: How Does BERT Rerank Passages? An Attribution Analysis with Information Bottlenecks. 496-509
- Bastien Liétard, Mostafa Abdou, Anders Søgaard: Do Language Models Know the Way to Rome? 510-517
- Daisuke Oba, Naoki Yoshinaga, Masashi Toyoda: Exploratory Model Analysis Using Data-Driven Neuron Representations. 518-528
- Jason Phang, Haokun Liu, Samuel R. Bowman: Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers. 529-538
- Luke Gessler, Nathan Schneider: BERT Has Uncommon Sense: Similarity Ranking for Word Sense BERTology. 539-547
