


Mohit Bansal
2020 – today
- 2023
- [c208] Jie Lei, Tamara L. Berg, Mohit Bansal: Revealing Single Frame Bias for Video-and-Language Learning. ACL (1) 2023: 487-507
- [c207] Prateek Yadav, Mohit Bansal: Exclusive Supermask Subnetwork Training for Continual Learning. ACL (Findings) 2023: 569-587
- [c206] Prateek Yadav, Qing Sun, Hantian Ding, Xiaopeng Li, Dejiao Zhang, Ming Tan, Parminder Bhatia, Xiaofei Ma, Ramesh Nallapati, Murali Krishna Ramanathan, Mohit Bansal, Bing Xiang: Exploring Continual Learning for Code Generation Models. ACL (2) 2023: 782-792
- [c205] Shiyue Zhang, David Wan, Mohit Bansal: Extractive is not Faithful: An Investigation of Broad Unfaithfulness Problems in Extractive Summarization. ACL (1) 2023: 2153-2174
- [c204] Derek Tam, Anisha Mascarenhas, Shiyue Zhang, Sarah Kwan, Mohit Bansal, Colin Raffel: Evaluating the Factual Consistency of Large Language Models Through News Summarization. ACL (Findings) 2023: 5220-5255
- [c203] Yu Zhou, Sha Li, Manling Li, Xudong Lin, Shih-Fu Chang, Mohit Bansal, Heng Ji: Non-Sequential Graph Script Induction via Multimedia Grounding. ACL (1) 2023: 5529-5545
- [c202] Shiyue Zhang, Shijie Wu, Ozan Irsoy, Steven Lu, Mohit Bansal, Mark Dredze, David S. Rosenberg: MixCE: Training Autoregressive Language Models by Mixing Forward and Reverse Cross-Entropies. ACL (1) 2023: 9027-9050
- [c201] Swarnadeep Saha, Xinyan Yu, Mohit Bansal, Ramakanth Pasunuru, Asli Celikyilmaz: MURMUR: Modular Multi-Step Reasoning for Semi-Structured Data-to-Text Generation. ACL (Findings) 2023: 11069-11090
- [c200] Archiki Prasad, Trung Bui, Seunghyun Yoon, Hanieh Deilamsalehy, Franck Dernoncourt, Mohit Bansal: MeetingQA: Extractive Question-Answering on Meeting Transcripts. ACL (1) 2023: 15000-15025
- [c199] Yan-Bo Lin, Yi-Lin Sung, Jie Lei, Mohit Bansal, Gedas Bertasius: Vision Transformers are Parameter-Efficient Audio-Visual Learners. CVPR 2023: 2299-2309
- [c198] Feng Cheng, Xizi Wang, Jie Lei, David J. Crandall, Mohit Bansal, Gedas Bertasius: VindLU: A Recipe for Effective Video-and-Language Pretraining. CVPR 2023: 10739-10750
- [c197] Jialu Li, Mohit Bansal: Improving Vision-and-Language Navigation by Generating Future-View Image Semantics. CVPR 2023: 10803-10812
- [c196] Zineng Tang, Ziyi Yang, Guoxin Wang, Yuwei Fang, Yang Liu, Chenguang Zhu, Michael Zeng, Cha Zhang, Mohit Bansal: Unifying Vision, Text, and Layout for Universal Document Processing. CVPR 2023: 19254-19264
- [c195] Abhay Zala, Jaemin Cho, Satwik Kottur, Xilun Chen, Barlas Oguz, Yashar Mehdad, Mohit Bansal: Hierarchical Video-Moment Retrieval and Step-Captioning. CVPR 2023: 23056-23065
- [c194] Zixuan Zhang, Heba Elfardy, Markus Dreyer, Kevin Small, Heng Ji, Mohit Bansal: Enhancing Multi-Document Summarization with Cross-Document Graph-based Information Extraction. EACL 2023: 1688-1699
- [c193] Peter Hase, Mona T. Diab, Asli Celikyilmaz, Xian Li, Zornitsa Kozareva, Veselin Stoyanov, Mohit Bansal, Srinivasan Iyer: Methods for Measuring, Updating, and Visualizing Factual Beliefs in Language Models. EACL 2023: 2706-2723
- [c192] David Wan, Mengwen Liu, Kathleen R. McKeown, Markus Dreyer, Mohit Bansal: Faithfulness-Aware Decoding Strategies for Abstractive Summarization. EACL 2023: 2856-2872
- [c191] Yi Fung, Han Wang, Tong Wang, Ali Kebarighotbi, Mohit Bansal, Heng Ji, Prem Natarajan: DeepMaven: Deep Question Answering on Long-Distance Movie/TV Show Videos with Multimedia Knowledge Extraction and Synthesis. EACL 2023: 3033-3043
- [c190] Lisa Bauer, Hanna Tischer, Mohit Bansal: Social Commonsense for Explanation and Cultural Bias Discovery. EACL 2023: 3727-3742
- [c189] Archiki Prasad, Peter Hase, Xiang Zhou, Mohit Bansal: GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models. EACL 2023: 3827-3846
- [c188] Swarnadeep Saha, Shiyue Zhang, Peter Hase, Mohit Bansal: Summarization Programs: Interpretable Abstractive Summarization with Neural Modular Trees. ICLR 2023
- [c187] Jonathan Pilault, Can Liu, Mohit Bansal, Markus Dreyer: On Conditional and Compositional Language Model Differentiable Prompting. IJCAI 2023: 4136-4144
- [c186] Xiang Zhou, Aditya Gupta, Shyam Upadhyay, Mohit Bansal, Manaal Faruqui: Can Sequence-to-Sequence Transformers Naturally Understand Sequential Instructions? *SEM@ACL 2023: 527-534
- [c185] Zineng Tang, Jaemin Cho, Jie Lei, Mohit Bansal: PERCEIVER-VL: Efficient Vision-and-Language Modeling with Iterative Latent Attention. WACV 2023: 4399-4409
- [i207] Peter Hase, Mohit Bansal, Been Kim, Asma Ghandeharioun: Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models. CoRR abs/2301.04213 (2023)
- [i206] David Wan, Mengwen Liu, Kathleen R. McKeown, Markus Dreyer, Mohit Bansal: Faithfulness-Aware Decoding Strategies for Abstractive Summarization. CoRR abs/2303.03278 (2023)
- [i205] Adyasha Maharana, Amita Kamath, Christopher Clark, Mohit Bansal, Aniruddha Kembhavi: Exposing and Addressing Cross-Task Inconsistency in Unified Vision-Language Models. CoRR abs/2303.16133 (2023)
- [i204] Abhay Zala, Jaemin Cho, Satwik Kottur, Xilun Chen, Barlas Oguz, Yashar Mehdad, Mohit Bansal: Hierarchical Video-Moment Retrieval and Step-Captioning. CoRR abs/2303.16406 (2023)
- [i203] Jialu Li, Mohit Bansal: Improving Vision-and-Language Navigation by Generating Future-View Image Semantics. CoRR abs/2304.04907 (2023)
- [i202] Jaemin Cho, Linjie Li, Zhengyuan Yang, Zhe Gan, Lijuan Wang, Mohit Bansal: Diagnostic Benchmark and Iterative Inpainting for Layout-Guided Image Generation. CoRR abs/2304.06671 (2023)
- [i201] Archiki Prasad, Swarnadeep Saha, Xiang Zhou, Mohit Bansal: ReCEval: Evaluating Reasoning Chains via Correctness and Informativeness. CoRR abs/2304.10703 (2023)
- [i200] Yi-Lin Sung, Linjie Li, Kevin Lin, Zhe Gan, Mohit Bansal, Lijuan Wang: An Empirical Study of Multimodal Model Merging. CoRR abs/2304.14933 (2023)
- [i199] David Wan, Shiyue Zhang, Mohit Bansal: HistAlign: Improving Context Dependency in Language Generation by Aligning with History. CoRR abs/2305.04782 (2023)
- [i198] Shoubin Yu, Jaemin Cho, Prateek Yadav, Mohit Bansal: Self-Chained Image-Language Model for Video Localization and Question Answering. CoRR abs/2305.06988 (2023)
- [i197] Zhenhailong Wang, Ansel Blume, Sha Li, Genglin Liu, Jaemin Cho, Zineng Tang, Mohit Bansal, Heng Ji: Paxion: Patching Action Knowledge in Video-Language Foundation Models. CoRR abs/2305.10683 (2023)
- [i196] Zineng Tang, Ziyi Yang, Chenguang Zhu, Michael Zeng, Mohit Bansal: Any-to-Any Generation via Composable Diffusion. CoRR abs/2305.11846 (2023)
- [i195] Jaemin Cho, Abhay Zala, Mohit Bansal: Visual Programming for Text-to-Image Generation and Evaluation. CoRR abs/2305.15328 (2023)
- [i194] Shiyue Zhang, Shijie Wu, Ozan Irsoy, Steven Lu, Mohit Bansal, Mark Dredze, David S. Rosenberg: MixCE: Training Autoregressive Language Models by Mixing Forward and Reverse Cross-Entropies. CoRR abs/2305.16958 (2023)
- [i193] Yu Zhou, Sha Li, Manling Li, Xudong Lin, Shih-Fu Chang, Mohit Bansal, Heng Ji: Non-Sequential Graph Script Induction via Multimedia Grounding. CoRR abs/2305.17542 (2023)
- [i192] Jialu Li, Mohit Bansal: PanoGen: Text-Conditioned Panoramic Environment Generation for Vision-and-Language Navigation. CoRR abs/2305.19195 (2023)
- [i191] Prateek Yadav, Derek Tam, Leshem Choshen, Colin Raffel, Mohit Bansal: Resolving Interference When Merging Models. CoRR abs/2306.01708 (2023)
- [i190] Zhuofan Ying, Peter Hase, Mohit Bansal: Adaptive Contextual Perception: How to Generalize to New Backgrounds and Ambiguous Objects. CoRR abs/2306.05963 (2023)
- [i189] Swarnadeep Saha, Peter Hase, Mohit Bansal: Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Theory of Mind. CoRR abs/2306.09299 (2023)
- [i188] Jonathan Pilault, Can Liu, Mohit Bansal, Markus Dreyer: On Conditional and Compositional Language Model Differentiable Prompting. CoRR abs/2307.01446 (2023)
- [i187] Prateek Yadav, Qing Sun, Hantian Ding, Xiaopeng Li, Dejiao Zhang, Ming Tan, Xiaofei Ma, Parminder Bhatia, Ramesh Nallapati, Murali Krishna Ramanathan, Mohit Bansal, Bing Xiang: Exploring Continual Learning for Code Generation Models. CoRR abs/2307.02435 (2023)
- [i186] Zun Wang, Jialu Li, Yicong Hong, Yi Wang, Qi Wu, Mohit Bansal, Stephen Gould, Hao Tan, Yu Qiao: Scaling Data Generation in Vision-and-Language Navigation. CoRR abs/2307.15644 (2023)
- [i185] Ziyang Wang, Yi-Lin Sung, Feng Cheng, Gedas Bertasius, Mohit Bansal: Unified Coarse-to-Fine Alignment for Video-Text Retrieval. CoRR abs/2309.10091 (2023)
- [i184] Justin Chih-Yao Chen, Swarnadeep Saha, Mohit Bansal: ReConcile: Round-Table Conference Improves Reasoning via Consensus among Diverse LLMs. CoRR abs/2309.13007 (2023)
- [i183] Han Lin, Abhay Zala, Jaemin Cho, Mohit Bansal: VideoDirectorGPT: Consistent Multi-scene Video Generation via LLM-Guided Planning. CoRR abs/2309.15091 (2023)
- [i182] Vaidehi Patil, Peter Hase, Mohit Bansal: Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks. CoRR abs/2309.17410 (2023)
- [i181] Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao: Analyzing and Mitigating Object Hallucination in Large Vision-Language Models. CoRR abs/2310.00754 (2023)
- [i180] Pingzhi Li, Zhenyu Zhang, Prateek Yadav, Yi-Lin Sung, Yu Cheng, Mohit Bansal, Tianlong Chen: Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy. CoRR abs/2310.01334 (2023)
- [i179] Yi-Lin Sung, Jaehong Yoon, Mohit Bansal: ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models. CoRR abs/2310.02998 (2023)
- [i178] Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal: Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models. CoRR abs/2310.05861 (2023)
- [i177] Adyasha Maharana, Prateek Yadav, Mohit Bansal: D2 Pruning: Message Passing for Balancing Diversity and Difficulty in Data Pruning. CoRR abs/2310.07931 (2023)
- [i176] Leonardo F. R. Ribeiro, Mohit Bansal, Markus Dreyer: Generating Summaries with Controllable Readability Levels. CoRR abs/2310.10623 (2023)
- [i175] Abhay Zala, Han Lin, Jaemin Cho, Mohit Bansal: DiagrammerGPT: Generating Open-Domain, Open-Platform Diagrams via LLM Planning. CoRR abs/2310.12128 (2023)
- [i174] Swarnadeep Saha, Omer Levy, Asli Celikyilmaz, Mohit Bansal, Jason Weston, Xian Li: Branch-Solve-Merge Improves Large Language Model Evaluation and Generation. CoRR abs/2310.15123 (2023)
- [i173] Jaemin Cho, Yushi Hu, Roopal Garg, Peter Anderson, Ranjay Krishna, Jason Baldridge, Mohit Bansal, Jordi Pont-Tuset, Su Wang: Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation. CoRR abs/2310.18235 (2023)
- [i172] Xiang Zhou, Yichen Jiang, Mohit Bansal: Data Factors for Better Compositional Generalization. CoRR abs/2311.04420 (2023)
- [i171] Archiki Prasad, Alexander Koller, Mareike Hartmann, Peter Clark, Ashish Sabharwal, Mohit Bansal, Tushar Khot: ADaPT: As-Needed Decomposition and Planning with Language Models. CoRR abs/2311.05772 (2023)
- [i170] Xiaohui Zhang, Jaehong Yoon, Mohit Bansal, Huaxiu Yao: Multimodal Representation Learning by Alternating Unimodal Adaptation. CoRR abs/2311.10707 (2023)
- 2022
- [j8] Matthew Marge, Carol Y. Espy-Wilson, Nigel G. Ward, Abeer Alwan, Yoav Artzi, Mohit Bansal, Gilmer L. Blankenship, Joyce Chai, Hal Daumé III, Debadeepta Dey, Mary P. Harper, Thomas Howard, Casey Kennington, Ivana Kruijff-Korbayová, Dinesh Manocha, Cynthia Matuszek, Ross Mead, Raymond J. Mooney, Roger K. Moore, Mari Ostendorf, Heather Pon-Barry, Alexander I. Rudnicky, Matthias Scheutz, Robert St. Amant, Tong Sun, Stefanie Tellex, David R. Traum, Zhou Yu: Spoken language interaction with robots: Recommendations for future research. Comput. Speech Lang. 71: 101255 (2022)
- [j7] Aakash Gupta, Mohit Bansal: Evaluation and Ranking of E-Government Websites Using Weighted-Combinative Distance-Based Assessment Approach. Int. J. Softw. Innov. 10(1): 1-15 (2022)
- [c184] Hyounghun Kim, Doo Soon Kim, Seunghyun Yoon, Franck Dernoncourt, Trung Bui, Mohit Bansal: CAISE: Conversational Agent for Image Search and Editing. AAAI 2022: 10903-10911
- [c183] Revanth Gangi Reddy, Xilin Rui, Manling Li, Xudong Lin, Haoyang Wen, Jaemin Cho, Lifu Huang, Mohit Bansal, Avirup Sil, Shih-Fu Chang, Alexander G. Schwing, Heng Ji: MuMuQA: Multimedia Multi-Hop News Question Answering via Cross-Media Knowledge Extraction and Grounding. AAAI 2022: 11200-11208
- [c182] Hao Tan, Chen-Tse Tsai, Yujie He, Mohit Bansal: Scientific Chart Summarization: Datasets and Improved Text Modeling. SDU@AAAI 2022
- [c181] Xiang Zhou, Yixin Nie, Mohit Bansal: Distributed NLI: Learning to Predict Human Opinion Distributions for Language Reasoning. ACL (Findings) 2022: 972-987
- [c180] Swarnadeep Saha, Prateek Yadav, Mohit Bansal: Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning. ACL (1) 2022: 1190-1208
- [c179] Shiyue Zhang, Benjamin Frey, Mohit Bansal: How can NLP Help Revitalize Endangered Languages? A Case Study and Roadmap for the Cherokee Language. ACL (1) 2022: 1529-1541
- [c178] Hyounghun Kim, Aishwarya Padmakumar, Di Jin, Mohit Bansal, Dilek Hakkani-Tur: On the Limits of Evaluating Embodied Agent Model Generalization Using Validation Sets. Insights@ACL 2022: 113-118
- [c177] Shiyue Zhang, Vishrav Chaudhary, Naman Goyal, James Cross, Guillaume Wenzek, Mohit Bansal, Francisco Guzmán: How Robust is Neural Machine Translation to Language Imbalance in Multilingual Tokenizer Training? AMTA 2022: 97-116
- [c176] Danfeng Guo, Arpit Gupta, Sanchit Agarwal, Jiun-Yu Kao, Shuyang Gao, Arijit Biswas, Chien-Wei Lin, Tagyoung Chung, Mohit Bansal: GRAVL-BERT: Graphical Visual-Linguistic Representations for Multimodal Coreference Resolution. COLING 2022: 285-297
- [c175] Adyasha Maharana, Mohit Bansal: GraDA: Graph Generative Data Augmentation for Commonsense Reasoning. COLING 2022: 4499-4516
- [c174] Yi-Lin Sung, Jaemin Cho, Mohit Bansal: VL-ADAPTER: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks. CVPR 2022: 5217-5227
- [c173] Jialu Li, Hao Tan, Mohit Bansal: Envedit: Environment Editing for Vision-and-Language Navigation. CVPR 2022: 15386-15396
- [c172] Adyasha Maharana, Darryl Hannan, Mohit Bansal: StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation. ECCV (37) 2022: 70-87
- [c171] Yan-Bo Lin, Jie Lei, Mohit Bansal, Gedas Bertasius: EclipSE: Efficient Long-Range Video Retrieval Using Sight and Sound. ECCV (34) 2022: 413-430
- [c170] Swarnadeep Saha, Peter Hase, Nazneen Rajani, Mohit Bansal: Are Hard Examples also Harder to Explain? A Study with Human and Model-Generated Explanations. EMNLP 2022: 2121-2131
- [c169] Lisa Bauer, Karthik Gopalakrishnan, Spandana Gella, Yang Liu, Mohit Bansal, Dilek Hakkani-Tur: Analyzing the Limits of Self-Supervision in Handling Bias in Language. EMNLP (Findings) 2022: 7372-7386
- [c168] Arjun R. Akula, Spandana Gella, Aishwarya Padmakumar, Mahdi Namazifar, Mohit Bansal, Jesse Thomason, Dilek Hakkani-Tur: ALFRED-L: Investigating the Role of Language for Action Learning in Interactive Visual Environments. EMNLP 2022: 9369-9378
- [c167] David Wan, Mohit Bansal: Evaluating and Improving Factuality in Multimodal Abstractive Summarization. EMNLP 2022: 9632-9648
- [c166] Yichen Jiang, Xiang Zhou, Mohit Bansal: Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality. EMNLP 2022: 11778-11793
- [c165] Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, Kurt Keutzer: How Much Can CLIP Benefit Vision-and-Language Tasks? ICLR 2022
- [c164] Adyasha Maharana, Quan Hung Tran, Franck Dernoncourt, Seunghyun Yoon, Trung Bui, Walter Chang, Mohit Bansal: Multimodal Intent Discovery from Livestream Videos. NAACL-HLT (Findings) 2022: 476-489
- [c163] Jaemin Cho, Seunghyun Yoon, Ajinkya Kale, Franck Dernoncourt, Trung Bui, Mohit Bansal: Fine-grained Image Captioning with CLIP Reward. NAACL-HLT (Findings) 2022: 517-527
- [c162] Jialu Li, Hao Tan, Mohit Bansal: CLEAR: Improving Vision-Language Navigation with Cross-Lingual, Environment-Agnostic Representations. NAACL-HLT (Findings) 2022: 633-649
- [c161] Hyounghun Kim, Abhay Zala, Mohit Bansal: CoSIm: Commonsense Reasoning for Counterfactual Scene Imagination. NAACL-HLT 2022: 911-923
- [c160] Adyasha Maharana, Mohit Bansal: On Curriculum Learning for Commonsense Reasoning. NAACL-HLT 2022: 983-992
- [c159] David Wan, Mohit Bansal: FactPEGASUS: Factuality-Aware Pre-training and Fine-tuning for Abstractive Summarization. NAACL-HLT 2022: 1010-1028
- [c158] Xiang Zhou, Shiyue Zhang, Mohit Bansal: Masked Part-Of-Speech Model: Does Modeling Long Context Help Unsupervised POS-tagging? NAACL-HLT 2022: 1099-1114
- [c157] Arthur Brazinskas, Ramesh Nallapati, Mohit Bansal, Markus Dreyer: Efficient Few-Shot Fine-Tuning for Opinion Summarization. NAACL-HLT (Findings) 2022: 1509-1523
- [c156] Ori Ernst, Avi Caciularu, Ori Shapira, Ramakanth Pasunuru, Mohit Bansal, Jacob Goldberger, Ido Dagan: Proposition-Level Clustering for Multi-Document Summarization. NAACL-HLT 2022: 1765-1779
- [c155] Ori Shapira, Ramakanth Pasunuru, Mohit Bansal, Ido Dagan, Yael Amsterdamer: Interactive Query-Assisted Summarization via Deep Reinforcement Learning. NAACL-HLT 2022: 2551-2568
- [c154] Sha Li, Mahdi Namazifar, Di Jin, Mohit Bansal, Heng Ji, Yang Liu, Dilek Hakkani-Tur: Enhancing Knowledge Selection for Grounded Dialogues via Document Semantic Graphs. NAACL-HLT 2022: 2810-2823
- [c153] Leonardo F. R. Ribeiro, Mengwen Liu, Iryna Gurevych, Markus Dreyer, Mohit Bansal: FactGraph: Evaluating Factuality in Summarization with Semantic Graph Representations. NAACL-HLT 2022: 3238-3253
- [c152] Yonatan Bitton, Nitzan Bitton Guetta, Ron Yosef, Yuval Elovici, Mohit Bansal, Gabriel Stanovsky, Roy Schwartz: WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models. NeurIPS 2022
- [c151] Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, Colin Raffel: Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning. NeurIPS 2022
- [c150] Yi-Lin Sung, Jaemin Cho, Mohit Bansal: LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning. NeurIPS 2022
- [c149] Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal: TVLT: Textless Vision-Language Transformer. NeurIPS 2022
- [c148] Zhenhailong Wang, Manling Li, Ruochen Xu, Luowei Zhou, Jie Lei, Xudong Lin, Shuohang Wang, Ziyi Yang, Chenguang Zhu, Derek Hoiem, Shih-Fu Chang, Mohit Bansal, Heng Ji: Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners. NeurIPS 2022
- [c147] Zhuofan Ying, Peter Hase, Mohit Bansal: VisFIS: Visual Feature Importance Supervision with Right-for-the-Right-Reason Objectives. NeurIPS 2022
- [i169] Jaemin Cho, Abhay Zala, Mohit Bansal: DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generative Transformers. CoRR abs/2202.04053 (2022)
- [i168] Hyounghun Kim, Doo Soon Kim, Seunghyun Yoon, Franck Dernoncourt, Trung Bui, Mohit Bansal: CAISE: Conversational Agent for Image Search and Editing. CoRR abs/2202.11847 (2022)
- [i167] Jie Lei, Xinlei Chen, Ning Zhang, Mengjiao Wang, Mohit Bansal, Tamara L. Berg, Licheng Yu: LoopITR: Combining Dual and Cross Encoder Architectures for Image-Text Retrieval. CoRR abs/2203.05465 (2022)
- [i166] Archiki Prasad, Peter Hase, Xiang Zhou, Mohit Bansal: GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models. CoRR abs/2203.07281 (2022)
- [i165] Jialu Li, Hao Tan, Mohit Bansal: EnvEdit: Environment Editing for Vision-and-Language Navigation. CoRR abs/2203.15685 (2022)
- [i164] Yan-Bo Lin, Jie Lei, Mohit Bansal, Gedas Bertasius: ECLIPSE: Efficient Long-range Video Retrieval using Sight and Sound. CoRR abs/2204.02874 (2022)
- [i163] Swarnadeep Saha, Prateek Yadav, Mohit Bansal: Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning. CoRR abs/2204.04813 (2022)
- [i162] Leonardo F. R. Ribeiro, Mengwen Liu, Iryna Gurevych, Markus Dreyer, Mohit Bansal: FactGraph: Evaluating Factuality in Summarization with Semantic Graph Representations. CoRR abs/2204.06508 (2022)
- [i161] Shiyue Zhang, Benjamin Frey, Mohit Bansal: How can NLP Help Revitalize Endangered Languages? A Case Study and Roadmap for the Cherokee Language. CoRR abs/2204.11909 (2022)
- [i160] Shiyue Zhang, Vishrav Chaudhary, Naman Goyal, James Cross, Guillaume Wenzek, Mohit Bansal, Francisco Guzmán: How Robust is Neural Machine Translation to Language Imbalance in Multilingual Tokenizer Training? CoRR abs/2204.14268 (2022)
- [i159]