


Judith E. Fan
2020 – today
- 2025
[j3]Felix J. Binder, Marcelo Gomes Mattar, David Kirsh, Judith E. Fan:
Humans Select Subgoals That Balance Immediate and Future Cognitive Costs During Physical Assembly. Cogn. Sci. 49(11) (2025)
[c59]Allison Chen, Sunnie S. Y. Kim, Amaya Dharmasiri, Olga Russakovsky, Judith E. Fan:
Portraying Large Language Models as Machines, Tools, or Companions Affects What Mental Capacities Humans Attribute to Them. CHI Extended Abstracts 2025: 440:1-440:14
[c58]Sean P. Anderson, Lionel Wong, Maddy Bowers, Judith E. Fan:
Consequences of prior experience on visual problem solving. CogSci 2025
[c57]Erik Brockbank, Tobias Gerstenberg, Judith E. Fan, Robert D. Hawkins:
How do we get to know someone? Diagnostic questions for inferring personal traits. CogSci 2025
[c56]Allison Chen, Sunnie S. Y. Kim, Amaya Dharmasiri, Olga Russakovsky, Judith E. Fan:
Portraying Large Language Models as Machines, Tools, or Companions Affects What Mental Capacities People Attribute to Them. CogSci 2025
[c55]Junyi Chu, Arnav Verma, Guy Davidson, Robbie Fraser, Judith E. Fan:
Minds in the Making: Cognitive Science and Design Thinking. CogSci 2025
[c54]Junyi Chu, Kristine Zheng, Judith E. Fan:
What makes people think a puzzle is fun to solve? CogSci 2025
[c53]Abdul-Rahim Deeb, Kevin A. Smith, Shari Liu, Judith E. Fan:
Perception as a Foundation for Common-Sense Theories of the World. CogSci 2025
[c52]Judith E. Fan, Kristine Zheng, Benjamin A. Motz, Shayan Doroudi, Ji Son, Candace Thille:
Minds at School: Advancing cognitive science by measuring and modeling human learning in situ. CogSci 2025
[c51]Kiyosu Maeda, Ching-Yi Tsai, Judith E. Fan, Parastoo Abtahi:
Using Gesture and Language to Establish Multimodal Conventions in Collaborative Physical Tasks. CogSci 2025
[c50]Shawn T. Schwartz, Kristine Zheng, Judith E. Fan:
Measuring sustained attention across timescales to predict learning in real-world environments. CogSci 2025
[c49]Alexa R. Tartaglini, Christopher Potts, Judith E. Fan:
Exploring the mechanisms that enable multimodal reasoning about data visualizations in vision-language models. CogSci 2025
[c48]Arnav Verma, Judith E. Fan:
Measuring and predicting variation in the difficulty of questions about data visualizations. CogSci 2025
[c47]Kristine Zheng, Erik Brockbank, Shawn T. Schwartz, David Yeager, Christopher Bryan, Carol S. Dweck, Judith E. Fan:
Linking student psychological orientation, engagement, and learning in college-level introductory data science. CogSci 2025
[c46]Rebecca Zhu, Tabitha Nduku, Joab Ochieng Arieda, Arnav Verma, Judith E. Fan, Michael C. Frank:
Investigating children's performance on object- and picture-based vocabulary assessments in global contexts: Evidence from Kisumu, Kenya. CogSci 2025
[c45]Yael Vinker, Tamar Rott Shaham, Kristine Zheng, Alex Zhao, Judith E. Fan, Antonio Torralba:
SketchAgent: Language-Driven Sequential Sketch Generation. CVPR 2025: 23355-23368
[c44]William P. McCarthy, Saujas Vaduguru, Karl D. D. Willis, Justin Matejka, Judith E. Fan, Daniel Fried, Yewen Pu:
mrCAD: Multimodal Communication to Refine Computer-aided Designs. EMNLP (Findings) 2025: 22905-22921
[i19]William P. McCarthy, Saujas Vaduguru, Karl D. D. Willis, Justin Matejka, Judith E. Fan, Daniel Fried, Yewen Pu:
mrCAD: Multimodal Refinement of Computer-aided Designs. CoRR abs/2504.20294 (2025)
[i18]Arnav Verma, Judith E. Fan:
Measuring and predicting variation in the difficulty of questions about data visualizations. CoRR abs/2505.08031 (2025)
[i17]Arnav Verma, Kushin Mukherjee, Christopher Potts, Elisa Kreiss, Judith E. Fan:
CHART-6: Human-Centered Evaluation of Data Visualization Understanding in Vision-Language Models. CoRR abs/2505.17202 (2025)
[i16]Logan Matthew Cross, Erik Brockbank, Tobias Gerstenberg, Judith E. Fan, Daniel L. K. Yamins, Nick Haber:
Understanding Human Limits in Pattern Recognition: A Computational Model of Sequential Reasoning in Rock, Paper, Scissors. CoRR abs/2508.06503 (2025)
[i15]Allison Chen, Sunnie S. Y. Kim, Angel Franyutti, Amaya Dharmasiri, Kushin Mukherjee, Olga Russakovsky, Judith E. Fan:
Presenting Large Language Models as Companions Affects What Mental Capacities People Attribute to Them. CoRR abs/2510.18039 (2025)
[i14]Alexa R. Tartaglini, Satchel Grant, Daniel Wurgaft, Christopher Potts, Judith E. Fan:
Diagnosing Bottlenecks in Data Visualization Understanding by Vision-Language Models. CoRR abs/2510.21740 (2025)
- 2024
[c43]William P. McCarthy, Justin Matejka, Karl D. D. Willis, Judith E. Fan, Yewen Pu:
Communicating Design Intent Using Drawing and Text. Creativity & Cognition 2024: 512-519
[c42]Holly Huey, Mackenzie Leake, Deepali Aneja, Matthew Fisher, Judith E. Fan:
How do video content creation goals impact which concepts people prioritize for generating B-roll imagery? Creativity & Cognition 2024: 542-549
[c41]Erik Brockbank, Justin Yang, Mishika Govil, Judith E. Fan, Tobias Gerstenberg:
Without his cookies, he's just a monster: a counterfactual simulation model of social explanation. CogSci 2024
[c40]Kartik Chandra, Anne H. K. Harrington, Katherine M. Collins, Christopher J. Kymn, Kushin Mukherjee, Sean P. Anderson, Arnav Verma, Judith E. Fan:
COGGRAPH: Building bridges between cognitive science and computer graphics. CogSci 2024
[c39]William P. McCarthy, Sean P. Anderson, Judith E. Fan:
How does assembling an object affect memory for it? CogSci 2024
[c38]Arnav Verma, Kushin Mukherjee, Christopher Potts, Elisa Kreiss, Judith E. Fan:
Evaluating human and machine understanding of data visualizations. CogSci 2024
[c37]Haoliang Wang, Khaled Jedoui, Rahul Mysore Venkatesh, Felix Jedidja Binder, Josh Tenenbaum, Judith E. Fan, Daniel Yamins, Kevin A. Smith:
Probabilistic simulation supports generalizable intuitive physics. CogSci 2024
[c36]Rahul Mysore Venkatesh, Honglin Chen, Kevin T. Feigelis, Daniel M. Bear, Khaled Jedoui, Klemen Kotar, Felix J. Binder, Wanhee Lee, Sherry Liu, Kevin A. Smith, Judith E. Fan, Daniel L. K. Yamins:
Understanding Physical Dynamics with Counterfactual World Modeling. ECCV (24) 2024: 368-387
[i13]Yael Vinker, Tamar Rott Shaham, Kristine Zheng, Alex Zhao, Judith E. Fan, Antonio Torralba:
SketchAgent: Language-Driven Sequential Sketch Generation. CoRR abs/2411.17673 (2024)
- 2023
[j2]William P. McCarthy, David Kirsh, Judith E. Fan:
Consistency and Variation in Reasoning About Physical Assembly. Cogn. Sci. 47(12) (2023)
[c35]Felix Jedidja Binder, Logan Matthew Cross, Yoni Friedman, Robert D. Hawkins, Daniel L. K. Yamins, Judith E. Fan:
Advancing Cognitive Science and AI with Cognitive-AI Benchmarking. CogSci 2023
[c34]Felix Jedidja Binder, Marcelo Gomes Mattar, David Kirsh, Judith E. Fan:
Humans choose visual subgoals to reduce cognitive cost. CogSci 2023
[c33]Holly Huey, Lauren Oey, Hannah Lloyd, Judith E. Fan:
How do communicative goals guide which data visualizations people think are effective? CogSci 2023
[c32]Hannah Lloyd, Holly Huey, Erik Brockbank, Lace M. K. Padilla, Judith E. Fan:
What is graph comprehension and how do you measure it? CogSci 2023
[c31]Julio Martinez, Felix Jedidja Binder, Haoliang Wang, Nick Haber, Judith E. Fan, Daniel Yamins:
Measuring and Modeling Physical Intrinsic Motivation. CogSci 2023
[c30]Marcelo Gomes Mattar, Judith E. Fan, Wai Keen Vong, Lionel Wong:
How does the mind discover useful abstractions? CogSci 2023
[c29]Kushin Mukherjee, Xuanchen Lu, Holly Huey, Yael Vinker, Rio Aguina-Kang, Ariel Shamir, Judith E. Fan:
Evaluating machine comprehension of sketch meaning at different levels of abstraction. CogSci 2023
[c28]Kristian Tylén, Mathias Sablé-Meyer, Judith E. Fan, Michelle C. Langley:
Marks and Meanings: new perspectives on the evolution of human symbolic behavior. CogSci 2023
[c27]Xuanchen Lu, Xiaolong Wang, Judith E. Fan:
Learning Dense Correspondences between Photos and Sketches. ICML 2023: 22899-22916
[c26]Kushin Mukherjee, Holly Huey, Xuanchen Lu, Yael Vinker, Rio Aguina-Kang, Ariel Shamir, Judith E. Fan:
SEVA: Leveraging sketches to evaluate alignment between human and machine visual abstraction. NeurIPS 2023
[c25]Hsiao-Yu Tung, Mingyu Ding, Zhenfang Chen, Daniel Bear, Chuang Gan, Josh Tenenbaum, Dan Yamins, Judith E. Fan, Kevin A. Smith:
Physion++: Evaluating Physical Scene Understanding that Requires Online Inference of Different Physical Properties. NeurIPS 2023
[i12]Julio Martinez, Felix J. Binder, Haoliang Wang, Nick Haber, Judith E. Fan, Daniel L. K. Yamins:
Measuring and Modeling Physical Intrinsic Motivation. CoRR abs/2305.13452 (2023)
[i11]Hsiao-Yu Tung, Mingyu Ding, Zhenfang Chen, Daniel Bear, Chuang Gan, Joshua B. Tenenbaum, Daniel L. K. Yamins, Judith E. Fan, Kevin A. Smith:
Physion++: Evaluating Physical Scene Understanding that Requires Online Inference of Different Physical Properties. CoRR abs/2306.15668 (2023)
[i10]Xuanchen Lu, Xiaolong Wang, Judith E. Fan:
Learning Dense Correspondences between Photos and Sketches. CoRR abs/2307.12967 (2023)
[i9]Kushin Mukherjee, Holly Huey, Xuanchen Lu, Yael Vinker, Rio Aguina-Kang, Ariel Shamir, Judith E. Fan:
SEVA: Leveraging sketches to evaluate alignment between human and machine visual abstraction. CoRR abs/2312.03035 (2023)
[i8]Rahul Mysore Venkatesh, Honglin Chen, Kevin T. Feigelis, Daniel M. Bear, Khaled Jedoui, Klemen Kotar, Felix J. Binder, Wanhee Lee, Sherry Liu, Kevin A. Smith, Judith E. Fan, Daniel L. K. Yamins:
Counterfactual World Modeling for Physical Dynamics Understanding. CoRR abs/2312.06721 (2023)
- 2022
[c24]Erik Brockbank, Haoliang Wang, Justin Yang, Suvir Mirchandani, Erdem Biyik, Dorsa Sadigh, Judith E. Fan:
How do people incorporate advice from artificial agents when making physical judgments? CogSci 2022
[c23]Holly Huey, Bria Long, Justin Yang, Kaylee R. George, Judith E. Fan:
Developmental changes in the semantic part structure of drawn objects. CogSci 2022
[c22]Kushin Mukherjee, Holly Huey, Timothy T. Rogers, Judith E. Fan:
From Images to Symbols: Drawing as a Window into the Mind. CogSci 2022
[c21]Maneesha Nagabandi, Justin Yang, Holly Huey, Judith E. Fan:
Decomposing objects into parts from vision and language. CogSci 2022
[c20]Haoliang Wang, Kelsey R. Allen, Ed Vul, Judith E. Fan:
Generalizing physical prediction by composing forces and objects. CogSci 2022
[c19]Haoliang Wang, Jane Yang, Ronen Tamari, Judith E. Fan:
Communicating understanding of physical dynamics in natural language. CogSci 2022
[c18]Catherine Wong, William P. McCarthy, Gabriel Grand, Yoni Friedman, Josh Tenenbaum, Jacob Andreas, Robert D. Hawkins, Judith E. Fan:
Identifying concept libraries from language about object structure. CogSci 2022
[i7]Catherine Wong, William P. McCarthy, Gabriel Grand, Yoni Friedman, Joshua B. Tenenbaum, Jacob Andreas, Robert D. Hawkins, Judith E. Fan:
Identifying concept libraries from language about object structure. CoRR abs/2205.05666 (2022)
[i6]Erik Brockbank, Haoliang Wang, Justin Yang, Suvir Mirchandani, Erdem Biyik, Dorsa Sadigh, Judith E. Fan:
How do people incorporate advice from artificial agents when making physical judgments? CoRR abs/2205.11613 (2022)
- 2021
[c17]Felix Jedidja Binder, Marcelo Gomes Mattar, David Kirsh, Judith E. Fan:
Visual scoping operations for physical assembly. CogSci 2021
[c16]Cameron Holdaway, Daniel M. Bear, Samaher Radwan, Michael C. Frank, Daniel L. K. Yamins, Judith E. Fan:
Measuring and predicting variation in the interestingness of physical structures. CogSci 2021
[c15]Sebastian Holt, David Barner, Judith E. Fan:
Improvised Numerals Rely on 1-to-1 Correspondence. CogSci 2021
[c14]Holly Huey, Caren M. Walker, Judith E. Fan:
How do the semantic properties of visual explanations guide causal inference? CogSci 2021
[c13]George Kachergis, Samaher Radwan, Bria Long, Judith E. Fan, Michael Lingelbach, Daniel M. Bear, Daniel L. K. Yamins, Michael C. Frank:
Predicting children's and adults' preferences in physical interactions via physics simulation. CogSci 2021
[c12]William P. McCarthy, Robert D. Hawkins, Haoliang Wang, Cameron Holdaway, Judith E. Fan:
Learning to communicate about shared procedural abstractions. CogSci 2021
[c11]William P. McCarthy, Marcelo Gomes Mattar, David Kirsh, Judith E. Fan:
Connecting perceptual and procedural abstractions in physical construction. CogSci 2021
[c10]Haoliang Wang, Nadia Polikarpova, Judith E. Fan:
Learning part-based abstractions for visual object concepts. CogSci 2021
[c9]Haoliang Wang, Ed Vul, Nadia Polikarpova, Judith E. Fan:
Theory Acquisition as Constraint-Based Program Synthesis. CogSci 2021
[c8]Justin Yang, Judith E. Fan:
Visual communication of object concepts at different levels of abstraction. CogSci 2021
[c7]Daniel Bear, Elias Wang, Damian Mrowca, Felix J. Binder, Hsiao-Yu Tung, R. T. Pramod, Cameron Holdaway, Sirui Tao, Kevin A. Smith, Fan-Yun Sun, Fei-Fei Li, Nancy Kanwisher, Josh Tenenbaum, Dan Yamins, Judith E. Fan:
Physion: Evaluating Physical Prediction from Vision in Humans and Machines. NeurIPS Datasets and Benchmarks 2021
[i5]Justin Yang, Judith E. Fan:
Visual communication of object concepts at different levels of abstraction. CoRR abs/2106.02775 (2021)
[i4]Felix J. Binder, Marcelo M. Mattar, David Kirsh, Judith E. Fan:
Visual scoping operations for physical assembly. CoRR abs/2106.05654 (2021)
[i3]Daniel M. Bear, Elias Wang, Damian Mrowca, Felix J. Binder, Hsiau-Yu Fish Tung, R. T. Pramod, Cameron Holdaway, Sirui Tao, Kevin A. Smith, Fan-Yun Sun, Li Fei-Fei, Nancy Kanwisher, Joshua B. Tenenbaum, Daniel L. K. Yamins, Judith E. Fan:
Physion: Evaluating Physical Prediction from Vision in Humans and Machines. CoRR abs/2106.08261 (2021)
[i2]William P. McCarthy, Robert D. Hawkins, Haoliang Wang, Cameron Holdaway, Judith E. Fan:
Learning to communicate about shared procedural abstractions. CoRR abs/2107.00077 (2021)
[i1]Robert D. Hawkins, Megumi Sano, Noah D. Goodman, Judith E. Fan:
Visual resemblance and communicative context constrain the emergence of graphical conventions. CoRR abs/2109.13861 (2021)
- 2020
[c6]William P. McCarthy, David Kirsh, Judith E. Fan:
Learning to build physical structures better over time. CogSci 2020
[c5]Xiaotong (Tone) Xu, Judith E. Fan, Steven P. Dow:
Schema and Metadata Guide the Collective Generation of Relevant and Diverse Work. HCOMP 2020: 178-182
2010 – 2019
- 2019
[c4]Judith E. Fan, Monica Dinculescu, David Ha:
collabdraw: An Environment for Collaborative Sketching with an Artificial Agent. Creativity & Cognition 2019: 556-561
- 2018
[j1]Judith E. Fan, Daniel L. K. Yamins, Nicholas B. Turk-Browne:
Common Object Representations for Visual Production and Recognition. Cogn. Sci. 42(8): 2670-2698 (2018)
[c3]Bria Long, Judith E. Fan, Michael C. Frank:
Drawings as a window into developmental changes in object representations. CogSci 2018
- 2015
[c2]Judith E. Fan, Daniel Yamins, Nicholas B. Turk-Browne:
Common object representations for visual recognition and production. CogSci 2015
- 2014
[c1]Judith E. Fan, Daniel Yamins, James J. DiCarlo, Nicholas B. Turk-Browne:
Mapping core similarity among visual objects across image modalities. SIGGRAPH Posters 2014: 67:1

last updated on 2026-03-05 00:00 CET by the dblp team
all metadata released as open data under CC0 1.0 license