Chee Wee Leong
2020 – today
- 2024
- [j5] Bimasena Putra, Kurniawati Azizah, Candy Olivia Mawalim, Ikhlasul Akmal Hanif, Sakriani Sakti, Chee Wee Leong, Shogo Okada: MAG-BERT-ARL for Fair Automated Video Interview Assessment. IEEE Access 12: 145188-145205 (2024)
- [c39] Andrew Emerson, Arti Ramesh, Patrick Houghton, Vinay Basheerabad, Navaneeth Jawahar, Chee Wee Leong: Multimodal, Multi-Class Bias Mitigation for Predicting Speaker Confidence. EDM 2024
- 2023
- [j4] Su Shwe Yi Tun, Shogo Okada, Hung-Hsuan Huang, Chee Wee Leong: Multimodal Transfer Learning for Oral Presentation Assessment. IEEE Access 11: 84013-84026 (2023)
- [c38] Hung Le, Sixia Li, Candy Olivia Mawalim, Hung-Hsuan Huang, Chee Wee Leong, Shogo Okada: Investigating the Effect of Linguistic Features on Personality and Job Performance Predictions. HCI (15) 2023: 370-383
- 2022
- [c37] Andrew Emerson, Patrick Houghton, Ke Chen, Vinay Basheerabad, Rutuja Ubale, Chee Wee Leong: Predicting User Confidence in Video Recordings with Spatio-Temporal Multimodal Analytics. ICMI Companion 2022: 98-104
- [c36] Ari Sagherian, Suhasini Lingaiah, Mohamed Abouelenien, Chee Wee Leong, Lei Liu, Mengxuan Zhao, Blake Lafuente, Shukang Chen, Yi Qi: Learning Progression-based Automated Scoring of Visual Models. PETRA 2022: 213-222
- 2021
- [c35] Su Shwe Yi Tun, Shogo Okada, Hung-Hsuan Huang, Chee Wee Leong: Analysis of Modality-Based Presentation Skills Using Sequential Models. HCI (13) 2021: 358-369
- [c34] Chee Wee Leong, Xianyang Chen, Vinay Basheerabad, Chong Min Lee, Patrick Houghton: NLP-guided Video Thin-slicing for Automated Scoring of Non-Cognitive, Behavioral Performance Tasks. ICMI 2021: 846-847
- [c33] Roberto Gretter, Marco Matassoni, Daniele Falavigna, A. Misra, Chee Wee Leong, Kate M. Knill, Linlin Wang: ETLT 2021: Shared Task on Automatic Speech Recognition for Non-Native Children's Speech. Interspeech 2021: 3845-3849
- 2020
- [c32] Chee Wee Leong, Beata Beigman Klebanov, Chris Hamill, Egon Stemle, Rutuja Ubale, Xianyang Chen: A Report on the 2020 VUA and TOEFL Metaphor Detection Shared Task. Fig-Lang@ACL 2020: 18-29
- [c31] Xianyang Chen, Chee Wee Leong, Michael Flor, Beata Beigman Klebanov: Go Figure! Multi-task transformer-based architecture for metaphor detection using idioms: ETS team in 2020 metaphor shared task. Fig-Lang@ACL 2020: 235-243
- [c30] Haley Lepp, Chee Wee Leong, Katrina Roohr, Michelle P. Martin-Raugh, Vikram Ramanarayanan: Effect of Modality on Human and Machine Scoring of Presentation Videos. ICMI 2020: 630-634
- [c29] Roberto Gretter, Marco Matassoni, Daniele Falavigna, Keelan Evanini, Chee Wee Leong: Overview of the Interspeech TLT2020 Shared Task on ASR for Non-Native Children's Speech. INTERSPEECH 2020: 245-249
- [e2] Beata Beigman Klebanov, Ekaterina Shutova, Patricia Lichtenstein, Smaranda Muresan, Chee Wee Leong, Anna Feldman, Debanjan Ghosh: Proceedings of the Second Workshop on Figurative Language Processing, Fig-Lang@ACL 2020, Online, July 9, 2020. Association for Computational Linguistics 2020, ISBN 978-1-952148-12-5
2010 – 2019
- 2019
- [c28] Rutuja Ubale, Vikram Ramanarayanan, Yao Qian, Keelan Evanini, Chee Wee Leong, Chong Min Lee: Native Language Identification from Raw Waveforms Using Deep Convolutional Neural Networks with Attentive Pooling. ASRU 2019: 403-410
- [c27] Chee Wee Leong, Katrina Roohr, Vikram Ramanarayanan, Michelle P. Martin-Raugh, Harrison Kell, Rutuja Ubale, Yao Qian, Zydrune Mladineo, Laura McCulla: Are Humans Biased in Assessment of Video Interviews? ICMI (Adjunct) 2019: 9:1-9:5
- [i1] Chee Wee Leong, Katrina Roohr, Vikram Ramanarayanan, Michelle P. Martin-Raugh, Harrison Kell, Rutuja Ubale, Yao Qian, Zydrune Mladineo, Laura McCulla: To Trust, or Not to Trust? A Study of Human Bias in Automated Video Interview Assessments. CoRR abs/1911.13248 (2019)
- 2018
- [c26] Chee Wee Leong, Beata Beigman Klebanov, Ekaterina Shutova: A Report on the 2018 VUA Metaphor Detection Shared Task. Fig-Lang@NAACL-HLT 2018: 56-66
- [c25] Chee Wee Leong, Lei Liu, Rutuja Ubale, Lei Chen: Toward large-scale automated scoring of scientific visual models. L@S 2018: 23:1-23:4
- [c24] Beata Beigman Klebanov, Chee Wee Leong, Michael Flor: A Corpus of Non-Native Written English Annotated for Metaphor. NAACL-HLT (2) 2018: 86-91
- [e1] Beata Beigman Klebanov, Ekaterina Shutova, Patricia Lichtenstein, Smaranda Muresan, Chee Wee Leong: Proceedings of the Workshop on Figurative Language Processing, Fig-Lang@NAACL-HLT 2018, New Orleans, Louisiana, 6 June 2018. Association for Computational Linguistics 2018, ISBN 978-1-948087-15-5
- 2017
- [j3] Klaus Zechner, Su-Youn Yoon, Suma Bhat, Chee Wee Leong: Comparative evaluation of automated scoring of syntactic competence of non-native speakers. Comput. Hum. Behav. 76: 672-682 (2017)
- [c23] Lei Chen, Franklin Zaromb, Zhitong Yang, Chee Wee Leong, Michelle P. Martin-Raugh: Can a machine pass a situational judgment test measuring personality perception? ACII Workshops 2017: 7-11
- [c22] Lei Chen, Ru Zhao, Chee Wee Leong, Blair Lehman, Gary Feng, Mohammed (Ehsan) Hoque: Automated video interview judgment on a large-sized corpus collected online. ACII 2017: 504-509
- [c21] Vikram Ramanarayanan, Chee Wee Leong, David Suendermann-Oeft, Keelan Evanini: Crowdsourcing ratings of caller engagement in thin-slice videos of human-machine dialog: benefits and pitfalls. ICMI 2017: 281-287
- [c20] Vikram Ramanarayanan, Chee Wee Leong, David Suendermann-Oeft: Rushing to Judgement: How do Laypeople Rate Caller Engagement in Thin-Slice Videos of Human-Machine Dialog? INTERSPEECH 2017: 2526-2530
- 2016
- [j2] Lei Chen, Gary Feng, Chee Wee Leong, Jilliam Joe, Christopher Kitchen, Chong Min Lee: Designing An Automated Assessment of Public Speaking Skills Using Multimodal Cues. J. Learn. Anal. 3(2): 261-281 (2016)
- [c19] Beata Beigman Klebanov, Chee Wee Leong, E. Dario Gutiérrez, Ekaterina Shutova, Michael Flor: Semantic classifications for detection of verb metaphors. ACL (2) 2016
- [c18] Lei Chen, Gary Feng, Chee Wee Leong, Blair Lehman, Michelle P. Martin-Raugh, Harrison Kell, Chong Min Lee, Su-Youn Yoon: Automated scoring of interview videos using Doc2Vec multimodal feature extraction paradigm. ICMI 2016: 161-168
- [c17] Lei Chen, Gary Feng, Michelle P. Martin-Raugh, Chee Wee Leong, Christopher Kitchen, Su-Youn Yoon, Blair Lehman, Harrison Kell, Chong Min Lee: Automatic Scoring of Monologue Video Interviews Using Multimodal Cues. INTERSPEECH 2016: 32-36
- 2015
- [c16] Lei Chen, Chee Wee Leong, Gary Feng, Chong Min Lee, Swapna Somasundaran: Utilizing multimodal cues to automatically evaluate public speaking performance. ACII 2015: 394-400
- [c15] Vikram Ramanarayanan, Chee Wee Leong, Lei Chen, Gary Feng, David Suendermann-Oeft: Evaluating Speech, Face, Emotion and Body Movement Time-series Features for Automated Multimodal Presentation Scoring. ICMI 2015: 23-30
- [c14] Chee Wee Leong, Lei Chen, Gary Feng, Chong Min Lee, Matthew Mulholland: Utilizing Depth Sensors for Analyzing Multimodal Presentations: Hardware, Software and Toolkits. ICMI 2015: 547-556
- [c13] Vikram Ramanarayanan, Lei Chen, Chee Wee Leong, Gary Feng, David Suendermann-Oeft: An analysis of time-aggregated and time-series features for scoring different aspects of multimodal presentation data. INTERSPEECH 2015: 1373-1377
- 2014
- [c12] Klaus Zechner, Keelan Evanini, Su-Youn Yoon, Lawrence Davis, Xinhao Wang, Lei Chen, Chong Min Lee, Chee Wee Leong: Automated scoring of speaking items in an assessment for teachers of English as a Foreign Language. BEA@ACL 2014: 134-142
- [c11] Lei Chen, Su-Youn Yoon, Chee Wee Leong, Michelle P. Martin-Raugh, Min Ma: An Initial Analysis of Structured Video Interviews by Using Multimodal Emotion Detection. ERM4HCI@ICMI 2014: 1-6
- [c10] Lei Chen, Chee Wee Leong, Gary Feng, Chong Min Lee: Using Multimodal Cues to Analyze MLA'14 Oral Presentation Quality Corpus: Presentation Delivery and Slides Quality. MLA@ICMI 2014: 45-52
- [c9] Lei Chen, Gary Feng, Jilliam Joe, Chee Wee Leong, Christopher Kitchen, Chong Min Lee: Towards Automated Assessment of Public Speaking Skills Using Multimodal Cues. ICMI 2014: 200-203
- 2012
- [c8] Chee Wee Leong, Silviu Cucerzan: Supporting factual statements with evidence from the web. CIKM 2012: 1153-1162
- 2011
- [c7] Chee Wee Leong, Samer Hassan, Miguel E. Ruiz, Rada Mihalcea: Improving Query Expansion for Image Retrieval via Saliency and Picturability. CLEF 2011: 137-142
- [c6] Miguel E. Ruiz, Chee Wee Leong, Samer Hassan: UNT at ImageCLEF 2011: Relevance Models and Salient Semantic Analysis for Image Retrieval. CLEF (Notebook Papers/Labs/Workshop) 2011
- [c5] Chee Wee Leong, Rada Mihalcea: Going Beyond Text: A Hybrid Image-Text Approach for Measuring Word Relatedness. IJCNLP 2011: 1403-1407
- [c4] Chee Wee Leong, Rada Mihalcea: Measuring the semantic relatedness between words and images. IWCS 2011
- 2010
- [c3] Chee Wee Leong, Rada Mihalcea, Samer Hassan: Text Mining for Automatic Image Tagging. COLING (Posters) 2010: 647-655
2000 – 2009
- 2009
- [c2] Chee Wee Leong, Rada Mihalcea: Explorations in Automatic Image Annotation using Textual Features. Linguistic Annotation Workshop 2009: 56-59
- 2008
- [j1] Rada Mihalcea, Chee Wee Leong: Toward communicating simple sentences using pictorial representations. Mach. Transl. 22(3): 153-173 (2008)
- [c1] Chee Wee Leong, Samer Hassan: Exploiting Wikipedia for Directional Inferential Text Similarity. ITNG 2008: 686-691
last updated on 2024-10-31 20:15 CET by the dblp team
all metadata released as open data under CC0 1.0 license