


Shih-Fu Chang
Person information

- affiliation: Columbia University, New York City, USA
2020 – today
- 2022
- [j113]Mang Ye, Jianbing Shen, Xu Zhang, Pong C. Yuen, Shih-Fu Chang:
Augmentation Invariant and Instance Spreading Feature for Softmax Embedding. IEEE Trans. Pattern Anal. Mach. Intell. 44(2): 924-939 (2022) - [i97]Manling Li, Ruochen Xu, Shuohang Wang, Luowei Zhou, Xudong Lin, Chenguang Zhu, Michael Zeng, Heng Ji, Shih-Fu Chang:
CLIP-Event: Connecting Text and Images with Event Structures. CoRR abs/2201.05078 (2022) - [i96]Zhecan Wang, Noel Codella, Yen-Chun Chen, Luowei Zhou, Jianwei Yang, Xiyang Dai, Bin Xiao, Haoxuan You, Shih-Fu Chang, Lu Yuan:
CLIP-TD: CLIP Targeted Distillation for Vision-Language Tasks. CoRR abs/2201.05729 (2022) - [i95]Xudong Lin, Fabio Petroni, Gedas Bertasius, Marcus Rohrbach, Shih-Fu Chang, Lorenzo Torresani:
Learning To Recognize Procedural Activities with Distant Supervision. CoRR abs/2201.10990 (2022) - [i94]Guangxing Han, Jiawei Ma, Shiyuan Huang, Long Chen, Shih-Fu Chang:
Few-Shot Object Detection with Fully Cross-Transformer. CoRR abs/2203.15021 (2022) - [i93]Christopher Thomas, Yipeng Zhang, Shih-Fu Chang:
Fine-Grained Visual Entailment. CoRR abs/2203.15704 (2022) - [i92]Guangxing Han, Jiawei Ma, Shiyuan Huang, Long Chen, Rama Chellappa, Shih-Fu Chang:
Multimodal Few-Shot Object Detection with Meta-Learning Based Cross-Modal Prompting. CoRR abs/2204.07841 (2022) - [i91]Zhecan Wang, Noel Codella, Yen-Chun Chen, Luowei Zhou, Xiyang Dai, Bin Xiao, Jianwei Yang, Haoxuan You, Kai-Wei Chang, Shih-Fu Chang, Lu Yuan:
Multimodal Adaptive Distillation for Leveraging Unimodal Encoders for Vision-Language Tasks. CoRR abs/2204.10496 (2022) - 2021
- [j112]Yulei Niu, Hanwang Zhang, Zhiwu Lu, Shih-Fu Chang:
Variational Context: Exploiting Visual and Textual Context for Grounding Referring Expressions. IEEE Trans. Pattern Anal. Mach. Intell. 43(1): 347-359 (2021) - [c350]Long Chen, Wenbo Ma, Jun Xiao, Hanwang Zhang, Shih-Fu Chang:
Ref-NMS: Breaking Proposal Bottlenecks in Two-Stage Referring Expression Grounding. AAAI 2021: 1036-1044 - [c349]Yi Fung, Christopher Thomas, Revanth Gangi Reddy, Sandeep Polisetty, Heng Ji, Shih-Fu Chang, Kathleen R. McKeown, Mohit Bansal, Avi Sil:
InfoSurgeon: Cross-Media Fine-grained Information Consistency Checking for Fake News Detection. ACL/IJCNLP (1) 2021: 1683-1698 - [c348]Sijie Song, Xudong Lin, Jiaying Liu, Zongming Guo, Shih-Fu Chang:
Co-Grounding Networks With Semantic Attention for Referring Expression Comprehension in Videos. CVPR 2021: 1346-1355 - [c347]Xudong Lin, Gedas Bertasius, Jue Wang, Shih-Fu Chang, Devi Parikh, Lorenzo Torresani:
Vx2Text: End-to-End Learning of Video-Based Text Generation From Multimodal Inputs. CVPR 2021: 7005-7015 - [c346]Alireza Zareian, Kevin Dela Rosa, Derek Hao Hu, Shih-Fu Chang:
Open-Vocabulary Object Detection Using Captions. CVPR 2021: 14393-14402 - [c345]Brian Chen, Xudong Lin, Christopher Thomas, Manling Li, Shoya Yoshida, Lovish Chum, Heng Ji, Shih-Fu Chang:
Joint Multimedia Event Extraction from Video and Article. EMNLP (Findings) 2021: 74-88 - [c344]Guangxing Han, Yicheng He, Shiyuan Huang, Jiawei Ma, Shih-Fu Chang:
Query Adaptive Few-Shot Object Detection with Heterogeneous Graph Convolutional Networks. ICCV 2021: 3243-3252 - [c343]Brian Chen, Andrew Rouditchenko, Kevin Duarte, Hilde Kuehne, Samuel Thomas, Angie W. Boggust, Rameswar Panda, Brian Kingsbury, Rogério Feris, David Harwath, James R. Glass, Michael Picheny, Shih-Fu Chang:
Multimodal Clustering Networks for Self-supervised Learning from Unlabeled Videos. ICCV 2021: 7992-8001 - [c342]Jiawei Ma, Hanchen Xie, Guangxing Han, Shih-Fu Chang, Aram Galstyan, Wael Abd-Almageed:
Partner-Assisted Learning for Few-Shot Image Classification. ICCV 2021: 10553-10562 - [c341]Zhizheng Zhang, Cuiling Lan, Wenjun Zeng, Zhibo Chen, Shih-Fu Chang:
Uncertainty-Aware Few-Shot Image Classification. IJCAI 2021: 3420-3426 - [c340]Qingyun Wang, Manling Li, Xuan Wang, Nikolaus Nova Parulian, Guangxing Han, Jiawei Ma, Jingxuan Tu, Ying Lin, Haoran Zhang, Weili Liu, Aabhas Chauhan, Yingjun Guan, Bangzheng Li, Ruisong Li, Xiangchen Song, Yi Fung, Heng Ji, Jiawei Han, Shih-Fu Chang, James Pustejovsky, Jasmine Rah, David Liem, Ahmed Elsayed, Martha Palmer, Clare R. Voss, Cynthia Schneider, Boyan A. Onyshkevych:
COVID-19 Literature Knowledge Graph Construction and Drug Repurposing Report Generation. NAACL-HLT (Demonstrations) 2021: 66-77 - [c339]Haoyang Wen, Ying Lin, Tuan Lai, Xiaoman Pan, Sha Li, Xudong Lin, Ben Zhou, Manling Li, Haoyu Wang, Hongming Zhang, Xiaodong Yu, Alexander Dong, Zhenhailong Wang, Yi Fung, Piyush Mishra, Qing Lyu, Dídac Surís, Brian Chen, Susan Windisch Brown, Martha Palmer, Chris Callison-Burch, Carl Vondrick, Jiawei Han, Dan Roth, Shih-Fu Chang, Heng Ji:
RESIN: A Dockerized Schema-Guided Cross-document Cross-lingual Cross-media Information Extraction and Event Tracking System. NAACL-HLT (Demonstrations) 2021: 133-143 - [c338]Liunian Harold Li, Haoxuan You, Zhecan Wang, Alireza Zareian, Shih-Fu Chang, Kai-Wei Chang:
Unsupervised Vision-and-Language Pre-training Without Parallel Images and Captions. NAACL-HLT 2021: 5339-5350 - [c337]Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, Boqing Gong:
VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text. NeurIPS 2021: 24206-24221 - [i90]Xudong Lin, Gedas Bertasius, Jue Wang, Shih-Fu Chang, Devi Parikh, Lorenzo Torresani:
VX2TEXT: End-to-End Learning of Video-Based Text Generation From Multimodal Inputs. CoRR abs/2101.12059 (2021) - [i89]Sijie Song, Xudong Lin, Jiaying Liu, Zongming Guo, Shih-Fu Chang:
Co-Grounding Networks with Semantic Attention for Referring Expression Comprehension in Videos. CoRR abs/2103.12346 (2021) - [i88]Guangxing Han, Shiyuan Huang, Jiawei Ma, Yicheng He, Shih-Fu Chang:
Meta Faster R-CNN: Towards Accurate Few-Shot Object Detection with Attentive Feature Alignment. CoRR abs/2104.07719 (2021) - [i87]Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, Boqing Gong:
VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text. CoRR abs/2104.11178 (2021) - [i86]Brian Chen, Andrew Rouditchenko, Kevin Duarte, Hilde Kuehne, Samuel Thomas, Angie W. Boggust, Rameswar Panda, Brian Kingsbury, Rogério Schmidt Feris, David Harwath, James R. Glass, Michael Picheny, Shih-Fu Chang:
Multimodal Clustering Networks for Self-supervised Learning from Unlabeled Videos. CoRR abs/2104.12671 (2021) - [i85]Jiawei Ma, Hanchen Xie, Guangxing Han, Shih-Fu Chang, Aram Galstyan, Wael Abd-Almageed:
Partner-Assisted Learning for Few-Shot Image Classification. CoRR abs/2109.07607 (2021) - [i84]Brian Chen, Xudong Lin, Christopher Thomas, Manling Li, Shoya Yoshida, Lovish Chum, Heng Ji, Shih-Fu Chang:
Joint Multimedia Event Extraction from Video and Article. CoRR abs/2109.12776 (2021) - [i83]Brian Chen, Ramprasaath R. Selvaraju, Shih-Fu Chang, Juan Carlos Niebles, Nikhil Naik:
PreViTS: Contrastive Pretraining with Video Tracking Supervision. CoRR abs/2112.00804 (2021) - [i82]Zhecan Wang, Haoxuan You, Liunian Harold Li, Alireza Zareian, Suji Park, Yiqing Liang, Kai-Wei Chang, Shih-Fu Chang:
SGEITL: Scene Graph Enhanced Image-Text Learning for Visual Commonsense Reasoning. CoRR abs/2112.08587 (2021) - [i81]Guangxing Han, Yicheng He, Shiyuan Huang, Jiawei Ma, Shih-Fu Chang:
Query Adaptive Few-Shot Object Detection with Heterogeneous Graph Convolutional Networks. CoRR abs/2112.09791 (2021) - [i80]Revanth Gangi Reddy, Xilin Rui, Manling Li, Xudong Lin, Haoyang Wen, Jaemin Cho, Lifu Huang, Mohit Bansal, Avirup Sil, Shih-Fu Chang, Alexander G. Schwing, Heng Ji:
MuMuQA: Multimedia Multi-Hop News Question Answering via Cross-Media Knowledge Extraction and Grounding. CoRR abs/2112.10728 (2021) - 2020
- [j111]Xu Zhang, Zhaohui H. Sun, Svebor Karaman, Shih-Fu Chang:
Discovering Image Manipulation History by Pairwise Relation and Forensics Tools. IEEE J. Sel. Top. Signal Process. 14(5): 1012-1023 (2020) - [c336]Brian Chen, Bo Wu, Alireza Zareian, Hanwang Zhang, Shih-Fu Chang:
General Partial Label Learning via Dual Bipartite Graph Autoencoder. AAAI 2020: 10502-10509 - [c335]Manling Li, Alireza Zareian, Ying Lin, Xiaoman Pan, Spencer Whitehead, Brian Chen, Bo Wu, Heng Ji, Shih-Fu Chang, Clare R. Voss, Daniel Napierski, Marjorie Freedman:
GAIA: A Fine-grained Multimedia Knowledge Extraction System. ACL (demo) 2020: 77-86 - [c334]Manling Li, Alireza Zareian, Qi Zeng, Spencer Whitehead, Di Lu, Heng Ji, Shih-Fu Chang:
Cross-media Structured Common Space for Multimedia Event Extraction. ACL 2020: 2557-2568 - [c333]Alireza Zareian, Svebor Karaman, Shih-Fu Chang:
Weakly Supervised Visual Semantic Parsing. CVPR 2020: 3733-3742 - [c332]Dídac Surís, Dave Epstein, Heng Ji, Shih-Fu Chang, Carl Vondrick:
Learning to Learn Words from Visual Scenes. ECCV (29) 2020: 434-452 - [c331]Alireza Zareian, Svebor Karaman, Shih-Fu Chang:
Bridging Knowledge Graphs to Generate Scene Graphs. ECCV (23) 2020: 606-623 - [c330]Alireza Zareian, Zhecan Wang, Haoxuan You, Shih-Fu Chang:
Learning Visual Commonsense for Robust Scene Graph Generation. ECCV (23) 2020: 642-657 - [c329]Xudong Lin, Lin Ma, Wei Liu
, Shih-Fu Chang:
Context-Gated Convolution. ECCV (18) 2020: 701-718 - [c328]Di Lu, Ananya Subburathinam, Heng Ji, Jonathan May, Shih-Fu Chang, Avirup Sil, Clare R. Voss:
Cross-lingual Structure Transfer for Zero-resource Event Extraction. LREC 2020: 1976-1981 - [c327]Xavier Alameda-Pineda, Miriam Redi, Jahna Otterbacher, Nicu Sebe, Shih-Fu Chang:
FATE/MM 20: 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in MultiMedia. ACM Multimedia 2020: 4761-4762 - [i79]Brian Chen, Bo Wu, Alireza Zareian, Hanwang Zhang, Shih-Fu Chang:
General Partial Label Learning via Dual Bipartite Graph Autoencoder. CoRR abs/2001.01290 (2020) - [i78]Alireza Zareian, Svebor Karaman, Shih-Fu Chang:
Bridging Knowledge Graphs to Generate Scene Graphs. CoRR abs/2001.02314 (2020) - [i77]Alireza Zareian, Svebor Karaman, Shih-Fu Chang:
Weakly Supervised Visual Semantic Parsing. CoRR abs/2001.02359 (2020) - [i76]Tongtao Zhang, Heng Ji, Shih-Fu Chang, Marjorie Freedman:
Training with Streaming Annotation. CoRR abs/2002.04165 (2020) - [i75]Yang Feng, Futang Peng, Xu Zhang, Wei Zhu, Shanfeng Zhang, Howard Zhou, Zhen Li, Tom Duerig, Shih-Fu Chang, Jiebo Luo:
Unifying Specialist Image Embedding into Universal Image Embedding. CoRR abs/2003.03701 (2020) - [i74]Manling Li, Alireza Zareian, Qi Zeng, Spencer Whitehead, Di Lu, Heng Ji, Shih-Fu Chang:
Cross-media Structured Common Space for Multimedia Event Extraction. CoRR abs/2005.02472 (2020) - [i73]Bo Xu, Xu Zhang, Zhixin Li, Matthew J. Leotta, Shih-Fu Chang, Jie Shan:
Deep Learning Guided Building Reconstruction from Satellite Imagery-derived Point Clouds. CoRR abs/2005.09223 (2020) - [i72]Zhizheng Zhang, Cuiling Lan, Wenjun Zeng, Zhibo Chen, Shih-Fu Chang:
Rethinking Classification Loss Designs for Person Re-identification with a Unified View. CoRR abs/2006.04991 (2020) - [i71]Alireza Zareian, Haoxuan You, Zhecan Wang, Shih-Fu Chang:
Learning Visual Commonsense for Robust Scene Graph Generation. CoRR abs/2006.09623 (2020) - [i70]Qingyun Wang, Manling Li, Xuan Wang, Nikolaus Nova Parulian, Guangxing Han, Jiawei Ma, Jingxuan Tu, Ying Lin, Haoran Zhang, Weili Liu, Aabhas Chauhan, Yingjun Guan, Bangzheng Li, Ruisong Li, Xiangchen Song, Heng Ji, Jiawei Han, Shih-Fu Chang, James Pustejovsky, Jasmine Rah, David Liem, Ahmed Elsayed, Martha Palmer, Clare R. Voss, Cynthia Schneider, Boyan A. Onyshkevych:
COVID-19 Literature Knowledge Graph Construction and Drug Repurposing Report Generation. CoRR abs/2007.00576 (2020) - [i69]Bo Wu, Haoyu Qin, Alireza Zareian, Carl Vondrick, Shih-Fu Chang:
Analogical Reasoning for Visually Grounded Language Acquisition. CoRR abs/2007.11668 (2020) - [i68]Long Chen, Wenbo Ma, Jun Xiao, Hanwang Zhang, Wei Liu, Shih-Fu Chang:
Ref-NMS: Breaking Proposal Bottlenecks in Two-Stage Referring Expression Grounding. CoRR abs/2009.01449 (2020) - [i67]Zhizheng Zhang, Cuiling Lan, Wenjun Zeng, Zhibo Chen, Shih-Fu Chang:
Uncertainty-Aware Few-Shot Image Classification. CoRR abs/2010.04525 (2020) - [i66]Liunian Harold Li, Haoxuan You, Zhecan Wang, Alireza Zareian, Shih-Fu Chang, Kai-Wei Chang:
Weakly-supervised VisualBERT: Pre-training without Parallel Images and Captions. CoRR abs/2010.12831 (2020) - [i65]Hassan Akbari, Hamid Palangi, Jianwei Yang, Sudha Rao, Asli Celikyilmaz, Roland Fernandez, Paul Smolensky, Jianfeng Gao, Shih-Fu Chang:
Neuro-Symbolic Representations for Video Captioning: A Case for Leveraging Inductive Biases for Vision and Language. CoRR abs/2011.09530 (2020) - [i64]Alireza Zareian, Kevin Dela Rosa, Derek Hao Hu, Shih-Fu Chang:
Open-Vocabulary Object Detection Using Captions. CoRR abs/2011.10678 (2020) - [i63]Shiyuan Huang, Jiawei Ma, Guangxing Han, Shih-Fu Chang:
Task-Adaptive Negative Class Envision for Few-Shot Open-Set Recognition. CoRR abs/2012.13073 (2020)
2010 – 2019
- 2019
- [j110]Hongzhi Li, Joseph G. Ellis, Lei Zhang, Shih-Fu Chang:
Automatic visual pattern mining from categorical image dataset. Int. J. Multim. Inf. Retr. 8(1): 35-45 (2019) - [j109]Yulei Niu, Zhiwu Lu, Ji-Rong Wen, Tao Xiang, Shih-Fu Chang:
Multi-Modal Multi-Scale Deep Learning for Large-Scale Image Annotation. IEEE Trans. Image Process. 28(4): 1720-1731 (2019) - [j108]Xavier Alameda-Pineda, Miriam Redi, Mohammad Soleymani, Nicu Sebe, Shih-Fu Chang, Samuel D. Gosling:
Special Section on Multimodal Understanding of Social, Affective, and Subjective Attributes. ACM Trans. Multim. Comput. Commun. Appl. 15(1s): 11:1-11:3 (2019) - [c326]Zheng Shou, Xudong Lin, Yannis Kalantidis, Laura Sevilla-Lara, Marcus Rohrbach, Shih-Fu Chang, Zhicheng Yan:
DMC-Net: Generating Discriminative Motion Cues for Fast Compressed Video Action Recognition. CVPR 2019: 1268-1277 - [c325]Matthew J. Leotta, Chengjiang Long, Bastien Jacquet, Matthieu Zins, Dan Lipsa, Jie Shan, Bo Xu, Zhixin Li, Xu Zhang, Shih-Fu Chang, Matthew Purri, Jia Xue, Kristin J. Dana:
Urban Semantic 3D Reconstruction From Multiview Satellite Imagery. CVPR Workshops 2019: 1451-1460 - [c324]Yuan Liu, Lin Ma, Yifeng Zhang, Wei Liu, Shih-Fu Chang:
Multi-Granularity Generator for Temporal Action Proposal. CVPR 2019: 3604-3613 - [c323]Mang Ye, Xu Zhang, Pong C. Yuen, Shih-Fu Chang:
Unsupervised Embedding Learning via Invariant and Spreading Instance Feature. CVPR 2019: 6210-6219 - [c322]Hassan Akbari, Svebor Karaman, Surabhi Bhargava, Brian Chen, Carl Vondrick, Shih-Fu Chang:
Multi-Level Multimodal Common Semantic Space for Image-Phrase Grounding. CVPR 2019: 12476-12486 - [c321]Ananya Subburathinam, Di Lu, Heng Ji, Jonathan May, Shih-Fu Chang, Avirup Sil, Clare R. Voss:
Cross-lingual Structure Transfer for Relation and Event Extraction. EMNLP/IJCNLP (1) 2019: 313-325 - [c320]Long Chen, Hanwang Zhang, Jun Xiao, Xiangnan He, Shiliang Pu, Shih-Fu Chang:
Counterfactual Critic Multi-Agent Training for Scene Graph Generation. ICCV 2019: 4612-4622 - [c319]Philipp Blandfort, Desmond Upton Patton, William R. Frey, Svebor Karaman, Surabhi Bhargava, Fei-Tzin Lee, Siddharth Varia, Chris Kedzie, Michael B. Gaskell, Rossano Schifanella, Kathleen R. McKeown, Shih-Fu Chang:
Multimodal Social Media Analysis for Gang Violence Prevention. ICWSM 2019: 114-124 - [c318]Xu Zhang, Zhuowei Li, Pei-Jie Wang, Katelyn Y. Liao, Shen-Ju Chou, Shih-Fu Chang, Jung-Chi Liao:
One-Shot Learning for Function-Specific Region Segmentation in Mouse Brain. ISBI 2019: 736-740 - [c317]Svebor Karaman, Xudong Lin, Xuefeng Hu, Shih-Fu Chang:
Unsupervised Rank-Preserving Hashing for Large-Scale Image Retrieval. ICMR 2019: 192-196 - [c316]Shih-Fu Chang, Louis-Philippe Morency, Alexander G. Hauptmann, Alberto Del Bimbo, Cathal Gurrin, Hayley Hung, Heng Ji, Alan F. Smeaton:
PANEL: Challenges for Multimedia/Multimodal Research in the Next Decade. ACM Multimedia 2019: 2234-2235 - [c315]Xavier Alameda-Pineda, Miriam Redi, L. Elisa Celis, Nicu Sebe, Shih-Fu Chang:
FAT/MM'19: 1st International Workshop on Fairness, Accountability, and Transparency in MultiMedia. ACM Multimedia 2019: 2728-2729 - [c314]Xu Zhang, Svebor Karaman, Shih-Fu Chang:
Detecting and Simulating Artifacts in GAN Fake Images. WIFS 2019: 1-6 - [i62]Manling Li, Ying Lin, Ananya Subburathinam, Spencer Whitehead, Xiaoman Pan, Di Lu, Qingyun Wang, Tongtao Zhang, Lifu Huang, Heng Ji, Alireza Zareian, Hassan Akbari, Brian Chen, Bo Wu, Emily Allaway, Shih-Fu Chang, Kathleen R. McKeown, Yixiang Yao, Jennifer Chen, Eric Berquist, Kexuan Sun, Xujun Peng, Ryan Gabbard, Marjorie Freedman, Pedro A. Szekely, T. K. Satish Kumar, Arka Sadhu, Ram Nevatia, Miguel E. Rodríguez, Yifan Wang, Yang Bai, Ali Sadeghian, Daisy Zhe Wang:
GAIA at SM-KBP 2019 - A Multi-media Multi-lingual Knowledge Extraction and Hypothesis Generation System. TAC 2019 - [i61]Zheng Shou, Zhicheng Yan, Yannis Kalantidis, Laura Sevilla-Lara, Marcus Rohrbach, Xudong Lin, Shih-Fu Chang:
DMC-Net: Generating Discriminative Motion Cues for Fast Compressed Video Action Recognition. CoRR abs/1901.03460 (2019) - [i60]Svebor Karaman, Xudong Lin, Xuefeng Hu, Shih-Fu Chang:
Unsupervised Rank-Preserving Hashing for Large-Scale Image Retrieval. CoRR abs/1903.01545 (2019) - [i59]Mang Ye, Xu Zhang, Pong C. Yuen, Shih-Fu Chang:
Unsupervised Embedding Learning via Invariant and Spreading Instance Feature. CoRR abs/1904.03436 (2019) - [i58]Jiawei Ma, Zheng Shou, Alireza Zareian, Hassan Mansour, Anthony Vetro, Shih-Fu Chang:
CDSA: Cross-Dimensional Self-Attention for Multivariate, Geo-tagged Time Series Imputation. CoRR abs/1905.09904 (2019) - [i57]Yulei Niu, Hanwang Zhang, Zhiwu Lu, Shih-Fu Chang:
Variational Context: Exploiting Visual and Textual Context for Grounding Referring Expressions. CoRR abs/1907.03609 (2019) - [i56]Xu Zhang, Svebor Karaman, Shih-Fu Chang:
Detecting and Simulating Artifacts in GAN Fake Images. CoRR abs/1907.06515 (2019) - [i55]Shih-Fu Chang, Alexander G. Hauptmann, Louis-Philippe Morency, Sameer K. Antani, Dick C. A. Bulterman, Carlos Busso, Joyce Yue Chai, Julia Hirschberg, Ramesh C. Jain, Ketan Mayer-Patel, Reuven Meth, Raymond Mooney, Klara Nahrstedt, Shrikanth S. Narayanan, Prem Natarajan, Sharon L. Oviatt, Balakrishnan Prabhakaran, Arnold W. M. Smeulders, Hari Sundaram, Zhengyou Zhang, Michelle X. Zhou:
Report of 2017 NSF Workshop on Multimedia Challenges, Opportunities and Research Roadmaps. CoRR abs/1908.02308 (2019) - [i54]Xudong Lin, Lin Ma, Wei Liu, Shih-Fu Chang:
Context-Gated Convolution. CoRR abs/1910.05577 (2019) - [i53]Xudong Lin, Zheng Shou, Shih-Fu Chang:
LPAT: Learning to Predict Adaptive Threshold for Weakly-supervised Temporal Action Localization. CoRR abs/1910.11285 (2019) - [i52]Dídac Surís, Dave Epstein, Heng Ji, Shih-Fu Chang, Carl Vondrick:
Learning to Learn Words from Narrated Video. CoRR abs/1911.11237 (2019) - [i51]Shiyuan Huang, Xudong Lin, Svebor Karaman, Shih-Fu Chang:
Flow-Distilled IP Two-Stream Networks for Compressed Video Action Recognition. CoRR abs/1912.04462 (2019) - 2018
- [j107]Chitta Baral, Shih-Fu Chang, Brian Curless, Partha Dasgupta, Julia Hirschberg, Anita Jones:
Ask not what your postdoc can do for you ... Commun. ACM 61(1): 42-44 (2018) - [j106]Lamberto Ballan, Shih-Fu Chang, Gang Hua, Thomas Mensink, Greg Mori, Rahul Sukthankar:
Guest Editorial. Comput. Vis. Image Underst. 173: 1 (2018) - [j105]Yu-Gang Jiang, Zuxuan Wu, Jun Wang, Xiangyang Xue, Shih-Fu Chang:
Exploiting Feature and Class Relationships in Video Categorization with Regularized Deep Neural Networks. IEEE Trans. Pattern Anal. Mach. Intell. 40(2): 352-364 (2018) - [j104]Yinxiao Li, Yan Wang, Yonghao Yue, Danfei Xu, Michael Case, Shih-Fu Chang, Eitan Grinspun, Peter K. Allen:
Model-Driven Feedforward Prediction for Manipulation of Deformable Objects. IEEE Trans Autom. Sci. Eng. 15(4): 1621-1638 (2018) - [j103]B. Prabhakaran, Yu-Gang Jiang, Hari Kalva, Shih-Fu Chang:
Editorial IEEE Transactions on Multimedia Special Section on Video Analytics: Challenges, Algorithms, and Applications. IEEE Trans. Multim. 20(5): 1037 (2018) - [j102]Yu-Gang Jiang, Zuxuan Wu, Jinhui Tang, Zechao Li, Xiangyang Xue, Shih-Fu Chang:
Modeling Multimodal Clues in a Hybrid Deep Learning Framework for Video Classification. IEEE Trans. Multim. 20(11): 3137-3147 (2018) - [c313]Long Chen, Hanwang Zhang, Jun Xiao, Wei Liu, Shih-Fu Chang:
Zero-Shot Visual Recognition Using Semantics-Preserving Adversarial Embedding Networks. CVPR 2018: 1043-1052 - [c312]Hanwang Zhang, Yulei Niu, Shih-Fu Chang:
Grounding Referring Expressions in Images by Variational Context. CVPR 2018: 4158-4166 - [c311]Zheng Shou, Hang Gao, Lei Zhang, Kazuyuki Miyazawa, Shih-Fu Chang:
AutoLoc: Weakly-Supervised Temporal Action Localization in Untrimmed Videos. ECCV (16) 2018: 162-179 - [c310]Zheng Shou, Junting Pan, Jonathan Chan, Kazuyuki Miyazawa, Hassan Mansour, Anthony Vetro, Xavier Giró-i-Nieto, Shih-Fu Chang:
Online Detection of Action Start in Untrimmed, Streaming Videos. ECCV (3) 2018: 551-568 - [c309]Spencer Whitehead, Heng Ji, Mohit Bansal, Shih-Fu Chang, Clare R. Voss:
Incorporating Background Knowledge into Video Description Generation. EMNLP 2018: 3992-4001 - [c308]Di Lu, Spencer Whitehead, Lifu Huang, Heng Ji, Shih-Fu Chang:
Entity-aware Image Caption Generation. EMNLP 2018: 4013-4023 - [c307]Víctor Campos, Brendan Jou, Xavier Giró-i-Nieto, Jordi Torres, Shih-Fu Chang:
Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks. ICLR (Poster) 2018 - [c306]Hongzhi Li, Joseph G. Ellis, Lei Zhang, Shih-Fu Chang:
PatternNet: Visual Pattern Mining with Deep Neural Network. ICMR 2018: 291-299 - [c305]Xavier Alameda-Pineda, Miriam Redi, Nicu Sebe, Shih-Fu Chang, Jiebo Luo:
EE-USAD: ACM MM 2018 Workshop on Understanding Subjective Attributes of Data focus on Evoked Emotions. ACM Multimedia 2018: 2127-2128 - [c304]Hang Gao, Zheng Shou, Alireza Zareian, Hanwang Zhang, Shih-Fu Chang:
Low-shot Learning via Covariance-Preserving Adversarial Augmentation Networks. NeurIPS 2018: 983-993 - [e9]Shih-Fu Chang:
Frontiers of Multimedia Research. ACM / Morgan & Claypool 2018, ISBN 978-1-97000-107-5 [contents] - [i50]Tongtao Zhang, Ananya Subburathinam, Ge Shi, Lifu Huang, Di Lu, Xiaoman Pan, Manling Li, Boliang Zhang, Qingyun Wang, Spencer Whitehead, Heng Ji, Alireza Zareian, Hassan Akbari, Brian Chen, Ruiqi Zhong, Steven Shao, Emily Allaway, Shih-Fu Chang, Kathleen R. McKeown, Dongyu Li, Xin Huang, Kexuan Sun, Xujun Peng, Ryan Gabbard, Marjorie Freedman, Mayank Kejriwal, Ram Nevatia, Pedro A. Szekely, T. K. Satish Kumar, Ali Sadeghian, Giacomo Bergami, Sourav Dutta, Miguel E. Rodríguez, Daisy Zhe Wang:
GAIA - A Multi-media Multi-lingual Knowledge Extraction and Hypothesis Generation System. TAC 2018 - [i49]Zheng Shou, Junting Pan, Jonathan Chan, Kazuyuki Miyazawa, Hassan Mansour, Anthony Vetro, Xavier Giró-i-Nieto, Shih-Fu Chang:
Online Action Detection in Untrimmed, Streaming Videos - Modeling and Evaluation. CoRR abs/1802.06822 (2018) - [i48]Di Lu, Spencer Whitehead, Lifu Huang, Heng Ji, Shih-Fu Chang:
Entity-aware Image Caption Generation. CoRR abs/1804.07889 (2018) - [i47]Zheng Shou, Hang Gao, Lei Zhang, Kazuyuki Miyazawa, Shih-Fu Chang:
AutoLoc: Weakly-supervised Temporal Action Localization. CoRR abs/1807.08333 (2018) - [i46]Philipp Blandfort, Desmond Patton, William R. Frey, Svebor Karaman, Surabhi Bhargava, Fei-Tzin Lee, Siddharth Varia, Chris Kedzie, Michael B. Gaskell, Rossano Schifanella, Kathleen R. McKeown, Shih-Fu Chang:
Multimodal Social Media Analysis for Gang Violence Prevention. CoRR abs/1807.08465 (2018) - [i45]