Chengchun Shi
2020 – today
2024
- [c15] Jin Zhu, Runzhe Wan, Zhengling Qi, Shikai Luo, Chengchun Shi: Robust Offline Reinforcement Learning with Heavy-Tailed Rewards. AISTATS 2024: 541-549
- [c14] Ting Li, Chengchun Shi, Qianglin Wen, Yang Sui, Yongli Qin, Chunbo Lai, Hongtu Zhu: Combining Experimental and Historical Data for Policy Evaluation. ICML 2024
- [i35] Danyang Wang, Chengchun Shi, Shikai Luo, Will Wei Sun: Pessimistic Causal Reinforcement Learning with Mediators for Confounded Offline Data. CoRR abs/2403.11841 (2024)
- [i34] Ting Li, Chengchun Shi, Qianglin Wen, Yang Sui, Yongli Qin, Chunbo Lai, Hongtu Zhu: Combining Experimental and Historical Data for Policy Evaluation. CoRR abs/2406.00317 (2024)
- [i33] Meiling Hao, Pingfan Su, Liyuan Hu, Zoltan Szabo, Qingyuan Zhao, Chengchun Shi: Forward and Backward State Abstractions for Off-policy Evaluation. CoRR abs/2406.19531 (2024)
- [i32] Runpeng Dai, Jianing Wang, Fan Zhou, Shikai Luo, Zhiwei Qin, Chengchun Shi, Hongtu Zhu: Causal Deepsets for Off-policy Evaluation under Spatial or Spatio-temporal Interferences. CoRR abs/2407.17910 (2024)
2023
- [j5] Hengrui Cai, Chengchun Shi, Rui Song, Wenbin Lu: Jump Interval-Learning for Individualized Decision Making with Continuous Treatments. J. Mach. Learn. Res. 24: 140:1-140:92 (2023)
- [c13] Yingying Zhang, Chengchun Shi, Shikai Luo: Conformal Off-Policy Prediction. AISTATS 2023: 2751-2768
- [c12] Yunzhe Zhou, Zhengling Qi, Chengchun Shi, Lexin Li: Optimizing Pessimism in Dynamic Treatment Regimes: A Bayesian Learning Approach. AISTATS 2023: 6704-6721
- [c11] Jing-Jing Li, Chengchun Shi, Lexin Li, Anne G. E. Collins: A generalized method for dynamic noise inference in modeling sequential decision-making. CogSci 2023
- [c10] Lin Ge, Jitao Wang, Chengchun Shi, Zhenke Wu, Rui Song: A Reinforcement Learning Framework for Dynamic Mediation Analysis. ICML 2023: 11050-11097
- [c9] Jitao Wang, Chengchun Shi, Zhenke Wu: A Robust Test for the Stationarity Assumption in Sequential Decision Making. ICML 2023: 36355-36379
- [c8] Yang Xu, Jin Zhu, Chengchun Shi, Shikai Luo, Rui Song: An Instrumental Variable Approach to Confounded Off-Policy Evaluation. ICML 2023: 38848-38880
- [c7] Guojun Wu, Ge Song, Xiaoxiang Lv, Shikai Luo, Chengchun Shi, Hongtu Zhu: DNet: Distributional Network for Distributional Individualized Treatment Effects. KDD 2023: 5215-5224
- [c6] Ting Li, Chengchun Shi, Jianing Wang, Fan Zhou, Hongtu Zhu: Optimal Treatment Allocation for Efficient Policy Evaluation in Sequential Decision Making. NeurIPS 2023
- [c5] Masatoshi Uehara, Haruka Kiyohara, Andrew Bennett, Victor Chernozhukov, Nan Jiang, Nathan Kallus, Chengchun Shi, Wen Sun: Future-Dependent Value-Based Off-Policy Evaluation in POMDPs. NeurIPS 2023
- [i31] Yuhe Gao, Chengchun Shi, Rui Song: Deep Spectral Q-learning with Application to Mobile Health. CoRR abs/2301.00927 (2023)
- [i30] Chengchun Shi, Zhengling Qi, Jianing Wang, Fan Zhou: Value Enhancement of Reinforcement Learning via Efficient and Robust Trust Region Optimization. CoRR abs/2301.02220 (2023)
- [i29] Lin Ge, Jitao Wang, Chengchun Shi, Zhenke Wu, Rui Song: A Reinforcement Learning Framework for Dynamic Mediation Analysis. CoRR abs/2301.13348 (2023)
- [i28] Tao Ma, Hengrui Cai, Zhengling Qi, Chengchun Shi, Eric B. Laber: Sequential Knockoffs for Variable Selection in Reinforcement Learning. CoRR abs/2303.14281 (2023)
- [i27] Ting Li, Chengchun Shi, Zhaohua Lu, Yi Li, Hongtu Zhu: Evaluating Dynamic Conditional Quantile Treatment Effects with Applications in Ridesharing. CoRR abs/2305.10187 (2023)
- [i26] Yunzhe Zhou, Chengchun Shi, Lexin Li, Qiwei Yao: Testing for the Markov Property in Time Series via Deep Conditional Generative Learning. CoRR abs/2305.19244 (2023)
- [i25] Zeyu Bian, Chengchun Shi, Zhengling Qi, Lan Wang: Off-policy Evaluation in Doubly Inhomogeneous Environments. CoRR abs/2306.08719 (2023)
- [i24] Jin Zhu, Runzhe Wan, Zhengling Qi, Shikai Luo, Chengchun Shi: Robust Offline Policy Evaluation and Optimization with Heavy-Tailed Rewards. CoRR abs/2310.18715 (2023)
2022
- [c4] Chengchun Shi, Masatoshi Uehara, Jiawei Huang, Nan Jiang: A Minimax Learning Approach to Off-Policy Evaluation in Confounded Partially Observable Markov Decision Processes. ICML 2022: 20057-20094
- [i23] Chengchun Shi, Runzhe Wan, Ge Song, Shikai Luo, Rui Song, Hongtu Zhu: A Multi-Agent Reinforcement Learning Framework for Off-Policy Evaluation in Two-sided Markets. CoRR abs/2202.10574 (2022)
- [i22] Chengchun Shi, Jin Zhu, Ye Shen, Shikai Luo, Hongtu Zhu, Rui Song: Off-Policy Confidence Interval Estimation with Confounded Markov Decision Process. CoRR abs/2202.10589 (2022)
- [i21] Shikai Luo, Ying Yang, Chengchun Shi, Fang Yao, Jieping Ye, Hongtu Zhu: Policy Evaluation for Temporal and/or Spatial Dependent Experiments in Ride-sourcing Platforms. CoRR abs/2202.10887 (2022)
- [i20] Chengchun Shi, Shikai Luo, Hongtu Zhu, Rui Song: Statistically Efficient Advantage Learning for Offline Reinforcement Learning in Infinite Horizons. CoRR abs/2202.13163 (2022)
- [i19] Mengbing Li, Chengchun Shi, Zhenke Wu, Piotr Fryzlewicz: Reinforcement Learning in Possibly Nonstationary Environments. CoRR abs/2203.01707 (2022)
- [i18] Yingying Zhang, Chengchun Shi, Shikai Luo: Conformal Off-Policy Prediction. CoRR abs/2206.06711 (2022)
- [i17] Masatoshi Uehara, Haruka Kiyohara, Andrew Bennett, Victor Chernozhukov, Nan Jiang, Nathan Kallus, Chengchun Shi, Wen Sun: Future-Dependent Value-Based Off-Policy Evaluation in POMDPs. CoRR abs/2207.13081 (2022)
- [i16] Jiayi Wang, Zhengling Qi, Chengchun Shi: Blessing from Experts: Super Reinforcement Learning in Confounded Environments. CoRR abs/2209.15448 (2022)
- [i15] Yunzhe Zhou, Zhengling Qi, Chengchun Shi, Lexin Li: Optimizing Pessimism in Dynamic Treatment Regimes: A Bayesian Learning Approach. CoRR abs/2210.14420 (2022)
- [i14] Liyuan Hu, Mengbing Li, Chengchun Shi, Zhenke Wu, Piotr Fryzlewicz: Doubly Inhomogeneous Reinforcement Learning. CoRR abs/2211.03983 (2022)
- [i13] Masatoshi Uehara, Chengchun Shi, Nathan Kallus: A Review of Off-Policy Evaluation in Reinforcement Learning. CoRR abs/2212.06355 (2022)
- [i12] Yang Xu, Chengchun Shi, Shikai Luo, Lan Wang, Rui Song: Quantile Off-Policy Evaluation via Deep Conditional Generative Learning. CoRR abs/2212.14466 (2022)
- [i11] Yang Xu, Jin Zhu, Chengchun Shi, Shikai Luo, Rui Song: An Instrumental Variable Approach to Confounded Off-Policy Evaluation. CoRR abs/2212.14468 (2022)
2021
- [j4] Chengchun Shi, Tianlin Xu, Wicher Bergsma, Lexin Li: Double Generative Adversarial Networks for Conditional Independence Testing. J. Mach. Learn. Res. 22: 285:1-285:32 (2021)
- [j3] Chengchun Shi, Shikai Luo, Hongtu Zhu, Rui Song: An Online Sequential Test for Qualitative Treatment Effects. J. Mach. Learn. Res. 22: 286:1-286:51 (2021)
- [c3] Chengchun Shi, Runzhe Wan, Victor Chernozhukov, Rui Song: Deeply-Debiased Off-Policy Interval Estimation. ICML 2021: 9580-9591
- [c2] Hengrui Cai, Chengchun Shi, Rui Song, Wenbin Lu: Deep Jump Learning for Off-Policy Evaluation in Continuous Treatment Settings. NeurIPS 2021: 15285-15300
- [i10] Chengchun Shi, Runzhe Wan, Victor Chernozhukov, Rui Song: Deeply-Debiased Off-Policy Interval Estimation. CoRR abs/2105.04646 (2021)
- [i9] Runzhe Wan, Sheng Zhang, Chengchun Shi, Shikai Luo, Rui Song: Pattern Transfer Learning for Reinforcement Learning in Order Dispatching. CoRR abs/2105.13218 (2021)
- [i8] Chengchun Shi, Yunzhe Zhou, Lexin Li: Testing Directed Acyclic Graph via Structural, Supervised and Generative Adversarial Learning. CoRR abs/2106.01474 (2021)
- [i7] Chengchun Shi, Masatoshi Uehara, Nan Jiang: A Minimax Learning Approach to Off-Policy Evaluation in Partially Observable Markov Decision Processes. CoRR abs/2111.06784 (2021)
- [i6] Hengrui Cai, Chengchun Shi, Rui Song, Wenbin Lu: Jump Interval-Learning for Individualized Decision Making. CoRR abs/2111.08885 (2021)
2020
- [j2] Chengchun Shi, Wenbin Lu, Rui Song: Breaking the Curse of Nonregularity with Subagging - Inference of the Mean Outcome under Optimal Treatment Regimes. J. Mach. Learn. Res. 21: 176:1-176:67 (2020)
- [c1] Chengchun Shi, Runzhe Wan, Rui Song, Wenbin Lu, Ling Leng: Does the Markov Decision Process Fit the Data: Testing for the Markov Property in Sequential Decision Making. ICML 2020: 8807-8817
- [i5] Chengchun Shi, Sheng Zhang, Wenbin Lu, Rui Song: Statistical Inference of the Value Function for Reinforcement Learning in Infinite Horizon Settings. CoRR abs/2001.04515 (2020)
- [i4] Chengchun Shi, Xiaoyu Wang, Shikai Luo, Rui Song, Hongtu Zhu, Jieping Ye: A Reinforcement Learning Framework for Time-Dependent Causal Effects Evaluation in A/B Testing. CoRR abs/2002.01711 (2020)
- [i3] Chengchun Shi, Runzhe Wan, Rui Song, Wenbin Lu, Ling Leng: Does the Markov Decision Process Fit the Data: Testing for the Markov Property in Sequential Decision Making. CoRR abs/2002.01751 (2020)
- [i2] Chengchun Shi, Tianlin Xu, Wicher Bergsma, Lexin Li: Double Generative Adversarial Networks for Conditional Independence Testing. CoRR abs/2006.02615 (2020)
- [i1] Hengrui Cai, Chengchun Shi, Rui Song, Wenbin Lu: Deep Jump Q-Evaluation for Offline Policy Evaluation in Continuous Action Space. CoRR abs/2010.15963 (2020)
2010 – 2019
2019
- [j1] Chengchun Shi, Wenbin Lu, Rui Song: Determining the Number of Latent Factors in Statistical Multi-Relational Learning. J. Mach. Learn. Res. 20: 23:1-23:38 (2019)
last updated on 2024-11-05 20:59 CET by the dblp team
all metadata released as open data under CC0 1.0 license