52nd SIGGRAPH 2025: Vancouver, BC, Canada - Conference Paper Track
- Ginger Alford, Hao (Richard) Zhang, Adriana Schulz: Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH Conference Papers 2025, Vancouver, BC, Canada, August 10-14, 2025. ACM 2025, ISBN 979-8-4007-1540-2
Linear & Non-Linear AI Designs
- Lanjiong Li, Guanhua Zhao, Lingting Zhu, Zeyu Cai, Lequan Yu, Jian Zhang, Zeyu Wang: AssetDropper: Asset Extraction via Diffusion Models with Reward-Driven Optimization. 1:1-1:11
- Marcelo Sandoval-Castañeda, Bryan C. Russell, Josef Sivic, Gregory Shakhnarovich, Fabian Caba Heilbron: EditDuet: A Multi-Agent System for Video Non-Linear Editing. 2:1-2:11
Sample and Simulate: Make Some Noise
- Tianyu Huang, Jingwang Ling, Shuang Zhao, Feng Xu: Guiding-Based Importance Sampling for Walk on Stars. 3:1-3:12
Stabilize and Personalize Your Pixels
- Luozhou Wang, Ziyang Mai, Guibao Shen, Yixun Liang, Xin Tao, Pengfei Wan, Di Zhang, Yijun Li, Ying-Cong Chen: Motion Inversion for Video Customization. 4:1-4:12
- Zinuo You, Stamatios Georgoulis, Anpei Chen, Siyu Tang, Dengxin Dai: GaVS: 3D-Grounded Video Stabilization via Temporally-Consistent Local Reconstruction and Rendering. 5:1-5:12
- Or Patashnik, Rinon Gal, Daniil Ostashev, Sergey Tulyakov, Kfir Aberman, Daniel Cohen-Or: Nested Attention: Semantic-aware Attention Values for Concept Personalization. 6:1-6:12
- Rameen Abdal, Or Patashnik, Ivan Skorokhodov, Willi Menapace, Aliaksandr Siarohin, Sergey Tulyakov, Daniel Cohen-Or, Kfir Aberman: Dynamic Concepts Personalization from Single Videos. 7:1-7:9
- Shiyi Zhang, Junhao Zhuang, Zhaoyang Zhang, Ying Shan, Yansong Tang: FlexiAct: Towards Flexible Action Control in Heterogeneous Scenarios. 8:1-8:11
This is Fluid Simulation
- Jingrui Xing, Bin Wang, Mengyu Chu, Baoquan Chen: Gaussian Fluids: A Grid-Free Fluid Solver based on Gaussian Spatial Representation. 9:1-9:11
- Fumiya Narita, Nimiko Ochiai, Takashi Kanai, Ryoichi Ando: Quadtree Tall Cells for Eulerian Liquid Simulation. 10:1-10:11
All About Motion & Deformation
- Naoki Agata, Takeo Igarashi: Motion Control via Metric-Aligning Motion Matching. 11:1-11:12
- Bowen Zheng, Ke Chen, Yuxin Yao, Zijiao Zeng, Xinwei Jiang, He Wang, Joan Lasenby, Xiaogang Jin: AutoKeyframe: Autoregressive Keyframe Generation for Human Motion Synthesis and Editing. 12:1-12:12
- Inbar Gat, Sigal Raab, Guy Tevet, Yuval Reshef, Amit Haim Bermano, Daniel Cohen-Or: AnyTop: Character Animation Diffusion with Any Topology. 13:1-13:10
- Alvin Shi, Haomiao Wu, Theodore Kim: Hyper-Dimensional Deformation Simulation. 14:1-14:11
Moving, Seeing, Touching & Eating in VR
- Rafael Wampfler, Chen Yang, Dillon Elste, Nikola Kovacevic, Philine Witzig, Markus Gross: A Platform for Interactive AI Character Experiences. 15:1-15:11
- Fengqi Liu, Longji Huang, Zhengyu Huang, Zeyu Wang: Learning to Draw Is Learning to See: Analyzing Eye Tracking Patterns for Assisted Observational Drawing. 16:1-16:11
- Zhiming Hu, Daniel F. B. Haeufle, Syn Schmitt, Andreas Bulling: HOIGaze: Gaze Estimation During Hand-Object Interactions in Extended Reality Exploiting Eye-Hand-Head Coordination. 17:1-17:10
- Dongheun Han, Byungmin Kim, RoUn Lee, KyeongMin Kim, Hyoseok Hwang, HyeongYeop Kang: ForceGrip: Reference-Free Curriculum Learning for Realistic Grip Force Control in VR Hand Manipulation. 18:1-18:11
- Qingqin Liu, Ziqi Fang, Jiayi Wu, Shaoyu Cai, Jianhui Yan, Tiande Mo, Shuk Ching Chan, Kening Zhu: VirCHEW Reality: On-Face Kinesthetic Feedback for Enhancing Food-Intake Experience in Virtual Reality. 19:1-19:13
Reconstruct it All
- Nicolás Violante, Andreas Meuleman, Alban Gauthier, Frédo Durand, Thibault Groueix, George Drettakis: Splat and Replace: 3D Reconstruction with Repetitive Elements. 20:1-20:12
- Ziyi Zhang, Nicolas Roussel, Thomas Müller, Tizian Zeltner, Merlin Nimier-David, Fabrice Rousselle, Wenzel Jakob: Radiance Surfaces: Optimizing Surface Representations with a 5D Radiance Field Loss. 21:1-21:10
What's the Point?
- Maximilian Kohlbrenner, Marc Alexa: A Polyhedral Construction of Empty Spheres in Discrete Distance Fields. 22:1-22:10
Cloth & Other Thin Things
- Diyang Zhang, Zhendong Wang, Zegao Liu, Xinming Pei, Weiwei Xu, Huamin Wang: Physics-inspired Estimation of Optimal Cloth Mesh Resolution. 23:1-23:11
- Chengzhu He, Zhendong Wang, Zhaorui Meng, Junfeng Yao, Shihui Guo, Huamin Wang: Automated Task Scheduling for Cloth and Deformable Body Simulations in Heterogeneous Computing Environments. 24:1-24:11
- Yue Chang, Mengfei Liu, Zhecheng Wang, Peter Yichen Chen, Eitan Grinspun: Lifting the Winding Number: Precise Discontinuities in Neural Fields for Physics Simulation. 25:1-25:11
Get a Head
- Yiqian Wu, Malte Prinzler, Xiaogang Jin, Siyu Tang: Text-based Animatable 3D Avatars with Morphable Model Alignment. 26:1-26:11
- Yisheng He, Xiaodong Gu, Xiaodan Ye, Chao Xu, Zhengyi Zhao, Yuan Dong, Weihao Yuan, Zilong Dong, Liefeng Bo: LAM: Large Avatar Model for One-shot Animatable Gaussian Head. 27:1-27:13
- Tingting Liao, Yujian Zheng, Yuliang Xiu, Adilbek Karmanov, Liwen Hu, Leyang Jin, Hao Li: SOAP: Style-Omniscient Animatable Portraits. 28:1-28:11
- Youyang Du, Lu Wang, Beibei Wang: Facial Microscopic Structures Synthesis from a Single Unconstrained Image. 29:1-29:11
Learning and Shapes
- Changhao Li, Yu Xin, Xiaowei Zhou, Ariel Shamir, Hao Zhang, Ligang Liu, Ruizhen Hu: MASH: Masked Anchored SpHerical Distances for 3D Shape Representation and Generation. 30:1-30:11
- Chunyi Sun, Junlin Han, Runjia Li, Weijian Deng, Dylan Campbell, Stephen Gould: Unsupervised Decomposition of 3D Shapes into Expressive and Editable Extruded Profile Primitives. 31:1-31:10
- Si-Tong Wei, Rui-Huan Wang, Chuan-Zhi Zhou, Baoquan Chen, Peng-Shuai Wang: OctGPT: Octree-based Multiscale Autoregressive Models for 3D Shape Generation. 32:1-32:11
Beautiful Effects
- Kei Iwasaki, Yoshinori Dobashi: Spherical Lighting with Spherical Harmonics Hessian. 33:1-33:10
- Jiawei Huang, Shaokun Zheng, Kun Xu, Yoshifumi Kitamura, Jiaping Wang: Guided Lens Sampling for Efficient Monte Carlo Circle-of-Confusion Rendering. 34:1-34:10
- Jeffrey Liu, Daqi Lin, Markus Kettunen, Chris Wyman, Ravi Ramamoorthi: Reservoir Splatting for Temporal Path Resampling and Motion Blur. 35:1-35:11
- Xuejun Hu, Jinfan Lu, Kun Xu: Kernel Predicting Neural Shadow Maps. 36:1-36:10
- Naoto Shirashima, Hideki Todo, Yuki Yamaoka, Shizuo Kaji, Kunihiko Kobayashi, Haruna Shimotahira, Yonghao Yue: Stroke Transfer for Participating Media. 37:1-37:12
- Venkataram Edavamadathil Sivaram, Ravi Ramamoorthi, Tzu-Mao Li: Modeling and Rendering Glow Discharge. 38:1-38:11
Interactive Reality & Perception
- Karran Pandey, Anita Hu, Clement Fuji Tsang, Or Perel, Karan Singh, Maria Shugrina: Painting with 3D Gaussian Splat Brushes. 39:1-39:10
- Kenneth Chen, Nathan Matsuda, Jon McElvain, Yang Zhao, Thomas Wan, Qi Sun, Alexandre Chapiro: What is HDR? Perceptual Impact of Luminance and Contrast in Immersive Displays. 40:1-40:11
- Avinab Saha, Yu-Chih Chen, Jean-Charles Bazin, Christian Häne, Ioannis Katsavounidis, Alexandre Chapiro, Alan Bovik: FaceExpressions-70k: A Dataset of Perceived Expression Differences. 41:1-41:11
- Sophie Kergaßner, Taimoor Tariq, Piotr Didyk: Towards Understanding Depth Perception in Foveated Rendering. 42:1-42:9
Tailor Made
- Dewen Guo, Zhendong Wang, Zegao Liu, Sheng Li, Guoping Wang, Yin Yang, Huamin Wang: Fast Physics-Based Modeling of Knots and Ties using Templates. 43:1-43:9
- Zizhou Huang, Chrystiano Araújo, Andrew Kunz, Denis Zorin, Daniele Panozzo, Victor Zordan: Intersection-Free Garment Retargeting. 44:1-44:12
- Anran Qi, Nico Pietroni, Maria Korosteleva, Olga Sorkine-Hornung, Adrien Bousseau: Rags2Riches: Computational Garment Reuse. 45:1-45:11
- Yuki Tatsukawa, Anran Qi, I-Chao Shen, Takeo Igarashi: GarmentImage: Raster Encoding of Garment Sewing Patterns with Diverse Topologies. 46:1-46:11
- Ren Li, Cong Cao, Corentin Dumery, Yingxuan You, Hao Li, Pascal Fua: Single View Garment Reconstruction Using Diffusion Mapping Via Pattern Coordinates. 47:1-47:11
Illuminating Light
- Chong Zeng, Yue Dong, Pieter Peers, Hongzhi Wu, Xin Tong: RenderFormer: Transformer-based Neural Rendering of Triangle Meshes with Global Illumination. 48:1-48:11
- Shaohua Mo, Chuankun Zheng, Zihao Lin, Dianbing Xi, Qi Ye, Rui Wang, Hujun Bao, Yuchi Huo: Dual-Band Feature Fusion for Neural Global Illumination with Multi-Frequency Reflections. 49:1-49:11
- Zhi Zhou, Chao Li, Zhenyuan Zhang, Mingcong Tang, Zibin Li, Shuhang Luan, Zhangjin Huang: Gaussian Compression for Precomputed Indirect Illumination. 50:1-50:10
- Pedro Figueirêdo, Qihao He, Steve Bako, Nima Khademi Kalantari: Neural Importance Sampling of Many Lights. 51:1-51:10
Motion from X
- Sarah Taylor, Salvador Medina, Jonathan Windle, Erica Alcusa Sáez, Iain A. Matthews: xADA: Controllable and Expressive Audio-Driven Animation. 52:1-52:11
- Anindita Ghosh, Bing Zhou, Rishabh Dabral, Jian Wang, Vladislav Golyanik, Christian Theobalt, Philipp Slusallek, Chuan Guo: DuetGen: Music Driven Two-Person Dance Generation via Hierarchical Masked Modeling. 53:1-53:11
- Zhiping Qiu, Yitong Jin, Yuan Wang, Yi Shi, Chao Tan, Chongwu Wang, Xiaobing Li, Feng Yu, Tao Yu, Qionghai Dai: ELGAR: Expressive Cello Performance Motion Generation for Audio Rendition. 54:1-54:9
- Bohong Chen, Yumeng Li, Youyi Zheng, Yao-Xiang Ding, Kun Zhou: Motion-example-controlled Co-speech Gesture Generation Leveraging Large Language Models. 55:1-55:12
- Linjun Wu, Xiangjun Tang, Jingyuan Cong, He Wang, Bo Hu, Xu Gong, Songnan Li, Yuchen Liao, Yiqian Wu, Chen Liu, Xiaogang Jin: Semantically Consistent Text-to-Motion with Unsupervised Styles. 56:1-56:10
Cages, Deformation, & Interpolation
- Alon Feldman, Mirela Ben-Chen: On Planar Shape Interpolation With Logarithmic Metric Blending. 57:1-57:10
- Dong Xiao, Renjie Chen: Flexible 3D Cage-based Deformation via Green Coordinates on Bézier Patches. 58:1-58:10
- Michal Edelstein, Hsueh-Ti Derek Liu, Mirela Ben-Chen: CageNet: A Meta-Framework for Learning on Wild Meshes. 59:1-59:11
Video Generation
- Qinghe Wang, Yawen Luo, Xiaoyu Shi, Xu Jia, Huchuan Lu, Tianfan Xue, Xintao Wang, Pengfei Wan, Di Zhang, Kun Gai: CineMaster: A 3D-Aware and Controllable Framework for Cinematic Text-to-Video Generation. 60:1-60:10
- Jinbo Xing, Long Mai, Cusuh Ham, Jiahui Huang, Aniruddha Mahapatra, Chi-Wing Fu, Tien-Tsin Wong, Feng Liu: MotionCanvas: Cinematic Shot Design with Controllable Image-to-Video Generation. 61:1-61:11
- Zekai Gu, Rui Yan, Jiahao Lu, Peng Li, Zhiyang Dou, Chenyang Si, Zhen Dong, Qifeng Liu, Cheng Lin, Ziwei Liu, Wenping Wang, Yuan Liu: Diffusion as Shader: 3D-aware Video Diffusion for Versatile Video Generation Control. 62:1-62:12
- Xiuli Bi, Jianfei Yuan, Bo Liu, Yong Zhang, Xiaodong Cun, Chi-Man Pun, Bin Xiao: Mobius: Text to Seamless Looping Video Generation via Latent Shift. 63:1-63:10
- Sihui Ji, Hao Luo, Xi Chen, Yuanpeng Tu, Yiyang Wang, Hengshuang Zhao: LayerFlow: A Unified Model for Layer-aware Video Generation. 64:1-64:10
- Yongtao Ge, Kangyang Xie, Guangkai Xu, Li Ke, Mingyu Liu, Longtao Huang, Hui Xue, Hao Chen, Chunhua Shen: Generative Video Matting. 65:1-65:10
Differentiable & Inverse Rendering
- Kai Yan, Cheng Zhang, Sébastien Speierer, Guangyan Cai, Yufeng Zhu, Zhao Dong, Shuang Zhao: Image-space Adaptive Sampling for Fast Inverse Rendering. 66:1-66:11
- Jeongmin Gu, Bochang Moon: James-Stein Gradient Combiner for Inverse Monte Carlo Rendering. 67:1-67:10
Do it with Style & Fashion
- Yuxuan Zhang, Yirui Yuan, Yiren Song, Jiaming Liu: StableMakeup: When Real-World Makeup Transfer Meets Diffusion Model. 68:1-68:9
- Sihui Ji, Yiyang Wang, Xi Chen, Xiaogang Xu, Hao Luo, Hengshuang Zhao: FashionComposer: Compositional Fashion Image Generation. 69:1-69:10
- Liyuan Zhu, Shengqu Cai, Shengyu Huang, Gordon Wetzstein, Naji Khosravan, Iro Armeni: Scene-Level Appearance Transfer with Semantic Correspondences. 70:1-70:11
- Ipek Oztas, Duygu Ceylan, Aysegul Dundar: 3D Stylization via Large Reconstruction Model. 71:1-71:11
- Peiying Zhang, Nanxuan Zhao, Jing Liao: Style Customization of Text-to-Vector Generation with Image Diffusion Priors. 72:1-72:11
- Junhao Zhuang, Lingen Li, Xuan Ju, Zhaoyang Zhang, Chun Yuan, Ying Shan: Cobra: Efficient Line Art COlorization with BRoAder References. 73:1-73:11
Reconstruction & Neural Fields
- Mingyang Song, Yang Zhang, Marko Mihajlovic, Siyu Tang, Markus Gross, Tunç Ozan Aydin: Spline Deformation Field. 74:1-74:10
Stable & Accurate Elasticity
- Jerry Hsu, Tongtong Wang, Kui Wu, Cem Yuksel: Stable Cosserat Rods. 75:1-75:10
- Leticia Mattos Da Silva, Silvia Sellán, Natalia Pacheco-Tallaj, Justin Solomon: Variational Elastodynamic Simulation. 76:1-76:11
Diffusion & Generation
- Xueqi Ma, Yilin Liu, Tianlong Gao, Qirui Huang, Hui Huang: CLR-Wire: Towards Continuous Latent Representations for 3D Curve Wireframe Generation. 77:1-77:11
- Jionghao Wang, Cheng Lin, Yuan Liu, Rui Xu, Zhiyang Dou, Xiaoxiao Long, Haoxiang Guo, Taku Komura, Wenping Wang, Xin Li: PDT: Point Distribution Transformation with Diffusion Models. 78:1-78:11
- Xiao-Lei Li, Hao-Xiang Chen, Yanni Zhang, Kai Ma, Alan Zhao, Tai-Jiang Mu, Hao-Xiang Guo, Ran Zhang: RELATE3D: REfocusing Latent Adapter for Targeted local Enhancement and Editing in 3D Generation. 79:1-79:12
- Yansong Qu, Dian Chen, Xinyang Li, Xiaofan Li, Shengchuan Zhang, Liujuan Cao, Rongrong Ji: Drag Your Gaussian: Effective Drag-Based Editing with Score Distillation for 3D Gaussian Splatting. 80:1-80:12
- Peng Li, Suizhi Ma, Jialiang Chen, Yuan Liu, Congyi Zhang, Wei Xue, Wenhan Luo, Alla Sheffer, Wenping Wang, Yike Guo: CMD: Controllable Multiview Diffusion for 3D Editing and Progressive Generation. 81:1-81:10
- Ellie Arar, Yarden Frenkel, Daniel Cohen-Or, Ariel Shamir, Yael Vinker: SwiftSketch: A Diffusion Model for Image-to-Vector Sketch Generation. 82:1-82:12
Monte-Carlo Rendering & Sampling
- Corentin Salaün, Martin Bálint, Laurent Belcour, Eric Heitz, Gurprit Singh, Karol Myszkowski: Histogram Stratification for Spatio-Temporal Reservoir Sampling. 83:1-83:10
- Xiaochun Tong, Toshiya Hachisuka: Practical Stylized Nonlinear Monte Carlo Rendering. 84:1-84:11
Robots in the World
- Lucas N. Alegre, Agon Serifi, Ruben Grandia, David Müller, Espen Knoop, Moritz Bächer: AMOR: Adaptive Character Control through Multi-Objective Reinforcement Learning. 85:1-85:11
- Sean Memery, Kevin Denamganaï, Jiaxin Zhang, Zehai Tu, Yiwen Guo, Kartic Subr: CueTip: An Interactive and Explainable Physics-aware Pool Assistant. 86:1-86:11
- Mingfeng Tang, Ningna Wang, Ziyuan Xie, Jianwei Hu, Ke Xie, Xiaohu Guo, Hui Huang: Aerial Path Online Planning for Urban Scene Updation. 87:1-87:11
Avatars
- Yifang Pan, Karan Singh, Luiz Gustavo Hafemann: Model See Model Do: Speech-Driven Facial Animation with Style Control. 88:1-88:10
- Luchuan Song, Yang Zhou, Zhan Xu, Yi Zhou, Deepali Aneja, Chenliang Xu: StreamME: Simplify 3D Gaussian Avatar within Live Stream. 89:1-89:10
- Shivangi Aneja, Sebastian Weiss, Irene Baeza, Prashanth Chandran, Gaspard Zoss, Matthias Nießner, Derek Bradley: ScaffoldAvatar: High-Fidelity Gaussian Avatars with Patch Expressions. 90:1-90:11
- Forrest Iandola, Stanislav Pidhorskyi, Igor Santesteban, Divam Gupta, Anuj Pahuja, Nemanja Bartolovic, Frank Yu, Emanuel Garbin, Tomas Simon, Shunsuke Saito: SqueezeMe: Mobile-Ready Distillation of Gaussian Full-Body Avatars. 91:1-91:11
Deep Image Editing
- Omer Dahary, Yehonathan Cohen, Or Patashnik, Kfir Aberman, Daniel Cohen-Or: Be Decisive: Noise-Induced Layouts for Multi-Subject Generation. 92:1-92:12
- Amirhossein Alimohammadi, Aryan Mikaeili, Sauradip Nag, Negar Hassanpour, Andrea Tagliasacchi, Ali Mahdavi-Amiri: Cora: Correspondence-aware image editing using few step diffusion. 93:1-93:11
- Yen-Chi Cheng, Krishna Kumar Singh, Jae Shin Yoon, Alexander G. Schwing, Liang-Yan Gui, Matheus Gadelha, Paul Guerrero, Nanxuan Zhao: 3D-Fixup: Advancing Photo Editing with 3D Priors. 94:1-94:10
- Aleksandar Cvejic, Abdelrahman Eldesokey, Peter Wonka: PartEdit: Fine-Grained Image Editing using Pre-Trained Diffusion Models. 95:1-95:11
- Mia Tang, Yael Vinker, Chuan Yan, Lvmin Zhang, Maneesh Agrawala: Instance Segmentation of Scene Sketches Using Natural Image Priors. 96:1-96:10
Directions in Parameterization
- Jiabao Brad Wang, Amir Vaxman: Power-Linear Polar Directional Fields. 97:1-97:10
- Yuan-Yuan Cheng, Qing Fang, Ligang Liu, Xiao-Ming Fu: Divide-and-Conquer Embedding. 98:1-98:10
Splatting Bigger, Faster, and Adaptive
- Lei Lan, Tianjia Shao, Zixuan Lu, Yu Zhang, Chenfanfu Jiang, Yin Yang: 3DGS2: Near Second-order Converging 3D Gaussian Splatting. 99:1-99:10
- Xijie Yang, Linning Xu, Lihan Jiang, Dahua Lin, Bo Dai: Virtualized 3D Gaussians: Flexible Cluster-based Level-of-Detail System for Real-Time Rendering of Composed Scenes. 100:1-100:11
- Rong Liu, Dylan Sun, Meida Chen, Yue Wang, Andrew Feng: Deformable Beta Splatting. 101:1-101:11
- Yunxiang Zhang, Bingxuan Li, Alexandr Kuznetsov, Akshay Jindal, Stavros Diolatzis, Kenneth Chen, Anton Sochenov, Anton Kaplanyan, Qi Sun: Image-GS: Content-Adaptive Image Representation via 2D Gaussians. 102:1-102:11
CAD & B-Reps
- Mingi Lee, Dongsu Zhang, Clément Jambon, Young Min Kim: BrepDiff: Single-Stage B-rep Diffusion Model. 103:1-103:11
- Pu Li, Wenhao Zhang, Jinglu Chen, Dongming Yan: Stitch-A-Shape: Bottom-up Learning for B-Rep Generation. 104:1-104:12
Light & Relight
- Chris Careaga, Yagiz Aksoy: Physically Controllable Relighting of Photographs. 105:1-105:10
- Nadav Magar, Amir Hertz, Eric Tabellion, Yael Pritch, Alex Rav-Acha, Ariel Shamir, Yedid Hoshen: LightLab: Controlling Light Sources in Images with Diffusion Models. 106:1-106:11
- Mutian Tong, Rundi Wu, Changxi Zheng: Spatiotemporally Consistent Indoor Lighting Estimation with Diffusion Priors. 107:1-107:11
- Henglei Lv, Bailin Deng, Jianzhu Guo, Xiaoqiang Liu, Pengfei Wan, Di Zhang, Lin Gao: GSHeadRelight: Fast Relightability for 3D Gaussian Head Synthesis. 108:1-108:12
- Shuai Yang, Jing Tan, Mengchen Zhang, Tong Wu, Gordon Wetzstein, Ziwei Liu, Dahua Lin: LayerPano3D: Layered 3D Panorama for Hyper-Immersive Scene Generation. 109:1-109:10
Numerics & Parallelization
- Navid Ansari, Hans-Peter Seidel, Vahid Babaei: Accelerated Gamut Discovery via Massive Parallelization. 110:1-110:10
- Chunlei Li, Peng Yu, Tiantian Liu, Siyuan Yu, Yuting Xiao, Shuai Li, Aimin Hao, Yang Gao, Qinping Zhao: MGPBD: A Multigrid Accelerated Global XPBD Solver. 111:1-111:11
Rigging & Interaction
- Yufan Deng, Yuhao Zhang, Chen Geng, Shangzhe Wu, Jiajun Wu: Anymate: A Dataset and Baselines for Learning 3D Object Rigging. 112:1-112:10
- Wenning Xu, Shiyu Fan, Paul Henderson, Edmond S. L. Ho: Multi-Person Interaction Generation from Two-Person Motion Priors. 113:1-113:11
- Ziyi Chang, He Wang, George Alex Koulieris, Hubert P. H. Shum: Large-Scale Multi-Character Interaction Synthesis. 114:1-114:10
- Runyi Yu, Yinhuai Wang, Qihan Zhao, Hok Wai Tsui, Jingbo Wang, Ping Tan, Qifeng Chen: SkillMimic-V2: Learning Robust and Generalizable Interaction Skills from Sparse and Noisy Demonstrations. 115:1-115:11
Gaussian Reconstruction
- Minghao Yin, Yukang Cao, Songyou Peng, Kai Han: Splat4D: Diffusion-Enhanced 4D Gaussian Splatting for Temporally and Spatially Consistent Content Creation. 116:1-116:10
- Zhaoyang Lv, Maurizio Monge, Ka Chen, Yufeng Zhu, Michael Goesele, Jakob J. Engel, Zhao Dong, Richard A. Newcombe: Photoreal Scene Reconstruction from an Egocentric Device. 117:1-117:11
- Songyin Wu, Zhaoyang Lv, Yufeng Zhu, Duncan P. Frost, Zhengqin Li, Ling-Qi Yan, Carl Yuheng Ren, Richard A. Newcombe, Zhao Dong: Monocular Online Reconstruction with Enhanced Detail Preservation. 118:1-118:11
Image Representation, Editing, & Generation
- Elad Richardson, Yuval Alaluf, Ali Mahdavi-Amiri, Daniel Cohen-Or: pOps: Photo-Inspired Diffusion Operators. 119:1-119:12
- Eric Chen, Ziga Kovacic, Madhav Aggarwal, Abe Davis: Pocket Time-Lapse. 120:1-120:10
- Etai Sella, Yanir Kleiman, Hadar Averbuch-Elor: InstanceGen: Image Generation with Instance-level Instructions. 121:1-121:10
- Yuxin Zhang, Minyan Luo, Weiming Dong, Xiao Yang, Haibin Huang, Chongyang Ma, Oliver Deussen, Tong-Yee Lee, Changsheng Xu: IP-Prompter: Training-Free Theme-Specific Image Generation via Dynamic Visual Prompting. 122:1-122:12
- Yuanpeng Tu, Xi Chen, Ser-Nam Lim, Hengshuang Zhao: DreamMask: Boosting Open-vocabulary Panoptic Segmentation with Synthetic Data. 123:1-123:11
- Sara Dorfman, Dana Cohen-Bar, Rinon Gal, Daniel Cohen-Or: IP-Composer: Semantic Composition of Visual Concepts. 124:1-124:11
Microstructures & Materials
- Tianyang Xue, Longdu Liu, Lin Lu, Paul Henderson, Pengbin Tang, Haochen Li, Jikai Liu, Haisen Zhao, Hao Peng, Bernd Bickel: MIND: Microstructure INverse Design with Generative Hybrid Neural Representation. 125:1-125:12
- Maxine Perroni-Scharf, Zachary Ferguson, Thomas Butruille, Carlos M. Portela, Mina Konakovic-Lukovic: Data-Efficient Discovery of Hyperelastic TPMS Metamaterials with Extreme Energy Dissipation. 126:1-126:12
- Pengbin Tang, Bernhard Thomaszewski, Stelian Coros, Bernd Bickel: Inverse Design of Discrete Interlocking Materials with Desired Mechanical Behavior. 127:1-127:11
- Aviv Segall, Jing Ren, Martin Schwarz, Olga Sorkine-Hornung: Computational Modeling of Gothic Microarchitecture. 128:1-128:11
- András Simon, Danwu Chen, Philipp Urban, Vincent Duveiller, Henning Lübbe: Color Matching and Biomimicry for Multi-Material Dental 3D Printing. 129:1-129:11
Physics-Based Human Characters
- Jungnam Park, Euikyun Jung, Jehee Lee, Jungdam Won: MAGNET: Muscle Activation Generation Networks for Diverse Human Movement. 130:1-130:11
- Michael Xu, Yi Shi, KangKang Yin, Xue Bin Peng: PARC: Physics-based Augmentation with Reinforcement Learning for Character Controllers. 131:1-131:11
- Jinseok Bae, Younghwan Lee, Donggeun Lim, Young Min Kim: PLT: Part-Wise Latent Tokens as Adaptable Motion Priors for Physically Simulated Characters. 132:1-132:10
Beautiful Materials
- Michael Birsak, John Femiani, Biao Zhang, Peter Wonka: MatCLIP: Light- and Shape-Insensitive Assignment of PBR Material Models. 133:1-133:10
- Liwen Wu, Fujun Luan, Milos Hasan, Ravi Ramamoorthi: Position-Normal Manifold for Efficient Glint Rendering on High-Resolution Normal Maps. 134:1-134:11
- Laurent Belcour, Alban Fichet, Pascal Barla: A Fluorescent Material Model for Non-Spectral Editing & Rendering. 135:1-135:9
Filtering, Super-Res, & Visual Quality
- Louis Sugy: A Fast Parallel Median Filtering Algorithm Using Hierarchical Tiling. 136:1-136:8
- Ben Weiss: Fast Isotropic Median Filtering. 137:1-137:10
- Xiang Zhang, Yang Zhang, Lukas Mehl, Markus Gross, Christopher Schroers: High-Fidelity Novel View Synthesis via Splatting-Guided Diffusion. 138:1-138:11
- Youngsik Yun, Jeongmin Bae, Hyun Seung Son, Seoha Kim, Hahyun Lee, Gun Bang, Youngjung Uh: Compensating Spatiotemporally Inconsistent Observations for Online Dynamic 3D Gaussian Splatting. 139:1-139:9
- Zhe Kong, Le Li, Yong Zhang, Feng Gao, Shaoshu Yang, Tao Wang, Kaihao Zhang, Zhuoliang Kang, Xiaoming Wei, Guanying Chen, Wenhan Luo: DAM-VSR: Disentanglement of Appearance and Motion for Video Super-Resolution. 140:1-140:11
- Janghyeok Han, Gyujin Sim, Geonung Kim, Hyun-Seung Lee, Kyuha Choi, Youngseok Han, Sunghyun Cho: DC-VSR: Spatially and Temporally Consistent Video Super-Resolution with Video Diffusion Prior. 141:1-141:11
Lightning Fast Geometry
- Abhishek Madan, Nicholas Sharp, Francis Williams, Ken Museth, David I. W. Levin: Stochastic Barnes-Hut Approximation for Fast Summation on the GPU. 142:1-142:10
The Shape of You
- Zhengming Yu, Tianye Li, Jingxiang Sun, Omer Shapira, Seonwook Park, Michael Stengel, Matthew A. Chan, Xin Li, Wenping Wang, Koki Nagano, Shalini De Mello: GAIA: Generative Animatable Interactive Avatars with Expression-conditioned Gaussians. 143:1-143:10
- Gengyan Li, Paulo F. U. Gotardo, Timo Bolkart, Stephan J. Garbin, Kripasindhu Sarkar, Abhimitra Meka, Alexandros Lattas, Thabo Beeler: TeGA: Texture Space Gaussian Avatars for High-Resolution Dynamic Head Modeling. 144:1-144:12
- Howard Zhang, Yuval Alaluf, Sizhuo Ma, Achuta Kadambi, Jian Wang, Kfir Aberman: InstantRestore: Single-Step Personalized Face Restoration with Shared-Image Attention. 145:1-145:10
- Shaofei Wang, Tomas Simon, Igor Santesteban, Timur M. Bagautdinov, Junxuan Li, Vasu Agrawal, Fabian Prada, Shoou-I Yu, Pace Nalbone, Matt Gramlich, Roman Lubachersky, Chenglei Wu, Javier Romero, Jason M. Saragih, Michael Zollhöfer, Andreas Geiger, Siyu Tang, Shunsuke Saito: Relightable Full-Body Gaussian Codec Avatars. 146:1-146:12
- Hendrik Junkawitsch, Guoxing Sun, Heming Zhu, Christian Theobalt, Marc Habermann: EVA: Expressive Virtual Avatars from Multi-view Videos. 147:1-147:11
Loco/Motion Capture
- Siyuan Shen, Tianjia Shao, Kun Zhou, Chenfanfu Jiang, Sheldon Andrews, Victor B. Zordan, Yin Yang: Elastic Locomotion with Mixed Second-order Differentiation. 148:1-148:11
- Zhiyuan Yu, Zhe Li, Hujun Bao, Can Yang, Xiaowei Zhou: HumanRAM: Feed-forward Human Reconstruction and Animation Model using Transformers. 149:1-149:13
Meshes: Extract & Repair
- Huibiao Wen, Guilong He, Rui Xu, Shuangmin Chen, Shiqing Xin, Zhenyu Shu, Taku Komura, Jieqing Feng, Wenping Wang, Changhe Tu: Feature-Preserving Mesh Repair via Restricted Power Diagram. 150:1-150:11
Very Vivid Videos
- Yuanpeng Tu, Hao Luo, Xi Chen, Sihui Ji, Xiang Bai, Hengshuang Zhao: VideoAnydoor: High-fidelity Video Object Insertion with Precise Motion Control. 151:1-151:11
- Feng-Lin Liu, Shi-Yang Li, Yan-Pei Cao, Hongbo Fu, Lin Gao: Sketch3DVE: Sketch-based 3D-Aware Scene Video Editing. 152:1-152:12
- Yuxuan Bian, Zhaoyang Zhang, Xuan Ju, Mingdeng Cao, Liangbin Xie, Ying Shan, Qiang Xu: VideoPainter: Any-length Video Inpainting and Editing with Plug-and-Play Context Control. 153:1-153:12
- Hongbo Zhao, Jiaxing Li, Peiyi Zhang, Peng Xiao, Jianxin Lin, Yijun Wang: ColorSurge: Bringing Vibrancy and Efficiency to Automatic Video Colorization via Dual-Branch Fusion. 154:1-154:11
- Karlis Martins Briedis, Abdelaziz Djelouah, Raphaël Ortiz, Markus Gross, Christopher Schroers: Controllable Tracking-Based Video Frame Interpolation. 155:1-155:11
- Manuel Kansy, Jacek Naruniec, Christopher Schroers, Markus Gross, Romann M. Weber: Reenact Anything: Semantic Video Motion Transfer Using Motion-Textual Inversion. 156:1-156:12
Geometry in Action
- Bosheng Li, Nikolas Alexander Schwarz, Wojtek Palubicki, Sören Pirk, Dominik L. Michels, Bedrich Benes: Stressful Tree Modeling: Breaking Branches with Strands. 157:1-157:11
- Rahul Mitra, Mattéo Couplet, Tongtong Wang, Megan Hoffman, Kui Wu, Edward Chien: Curl Quantization for Automatic Placement of Knit Singularities. 158:1-158:10
- Fanchao Zhong, Yang Wang, Peng-Shuai Wang, Lin Lu, Haisen Zhao: DeepMill: Neural Accessibility Learning for Subtractive Manufacturing. 159:1-159:11
- Weizheng Zhang, Hao Pan, Lin Lu, Xiaowei Duan, Xin Yan, Ruonan Wang, Qiang Du: DualMS: Implicit Dual-Channel Minimal Surface Optimization for Heat Exchanger Design. 160:1-160:10
Neural Materials & LOD
- Zilin Xu, Xiang Chen, Chen Liu, Beibei Wang, Lu Wang, Zahra Montazeri, Ling-Qi Yan: Towards Comprehensive Neural Materials: Dynamic Structure-Preserving Synthesis with Accurate Silhouette at Instant Inference Speed. 161:1-161:11
- Nithin Raghavan, Krishna Mullia, Alexander Trevithick, Fujun Luan, Milos Hasan, Ravi Ramamoorthi: Generative Neural Materials. 162:1-162:11
- Liwen Wu, Sai Bi, Zexiang Xu, Hao Tan, Kai Zhang, Fujun Luan, Haolin Lu, Ravi Ramamoorthi: Neural BRDF Importance Sampling by Reparameterization. 163:1-163:11
- Saeed Hadadan, Benedikt Bitterli, Tizian Zeltner, Jan Novák, Fabrice Rousselle, Jacob Munkberg, Jon Hasselgren, Bartlomiej Wronski, Matthias Zwicker: Generative detail enhancement for physically based materials. 164:1-164:11
- Nuri Ryu, Jiyun Won, Jooeun Son, Minsu Gong, Joo-Haeng Lee, Sunghyun Cho: Elevating 3D Models: High-Quality Texture and Geometry Refinement from a Low-Quality Model. 165:1-165:12
- Crane He Chen, Vladimir G. Kim: Escher Tile Deformation via Closed-Form Solution. 166:1-166:11
More Than it Looks Like
- Kechun Wang, Renjie Chen: PaRas: A Rasterizer for Large-Scale Parametric Surfaces. 167:1-167:9
- Pengfei Zhu, Jie Guo, Yifan Liu, Qi Sun, Yanxiang Wang, Keheng Xu, Ligang Liu, Yanwen Guo: Appearance-aware Multi-view SVBRDF Reconstruction via Deep Reinforcement Learning. 168:1-168:11
- Ruben Wiersma, Julien Philip, Milos Hasan, Krishna Mullia, Fujun Luan, Elmar Eisemann, Valentin Deschaintre: Uncertainty for SVBRDF Acquisition using Frequency Analysis. 169:1-169:12
- Rachel McDonnell, Bharat Vyas, Uros Sikimic, Pisut Wisessing: Feeling Blue or Seeing Red? Investigating the effect of light color, shadow and realism on the perception of emotion of real and virtual humans. 170:1-170:11
Packing in Structure
- Jingwen Ye, Yuze He, Yanning Zhou, Yiqin Zhu, Kaiwen Xiao, Yong-Jin Liu, Wei Yang, Xiao Han: PrimitiveAnything: Human-Crafted 3D Primitive Assembly Generation with Auto-Regressive Transformer. 171:1-171:12
- Zhenyu Wang, Min Lu: Image-Space Collage and Packing with Differentiable Rendering. 172:1-172:11
- Junming Huang, Chi Wang, Letian Li, Changxin Huang, Qiang Dai, Weiwei Xu: BuildingBlock: A Hybrid Approach for Structured Building Generation. 173:1-173:11
- Davide Sforza, Marzia Riso, Filippo Muzzini, Nicola Capodieci, Fabio Pellacini: Interactive Optimization of Scaffolded Procedural Patterns. 174:1-174:11