18th CAD/Graphics 2023: Shanghai, China
- Shi-Min Hu, Yiyu Cai, Paul L. Rosin: Computer-Aided Design and Computer Graphics - 18th International Conference, CAD/Graphics 2023, Shanghai, China, August 19-21, 2023, Proceedings. Lecture Notes in Computer Science 14250, Springer 2024, ISBN 978-981-99-9665-0
- Junqi Diao, Haiyong Jiang, Feilong Yan, Yong Zhang, Jinhui Luan, Jun Xiao: Unsupervised 3D Articulated Object Correspondences with Part Approximation and Shape Refinement. 1-15
- Cuihong Yu, Cheng Han, Qi Zhang, Chao Zhang: MsF-HigherHRNet: Multi-scale Feature Fusion for Human Pose Estimation in Crowded Scenes. 16-29
- Jiakang Deng, De Xing, Cheng Chen, Yongguo Han, Jianqiang Chen: FFANet: Dual Attention-Based Flow Field Aware Network for 3D Grid Classification and Segmentation. 30-44
- Shuo-Peng Chen, Hong-Yu Ma, Li-Yong Shen, Chun-Ming Yuan: A Lightweight Model for Feature Points Recognition of Tool Path Based on Deep Learning. 45-59
- Bin Fang, Ran Yi, Lizhuang Ma: Image Fusion Based on Feature Decoupling and Proportion Preserving. 60-74
- Xiang Wang, Lin Li, Liang He, Xiaoping Liu: An Irregularly Shaped Plane Layout Generation Method with Boundary Constraints. 75-89
- Riccardo Tonello, Md. Tusher Mollah, Kenneth Weiss, Jon Spangenberg, Are Strandlie, David Bue Pedersen, Jeppe Revall Frisvad: Influence of the Printing Direction on the Surface Appearance in Multi-material Fused Filament Fabrication. 90-107
- Yanhui Sun, Zhangjin Huang: StrongOC-SORT: Make Observation-Centric SORT More Robust. 108-122
- Wufan Wang, Lei Zhang: Semi-direct Sparse Odometry with Robust and Accurate Pose Estimation for Dynamic Scenes. 123-137
- Chuxia Yang, Wanshu Fan, Ziqi Wei, Xin Yang, Qiang Zhang, Dongsheng Zhou: Parallel Dense Vision Transformer and Augmentation Network for Occluded Person Re-identification. 138-153
- Liqing Gao, Peidong Liu, Liang Wan, Wei Feng: Spatial-Temporal Consistency Constraints for Chinese Sign Language Synthesis. 154-169
- Minjing Yu, Ting Liu, Jeffrey Too Chuan Tan, Yong-Jin Liu: An Easy-to-Build Modular Robot Implementation of Chain-Based Physical Transformation for STEM Education. 170-185
- Yanqiu Li, Yanan Liu, Hao Zhang, Shouzheng Sun, Dan Xu: Skeleton-Based Human Action Recognition via Multi-Knowledge Flow Embedding Hierarchically Decomposed Graph Convolutional Network. 186-199
- Yu He, Yi-Han Jin, Ying-Tian Liu, Baoli Lu, Ge Yu: Color-Correlated Texture Synthesis for Hybrid Indoor Scenes. 200-214
- Runze Yang, Shi Chen, Gang Xu, Shanshan Gao, Yuanfeng Zhou: Metaballs-Based Real-Time Elastic Object Simulation via Projective Dynamics. 215-234
- Chenbin Li, Yu Xin, Gaoyi Liu, Xiang Zeng, Ligang Liu: NeRF Synthesis with Shading Guidance. 235-249
- Fuli Wu, Lijie Chen, Bin Feng, Pengyi Hao: Multi-scale Hybrid Transformer Network with Grouped Convolutional Embedding for Automatic Cephalometric Landmark Detection. 250-265
- Hao Yang, Haijia Sun, Qianyu Zhou, Ran Yi, Lizhuang Ma: ZDL: Zero-Shot Degradation Factor Learning for Robust and Efficient Image Enhancement. 266-280
- Shengjin Ma, Wang Yuan, Yiting Wang, Xin Tan, Zhizhong Zhang, Lizhuang Ma: Self-supervised Contrastive Feature Refinement for Few-Shot Class-Incremental Learning. 281-294
- Huzhiyuan Long, Yufan Yang, Chang Liu, Jinyuan Jia: SRSSIS: Super-Resolution Screen Space Irradiance Sampling for Lightweight Collaborative Web3D Rendering Architecture. 295-313
- Xu-Qiang Hu, Yu-Ping Wang: QuadSampling: A Novel Sampling Method for Remote Implicit Neural 3D Reconstruction Based on Quad-Tree. 314-328
- Ziqing Li, Yang Li, Shaohui Lin: RAGT: Learning Robust Features for Occluded Human Pose and Shape Estimation with Attention-Guided Transformer. 329-347
- Linlian Jiang, Pan Chen, Ye Wang, Tieru Wu, Rui Ma: P2M2-Net: Part-Aware Prompt-Guided Multimodal Point Cloud Completion. 348-365