


SIGGRAPH Asia 2018 Posters: Tokyo, Japan
- Nafees Bin Zafar, Kun Zhou: SIGGRAPH Asia 2018 Posters, Tokyo, Japan, December 04-07, 2018. ACM 2018, ISBN 978-1-4503-6063-0
Animation and visual effects
- Hanyoung Jang, Byungjun Kwon, Moonwon Yu, Seong Uk Kim, Jongmin Kim: A variational U-Net for motion retargeting. 1:1-1:2
- Tomoki Sueyoshi, Yuki Morimoto: Automatic generation of interactive projection mapping for leaves. 2:1-2:2
- Marc Salvati, Kota Ito: Blur algorithms for cartoon animation. 3:1-3:2
- Takahiro Kawabe: Danswing papers. 4:1-4:2
- Kei Kitahata, Yuji Sakamoto: Evaluation of reducing three-dimensionality of movement to create 3DCG animation looks more like 2D animation. 5:1-5:2
- Keisuke Yamakawa, Suguru Saito: Generating anime-like face images from projected 3D models. 6:1-6:2
- Naofumi Akimoto, Hirokatsu Kataoka, Yoshimitsu Aoki: Generating effect animation with conditional GANs. 7:1-7:2
- Rina Savista Halim, Phillip Pan, Kuo-Wei Chen, Chih-Yuan Yao, Tong-Yee Lee: Non-photorealistic rendering of Yangzhou school painting for koi animation. 8:1-8:2
- Koh Sueda, Takashi Kitada, Yushin Suzuki, Taiki Wada: Research and development of augmented FPV drone racing system. 9:1-9:2
- Hiroki Watanabe, Makoto Fujisawa, Masahiko Mikawa: Simulation of bubbles with floating and rupturing effect for SPH. 10:1-10:2
- Takahiro Kawabe: Spatially augmented depth and transparency in paper materials. 11:1-11:2
- Wataru Yamamoto, Bisser Raytchev, Toru Tamaki, Kazufumi Kaneda: Spectral rendering of fluorescence using importance sampling. 12:1-12:2
- Maël Crespin-Pommier, Baptiste Olivier, Antoine Demière, Antonin Leuret, Alain Lioret: Transitory project: an interactive artistic digital installation based on an artificial intelligence. 13:1-13:2
Computer graphics education
- Rebecca Yuqi Huang, Kevin Kaiting Kao, Kosuke Kumada: An innovative model for animation education in Asia: massive collaborative animation projects. 14:1-14:2
Computer vision and image understanding
- Koki Tsubota, Toru Ogawa, Toshihiko Yamasaki, Kiyoharu Aizawa: Adaptation of manga face representation for accurate clustering. 15:1-15:2
- Kalenga-Bimpambu Tshilombo, Yusuke Yoshiyasu, Antonio Gabas, Kota Suzui: Automatic dataset generation for object pose estimation. 16:1-16:2
- Masaru Tsuchida, Takahito Kawanishi, Kunio Kashino: Color enhancement factors to control spectral power distribution of illumination. 17:1-17:2
- Sheng-Fu Ko, Yi-Hung Lin, Ping-Hsuan Han, Chia-Chun Chang, Chien-Hsing Chou: Combining deep learning algorithm with scene recognition and haptic feedback for 4D-VR cinema. 18:1-18:2
- Jun-Hyuk Kim, Jun-Ho Choi, Choong-Hyun Seo, Jaehyuk Chang, Jong-Seok Lee: Deep learning-based super-resolution for digital comics. 19:1-19:2
- Ojaswi Gupta, Ramya Hebbalaguppe: FingertipCubes: an inexpensive D.I.Y wearable for 6-DoF per fingertip pose estimation using a single RGB camera. 20:1-20:2
- Gahye Lee, Seungkyu Lee: GAN with autoencoder and importance sampling. 21:1-21:2
- Shih-Hsiu Chang, Ching-Ya Chiu, Chia-Sheng Chang, Kuo-Wei Chen, Chih-Yuan Yao, Ruen-Rone Lee, Hung-Kuo Chu: Generating 360 outdoor panorama dataset with reliable sun position estimation. 22:1-22:2
- Gahye Lee, Seungkyu Lee: Hunting out graphic images from real images using recurrent neural network and extended principal color components. 23:1-23:2
- Suekyeong Nam, Seungkyu Lee: Motion regeneration using motion texture and autoencoder. 24:1-24:2
- Jashanjot Singh, Haotao Lai, Konstantinos Psimoulis, Paul Palmieri, Inna Atanasova, Yasmine Chiter, Amirali Shirkhodaiekashani, Serguei A. Mokhov: OpenISS depth camera as a near-realtime broadcast service for performing arts and beyond. 25:1-25:2
- Yoshibumi Fukuda, Kazuya Sugimoto: Study on background optical flow cancellation using cyclically updated camera posture information. 26:1-26:2
Computer-aided design
- Yasuo Kawai, Natsumi Kobayashi, Ayaka Enzaka: Historical streetscape simulation system that reflects changes in weather, time, and seasons. 27:1-27:2
- Yasuo Kawai, Yurie Kaizu, Shusei Yoshida: Visualization system for tsunami evacuation behavior. 28:1-28:2
Geometry and modeling
- Shih-Hao Liu, Tung-Ju Hsieh: Chinese sea Punica granatum floral pattern synthesis. 29:1-29:2
- Lifang Wu, Yisong Gao, Miao Yu, Meng Jian, Zechao Liu, Yupeng Guan: Global-optimization-based model decomposition for support-free multi-DOF 3D printing. 30:1-30:2
- Charles C. Morace, Feng-Wei Wu, Chih-Kuo Yeh, Chia-Hsiang Chen, I-Cheng Yeh, Tong-Yee Lee: Hair modeling from a single anime-style image. 31:1-31:2
- Yuki Igarashi: Interactive modeling for craft band design. 32:1-32:2
- Shingo Uzawa, Toshiharu Igarashi, Kazuki Takazawa, Nozomi Magome, Yoichi Ochiai: Novel structure using quasirigid folding of voxel in Ron Resch pattern. 33:1-33:2
- Shang-Ta Yang, Chi-Han Peng, Peter Wonka, Hung-Kuo Chu: PanoAnnotator: a semi-automatic tool for indoor panorama layout annotation. 34:1-34:2
- Hyunsoo Song, Seungkyu Lee: PPConv: polypod convolution for 3D point cloud description. 35:1-35:2
- Wataru Ono, Hikaru Shionozaki, Takashi Ijiri, Kenji Kohiyama, Hiroya Tanaka: Shape and texture reconstruction for insects by using X-ray CT and focus stack imaging. 36:1-36:2
- Seungpyo Ha, Seungkyu Lee: Translucent surface detection by raycasting through multiple depth images. 37:1-37:2
- Yu-Lin Chao, Tung-Ju Hsieh, Pei-Ying Chiang: Vector graphics editing with interweaving and penetrating. 38:1-38:2
Human-computer interaction
- Takatoshi Yoshida, Xiaoyan Shen, Tal Achituv, Hiroshi Ishii: 3D touch point detection on load sensitive surface based on continuous fluctuation of a user hand. 39:1-39:2
- Shumpei Akahoshi, Mitsunori Matsushita: Afterglow projection: virtual information projection method to real environment using pico projector in mixed reality. 40:1-40:2
- Yuto Sugita, Keiichi Zempo, Yasumasa Ando, Yuya Kakutani, Koichi Mizutani, Naoto Wakatsuki: Diet gamification toward chewing amount control via head mounted display. 41:1-41:2
- Ning Xie, Xinrui Cai, Sipei Li, Yifan Lu, Mingyue Lou, Heng Tao Shen: DT-Zheng: digital twin method for Zheng musical instrument. 42:1-42:2
- Dong-Hyun Kim, Yong-Guk Go, Soo-Mi Choi: First-person-view drone flying in mixed reality. 43:1-43:2
- Michael Vallance, Yuto Kurashige, Takurou Magaki: Fukushima nuclear plant as a synthetic learning environment. 44:1-44:2
- Mamoru Hirota, Ayumu Tsuboi, Masayuki Yokoyama, Masao Yanagisawa: Gesture recognition of air-tapping and its application to character input in VR space. 45:1-45:2
- Lingyan Ruan, Bin Chen, Miu-Ling Lam: Human-computer interaction by voluntary vergence control. 46:1-46:2
- Mengwei Lin, Junfeng Yao, Yingying She, Chao Gao, Jin Chen: Human-marionette interaction puppetry using mechanical arm and L-shaped screen. 47:1-47:2
- Mayumi Takasaki, Kyoko Ohashi, Shinji Mizuno: Interaction of a stereoscopic 3DCG image with motion parallax displayed in mid-air. 48:1-48:2
- Negar Kaghazchi, Sachiko Kodama, Masakatsu Kaneko: Interactive visual narrative "cloudy lady": gaze navigation method and a prototype application. 49:1-49:2
- Keiichiro Taniguchi, Tomoko Hashida: Rapid prototyping system using transformable and adherable PCL blocks. 50:1-50:2
- Anthony Bazelle, Hugo Pourrier-Nunez, Maxime Rignault, Michael Chang: Simulation of different materials texture in virtual reality through haptic gloves. 51:1-51:2
- Takeo Hamada, Michiteru Kitazaki, Noboru Koshizuka: Social facilitation with virtual jogging companion on smartglasses. 52:1-52:2
- Toshikazu Ohshima, Tsukasa Sumizono: Tactile microcosm of ALife: interaction with artificial life by aerial mixed reality display. 53:1-53:2
- Minjing Yu, Yong-Jin Liu, Guozhen Zhao, Charlie C. L. Wang: Tangible interaction with 3D printed modular robots through multi-channel sensors. 54:1-54:2
- Akira Nakayasu: Tentacle flora: lifelike robotic sculpture. 55:1-55:2
Image and video processing applications
- Ming-Te Chi, Hao-Hsuan Tang, Chih-Kuo Yeh, Charles C. Morace, Hui-Nieg Chou, Shih-Syun Lin, Tong-Yee Lee: Alphabet collage art generation. 56:1-56:2
- Yoon-Seok Choi, Soonchul Jung, In-Su Jang, Taewon Choi, Jin-Seo Kim: Automatic perforation system for Korean traditional painting: Dancheong. 57:1-57:2
- Shintaro Takemura: Optimize deep super-resolution and denoising for compressed textures. 58:1-58:2
- Yudai Niwa, Hajime Kajita, Naoya Koizumi, Takeshi Naemura: GoThro: optical transfer of camera viewpoint using retro-transmissive optical system. 59:1-59:2
- Sanghyuk Kim, Min-Woo Seo, Seung Joon Lee, Suk-Ju Kang: Object tracking-based foveated super-resolution convolutional neural network for head mounted display. 60:1-60:2
- Tomomi Takashina, Yuji Kokumai: Real-virtual bridge: a modular mechanism to mediate between real and virtual objects. 61:1-61:2
- Kalina Borkiewicz, A. J. Christensen, Stuart Levy, Robert Patterson, Donna J. Cox, Jeff Carpenter: Scientific and visual effects software integration for the visualization of a chromatophore. 62:1-62:2
- Natsuki Kagaya, Hiroshi Mori, Tomoharu Ishikawa, Kazuya Sasaki, Miyoshi Ayama, Kenji Shoji, Fubito Toyama: Simulating kimono fabrication based on the production process of Yuki-tsumugi. 63:1-63:2
- Mahak Gambhir, Swati Panda, Shaik Jani Basha: Vulkan rendering framework for mobile multimedia. 64:1-64:2
Information visualization and scientific visualization
- Jian Zhao, Francine Chen, Patrick Chiu: A generic visualization framework for understanding missing links in bipartite networks. 65:1-65:2
- Yaodong Li, Zhuo Yang, Yinwei Zhan, Yongqiang Li, Gang He: Directional heat map generation with saccade information. 66:1-66:2
- Hirofumi Seo, Takeo Igarashi: Enhancement techniques for human anatomy visualization. 67:1-67:2
- Alex Zhang, Kan Chen, Henry Johan, Marius Erdt: High performance city rendering in Vulkan. 68:1-68:2
- Christopher J. Hammang, Phillip Gough, Weber Liu, Eric Jiang, Pauline Ross, Jim Cook, Philip Poronnik: Life sciences in virtual reality: first-year students learning as creators. 69:1-69:2
- Sophie Ramassamy, Hiroyuki Kubo, Takuya Funatomi, Daichi Ishii, Akinobu Maejima, Satoshi Nakamura, Yasuhiro Mukaigawa: Pre- and post-processes for automatic colorization using a fully convolutional network. 70:1-70:2
- Yuta Kataoka, Wataru Teraoka, Yasuhiro Oikawa, Yusuke Ikeda: Real-time measurement and display system of 3D sound intensity map using optical see-through head mounted display. 71:1-71:2
- Mikhail Sorokin, Galen Stetsyuk, Raghav Gupta, Alex Busch, Brian Russin, Celeste Lyn Paul, Samir Khuller: Ring graphs in VR: exploring a new and novel method for node placement and link visibility in VR-based graph analysis. 72:1-72:2
Multimedia applications
- Naoaki Kataoka, Tomokazu Ishikawa, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh: Digital reproduction of hair waving based on animator technique. 73:1-73:2
Video gaming
- Ryohei Oda, Yuto Mizumatsu, Tomoki Kajinami: An interface for post-match play-by-play analysis of a fighting game based on the two players' eye movements. 74:1-74:2
Virtual reality
- Kengo Fujii, Kazuki Shimose, Clément Trovato, Masao Nakajima, Toru Iwane, Masaki Yasugi, Hirotsugu Yamamoto: A device for reconstructing light field data as 3D aerial image by retro-reflection. 75:1-75:2
- Kenji Furukawa, Susumu Nakata: Automatic generation of hair motion of 3D characters following Japanese anime style. 76:1-76:2
- Yihua Bao, Dong Li, Dongdong Weng, Mo Su: dongSpace: a wide-area mixed reality multiplayer game system. 77:1-77:2
- Tze-How Liew, Yueh-Chun Lai, Hong Shiang Lin, Sun-Yu Gordon Chi, Ming Ouhyoung: Free-viewpoint synthesis over panoramic images. 78:1-78:2
- Kei Tsuchiya, Ayaka Sano, Naoya Koizumi: Interaction system with mid-air CG character that has own eyes. 79:1-79:2
- Po-Yao Huang, Hong Shiang Lin, Sun-Yu Gordon Chi, Liang-Han Lin, Ming Ouhyoung: Panoramic depth reconstruction within a single shot by optimizing global sphere radii. 80:1-80:2
- Bruno Evangelista, Houman Meshkin, Helen Kim, Anaelisa Aburto, Ben Max Rubinstein, Andrea Ho: Realistic AR makeup over diverse skin tones on mobile. 81:1-81:2
- Yuta Itoh, Kenta Yamamoto, Yoichi Ochiai: Retinal HDR: HDR image projection method onto retina. 82:1-82:2
- George Papagiannakis, Nikos Lydatakis, Steve Kateros, Stelios Georgiou, Paul Zikas: Transforming medical education and training with VR using M.A.G.E.S. 83:1-83:2
