


44th SIGGRAPH 2017: Los Angeles, CA, USA - Talks
- Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH 2017, Los Angeles, CA, USA, July 30 - August 3, 2017, Talks. ACM 2017, ISBN 978-1-4503-5008-2

The art of production
- Peter Tieryas, Henry Garcia, Stacey Truman, Evan Bonifacio:
Bringing Lou to life: a study in creating Lou. 1:1-1:2
- George Nguyen, Peter Tieryas, Jae Hyung Kim, Josh Holtsclaw:
Revving up a storm: a talk on creating Jackson storm. 2:1-2:2
- Kim Keech, Rachel Bibb, Brian Whited, Brett Achorn:
The role of hand-drawn animation in Disney's Moana. 3:1-3:2
- Tomohiro Hasegawa:
A fantasy based on reality: the art of Final Fantasy XV. 4:1-4:2
It's complicated
- Jens Jebens, Damien Gray, Simon Bull, Aidan Sarsfield:
Evolving complexity management on "the LEGO Batman movie". 5:1-5:2
- Hannes Ricklefs, Stefan Puschendorf, Sandilya Bhamidipati, Brian Eriksson, Akshay Pushparaja:
From VFX project management to predictive forecasting. 6:1-6:2
- Dhruv Govil:
Animation collaboration with depth compositing. 7:1-7:2
- Daniel Heckenberg, Luke Emrose, Matthew Reid, Michael Balzer, Antoine Roille, Max Liani:
Rendering the darkness: glimpse on the LEGO Batman movie. 8:1-8:2
Effects omelette
- Ciaran Moloney, Jamie Haydock, Mathew Puchala, Miguel Perez Senent:
Rogue One: A Star Wars Story - Jedha destruction. 9:1-9:2
- Marc Bryant, Ian J. Coony, Jonathan Garcia:
Moana: Foundation of a Lava Monster. 10:1-10:2
- Dong Joo Byun, Shant Ergenian, Gregory Culp:
Moana: geometry based disco ball lighting for tamatoa's lair. 11:1-11:2
- Matt Ebb, Richard Sutherland, Daniel Heckenberg, Miles Green:
Building detailed fractal sets for "Guardians of the Galaxy Vol. 2". 12:1-12:2
Game on
- Jim Malmros:
Gears of War 4: custom high-end graphics features and performance techniques. 13:1-13:2
- Colin Matisz, Andy Yi Shen:
HDR TV output and lighting Gears of War 4. 14:1-14:2
- Prasert Prasertvithyakarn, Tatsuhiro Joudan, Hidekazu Kato, Seiji Nanase, Masayoshi Miyamoto, Isamu Hasegawa:
Procedural photograph generation from actual gameplay: snapshot AI in FINAL FANTASY XV. 15:1-15:2
- Kleber Garcia:
Circular separable convolution depth of field. 16:1-16:2
Catching light
- Daniel Heckenberg, Steve Agland, Jean-Pascal leBlanc, Raphael Barth:
Automated light probes from capture to render for Peter Rabbit. 17:1-17:2
- Lucio Moser, Darren Hendler, Doug Roble:
Masquerade: fine-scale details for head-mounted camera motion capture data. 18:1-18:2
- Andrew W. Feng, Evan A. Suma, Ari Shapiro:
Just-in-time, viable, 3d avatars from scans. 19:1-19:2
- Adrien Kaiser, José Alonso Ybáñez Zepeda, Tamy Boubekeur:
Proxy clouds for RGB-D stream processing: an insight. 20:1-20:2
I like to move it, move it
- Matthew Cong, Lana Lan, Ronald Fedkiw:
Muscle simulation for facial animation in Kong: Skull Island. 21:1-21:2
- David Bollo:
High performance animation in Gears of War 4. 22:1-22:2
- Gene S. Lee, Christian Eisenacher, Andy Lin, Noel Villegas:
Handling scene constraints for pose-based caching. 23:1-23:2
- Pilar Molina Lopez, Jake Richards:
The eyes have it: comprehensive eye control for animated characters. 24:1-24:2
Hair it is!
- Marc Thyng, Christopher Evart, Toby Jones, Aleka McAdams:
The art and technology of hair simulation in Disney's Moana. 25:1-25:2
- Brian Missey, Amaury Aubel, Arunachalam Somasundaram, Megha Davalath:
Hairy effects in Trolls. 26:1-26:2
- Chloe LeGendre, Loc Huynh, Shanhe Wang, Paul E. Debevec:
Modeling vellus facial hair from asperity scattering silhouettes. 27:1-27:2
The art of visual journeys
- Elias Saliba, Mustafa Barkaoui, Hind Wakil:
Behind the scenes of VFX in the Middle East & Syria: "in art we trust". 28:1-28:2
Wet and wild
- Sean Palmer, Jonathan Garcia, Sara Drakeley, Patrick Kelly, Ralf Habel:
The ocean and water pipeline of Disney's Moana. 29:1-29:2
- Ben Frost, Alexey Stomakhin, Hiroaki Narita:
Moana: performing water. 30:1-30:2
- Rob Hopper, Kai Wolter:
The water effects of Pirates of the Caribbean: Dead Men Tell no Tales. 31:1-31:2
- Stephen Marshall, Tim Speltz, Greg Gladstone, Krzysztof Rost, Jon Reisch:
Racing to the finish line: effects challenges on Cars 3. 32:1-32:2
Lite brite
- Alejandro Conty Estevez, Christopher D. Kulla:
Importance sampling of many lights with adaptive tree splitting. 33:1-33:2
- Alexander Keller, Carsten Wächter, Matthias Raab, Daniel Seibert, Dietger van Antwerpen, Johann Korndörfer, Lutz Kettner:
The iray light transport simulation and rendering system. 34:1-34:2
- Beibei Wang, Nicolas Holzschuch:
Precomputed multiple scattering for light simulation in participating medium. 35:1-35:2
- Norbert Bus, Tamy Boubekeur:
Double hierarchies for efficient sampling in Monte Carlo rendering. 36:1-36:2
It's a material world
- Colin Penty, Ian Wong:
Gears of War 4: creating a layered material system for 60fps. 37:1-37:2
- Priyamvad Deshmukh, Feng Xie, Eric Tabellion:
DreamWorks fabric shading model: from artist friendly to physically plausible. 38:1-38:2
- Lutz Kettner:
Fast automatic level of detail for physically-based materials. 39:1-39:2
- Yuxiao Du, Ergun Akleman:
Designing look-and-feel using generalized crosshatching. 40:1-40:2
Making waves
- Dong Joo Byun, Alexey Stomakhin:
Moana: crashing waves. 41:1-41:2
- Gergely Klár, Jeff Budsberg, Matt Titus, Stephen Jones, Ken Museth:
Production ready MPM simulations. 42:1-42:2
- Todd Keeler, Robert Bridson:
Compact iso-surface representation and compression for fluid phenomena. 43:1-43:2
- Michael Bang Nielsen, Konstantinos Stamatelos, Adrian Graham, Marcus Nordenstam, Robert Bridson:
Localized guided liquid simulations in bifrost. 44:1-44:2
VR/AR to go
- Chris Kramer:
Evolution of AR in Pokémon go. 44a:1
- Kent Bye:
How VR changes the sense of ourselves & reality. 44b:1
- Graham Roberts:
A new (virtual) reality at the New York Times. 44c:1
Alt. workflows
- Mike Jutan, Steve Ellis:
Director-centric virtual camera production tools for rogue one. 45:1-45:2
- Vincent Serritella, David Lally, Brian Larsen, Farhez Rayani, Jason Kim, Matt Silas:
Smash and grab: off the rails filmmaking at Pixar. 46:1-46:2
- Jeff Stringer, Owen Nelson, Tony Aiello:
LAIKA's digital big boards. 47:1-47:2
- Kenneth Vanhoey, Carlos Eduardo Porto de Oliveira, Hayko Riemenschneider, András Bódis-Szomorú, Santiago Manén, Danda Pani Paudel, Michael Gygli, Nikolay Kobyshev, Till Kroeger, Dengxin Dai, Luc Van Gool:
VarCity - the video: the struggles and triumphs of leveraging fundamental research results in a graphics video production. 48:1-48:2
Make me a design
- Andrzej Zarzycki, Martina Decker:
Programmable buildings: architecture as an interaction interface powered with programmable matter. 49:1-49:2
- Jochen Suessmuth, Sky Asay, Conor Fitzgerald, Mario Poerner, Davoud Ohadi, Detlef Mueller:
Concept through creation: establishing a 3-D design process in the footwear industry. 50:1-50:2
- Don Derek Haddad, Gershon Dublon, Brian D. Mayton, Spencer Russell, Xiao Xiao, Ken Perlin, Joseph A. Paradiso:
Resynthesizing reality: driving vivid virtual environments from sensor networks. 51:1-51:2
Procedural with caution
- Daniela Hasenbring, Jeremy Hoey:
Interactive environment creation with sprout. 52:1-52:2
- James Bartolozzi, Matt Kuruc:
A hybrid approach to procedural tree skeletonization. 53:1-53:2
- Wanho Choi, Nayoung Kim, Julie Jang, Sanghun Kim, Dohyun Yang:
Build your own procedural grooming pipeline. 54:1-54:2
- Arunachalam Somasundaram:
FurCollide: fast, robust, and controllable fur collisions with meshes. 55:1-55:2
It's alive! alternative immersions
- Andy Rowan-Robinson:
Field trip to Mars. 56:1
- Oculus:
Dear angelica: breathing life into VR illustrations. 57:1-57:2
- Dave Mauriello, Jason Kirk, Jeremy Fernsler:
Two novel approaches to visualizing internal and external anatomy of the cardiac cycle with a windowed virtual heart model. 58:1-58:2
Pipe dreams
- Matthew Chambers, Justin Israel, Andy Wright:
Large scale VFX pipelines. 59:1-59:2
- Daniel Bergel, Craig Dibble, Pauline Koh, James Pearson, Hannes Ricklefs:
Cloudy with a chance of rendering. 60:1-60:2
- Mungo Pay, Damien Maupu, Martin Prazák:
Flexible pipeline for crowd production. 61:1-61:2
- Ton Roosendaal, Francesco Siddi:
Beyond "cosmos laundromat": blender's open source studio pipeline. 62:1-62:2
Realities of VR production
- Dominik P. Käser, Evan Parker, Adam Glazier, Mike Podwal, Matt Seegmiller, Chun-Po Wang, Per Karlsson, Nadav Ashkenazi, Joanna Kim, Andre Le, Matthias Bühlmann, Joshua Moshier:
The making of Google earth VR. 63:1-63:2
- Bruna Berford, Carlos Diaz-Padron, Terry Kaleas, Irem Oz, Devon Penney:
Building an animation pipeline for VR stories. 64:1-64:2
- Chris Healer:
Visual effects for VR. 65:1-65:2
Partly crowdy
- Brett Achorn, Sean Palmer, Larry Wu:
Building moana's kakamora barge. 66:1-66:2
- Melt van der Spuy:
PackIT: animating complicated character groups easily. 67:1
- Greg Mourino, Mason Evans, Kevin Edzenga, Svetla Cavaleri, Mark Adams, Justin Bisceglio:
Populating the crowds in Ferdinand. 68:1-68:2
- Damien Maupu, Emanuele Goffredo, Nile Hylton, Mungo Pay, Martin Prazák:
Artist-driven crowd authoring tools. 69:1-69:2
Tools of the trade
- Curtis Andrus, Endre Balint, Chong Deng, Simon Coupe:
Optical flow-based face tracking in The Mummy. 70:1-70:2
- Andreas Bauer:
A new contour method for highly detailed geometry. 71:1-71:2
- Xinling Chen, Christopher D. Kulla, Lucas Miller, Alan Chen:
Lighting up the smurfs enchanted forest. 72:1-72:2
Don't be scared - it's only math
- Ken Dahm, Alexander Keller:
Learning light transport the reinforced way. 73:1-73:2
- Ken Museth:
Novel algorithm for sparse and parallel fast sweeping: efficient computation of sparse signed distance fields. 74:1-74:2
- Ran Dong, DongSheng Cai, Nobuyoshi Asai:
Dance motion analysis and editing using hilbert-huang transform. 75:1-75:2
- Danil Nagy:
Nature-based hybrid computational geometry system for optimizing the interior structure of aerospace components. 76:1-76:2
Physical exe stuff
- Nitish Padmanaban, Robert Konrad, Emily A. Cooper, Gordon Wetzstein:
Optimizing VR for all users through adaptive focus displays. 77:1-77:2
- Nathan Matsuda, Alexander Fix, Douglas Lanman:
A case study on raytracing-in-the-loop optimization: focal surface displays. 78:1-78:2
- Konrad Tollmar, Pietro Lungaro, Alfredo Fanghella Valero, Ashutosh Mittal:
Beyond foveal rendering: smart eye-tracking enabled networking (SEEN). 79:1-79:2
- Christian Früh, Avneesh Sud, Vivek Kwatra:
Headset removal for virtual and mixed reality. 80:1-80:2
