50th SIGGRAPH 2023: Los Angeles, CA, USA - Conference Paper Track
- Erik Brunvand, Alla Sheffer, Michael Wimmer:
ACM SIGGRAPH 2023 Conference Proceedings, SIGGRAPH 2023, Los Angeles, CA, USA, August 6-10, 2023. ACM 2023
Geometric Optimization
- Ruben Wiersma, Ahmad Nasikun, Elmar Eisemann, Klaus Hildebrandt:
A Fast Geometric Multigrid Method for Curved Surfaces. 1:1-1:11
- Michal Edelstein, Nestor Guillen, Justin Solomon, Mirela Ben-Chen:
A Convex Optimization Framework for Regularized Geodesic Distances. 2:1-2:11
- Eugene d'Eon, Benedikt Bitterli, Andrea Weidlich, Tizian Zeltner:
Microfacet Theory for Non-Uniform Heightfields. 3:1-3:10
- Yiwei Hu, Paul Guerrero, Milos Hasan, Holly E. Rushmeier, Valentin Deschaintre:
Generating Procedural Materials from Text or Image Prompts. 4:1-4:11
- Junqiu Zhu, Adrián Jarabo, Carlos Aliaga, Ling-Qi Yan, Matt Jen-Yuan Chiang:
A Realistic Surface-based Cloth Rendering Model. 5:1-5:9
Motion Recipes and Simulation
- Jungnam Park, Moon Seok Park, Jehee Lee, Jungdam Won:
Bidirectional GaitNet: A Bidirectional Prediction Model of Human Gait and Anatomical Conditions. 6:1-6:9
- Kartik Chandra, Tzu-Mao Li, Joshua B. Tenenbaum, Jonathan Ragan-Kelley:
Acting as Inverse Inverse Planning. 7:1-7:12
- Zhe Li, Zerong Zheng, Yuxiao Liu, Boyao Zhou, Yebin Liu:
PoseVocab: Learning Joint-structured Pose Embeddings for Human Avatar Modeling. 8:1-8:11
- Soobin Lim, Seungwon Seo, HyeongYeop Kang:
DARAM: Dynamic Avatar-Human Motion Remapping Technique for Realistic Virtual Stair Ascending Motions. 9:1-9:11
Character Animation: Knowing What to do with Your Hands
- Sirui Chen, Albert Wu, C. Karen Liu:
Synthesizing Dexterous Nonprehensile Pregrasp for Ungraspable Objects. 10:1-10:10
Image and Video Editing
- Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, Jun-Yan Zhu:
Zero-shot Image-to-Image Translation. 11:1-11:11
- Yoad Tewel, Rinon Gal, Gal Chechik, Yuval Atzmon:
Key-Locked Rank One Editing for Text-to-Image Personalization. 12:1-12:11
Geometric Abstractions: Not Just for Cubists
- Honglin Chen, Changxi Zheng, Kevin Wampler:
Local Deformation for Interactive Shape Editing. 13:1-13:10
Magical Sketching
- Zhengyu Huang, Haoran Xie, Tsukasa Fukusato, Kazunori Miyata:
AniFaceDrawing: Anime Portrait Exploration during Your Sketching. 14:1-14:11
XR Displays and Perception: Seeing What's in Front of Your Eyes
- Grace Kuo, Eric Penner, Seth Moczydlowski, Alexander Ching, Douglas Lanman, Nathan Matsuda:
Perspective-Correct VR Passthrough Without Reprojection. 15:1-15:9
Procedural Modeling
- Guoqing Yang, Fuyou Xue, Qi Zhang, Ke Xie, Chi-Wing Fu, Hui Huang:
UrbanBIS: a Large-scale Benchmark for Fine-grained Urban Building Instance Segmentation. 16:1-16:11
Inverse Rendering: Does Anybody Know How I Got Here?
- Saeed Hadadan, Geng Lin, Jan Novák, Fabrice Rousselle, Matthias Zwicker:
Inverse Global Illumination using a Neural Radiometric Prior. 17:1-17:11
- Wesley Chang, Venkataram Sivaram, Derek Nowrouzezahrai, Toshiya Hachisuka, Ravi Ramamoorthi, Tzu-Mao Li:
Parameter-space ReSTIR for Differentiable and Inverse Rendering. 18:1-18:10
- Xiaochun Tong, Hsueh-Ti Derek Liu, Yotam I. Gingold, Alec Jacobson:
Differentiable Heightfield Path Tracing with Accelerated Discontinuities. 19:1-19:9
Fabricating Appearance
- Kenji Tojo, Ariel Shamir, Bernd Bickel, Nobuyuki Umetani:
Stealth Shaper: Reflectivity Optimization as Surface Stylization. 20:1-20:10
- Jorge Condor, Michal Piovarci, Bernd Bickel, Piotr Didyk:
Gloss-Aware Color Correction for 3D Printing. 21:1-21:11
Contours, Conformality, Coarsening, and Coordinates
- Alexandros Dimitrios Keros, Kartic Subr:
Spectral Coarsening with Hodge Laplacians. 22:1-22:11
- Élie Michel, Jean-Marc Thiery:
Polynomial 2D Green Coordinates for Polygonal Cages. 23:1-23:9
Thin and Thinner: Modeling Shells and Hair
- Gilles Daviet:
Interactive Hair Simulation on the GPU using ADMM. 24:1-24:11
- Yunuo Chen, Tianyi Xie, Cem Yuksel, Danny M. Kaufman, Yin Yang, Chenfanfu Jiang, Minchen Li:
Multi-Layer Thick Shells. 25:1-25:9
Full-Body XR: Beyond the Headset
- Simeng Qiu, Hadi Amata, Wolfgang Heidrich:
MoiréTag: Angular Measurement and Tracking with a Passive Marker. 26:1-26:10
- Yunxiang Zhang, Kenneth Chen, Qi Sun:
Toward Optimized VR/AR Ergonomics: Modeling and Predicting User Neck Muscle Contraction. 27:1-27:12
- Haiwei Zhang, Jiqing Zhang, Bo Dong, Pieter Peers, Wenwei Wu, Xiaopeng Wei, Felix Heide, Xin Yang:
In the Blink of an Eye: Event-based Emotion Recognition. 28:1-28:11
Neural Light Transport
- Honghao Dong, Guoping Wang, Sheng Li:
Neural Parametric Mixtures for Path Guiding. 29:1-29:10
- Alexander Rath, Ömercan Yazici, Philipp Slusallek:
Focal Path Guiding for Light Transport Simulation. 30:1-30:10
- Zackary Misso, Yining Karl Li, Brent Burley, Daniel Teece, Wojciech Jarosz:
Progressive null-tracking for volumetric rendering. 31:1-31:10
- Arthur Firmino, Jeppe Revall Frisvad, Henrik Wann Jensen:
Denoising-Aware Adaptive Sampling for Monte Carlo Ray Tracing. 32:1-32:11
Diffusion for Geometry
- Hui Ying, Tianjia Shao, He Wang, Yin Yang, Kun Zhou:
Adaptive Local Basis Functions for Shape Completion. 33:1-33:11
- Eric Heitz, Laurent Belcour, Thomas Chambon:
Iterative α-(de)Blending: a Minimalist Deterministic Diffusion Model. 34:1-34:8
- Cusuh Ham, James Hays, Jingwan Lu, Krishna Kumar Singh, Zhifei Zhang, Tobias Hinz:
Modulating Pretrained Diffusion Models for Multimodal Image Synthesis. 35:1-35:11
- Wamiq Reyaz Para, Paul Guerrero, Niloy J. Mitra, Peter Wonka:
COFS: COntrollable Furniture layout Synthesis. 36:1-36:11
Character Animation
- Chen Tessler, Yoni Kasten, Yunrong Guo, Shie Mannor, Gal Chechik, Xue Bin Peng:
CALM: Conditional Adversarial Latent Models for Directable Virtual Characters. 37:1-37:9
- Xiangjun Tang, Linjun Wu, He Wang, Bo Hu, Xu Gong, Yuchen Liao, Songnan Li, Qilong Kou, Xiaogang Jin:
RSMT: Real-time Stylized Motion Transition for Characters. 38:1-38:10
Colorful Topics in Imaging
- Ryan Capouellez, Jiacheng Dai, Aaron Hertzmann, Denis Zorin:
Algebraic Smooth Occluding Contours. 39:1-39:10
Most Def: Fast, Large and Learned Deformables
- Nicholas Sharp, Cristian Romero, Alec Jacobson, Etienne Vouga, Paul G. Kry, David I. W. Levin, Justin Solomon:
Data-Free Learning of Reduced-Order Kinematics. 40:1-40:9
Material Rendering
- Bing Xu, Liwen Wu, Milos Hasan, Fujun Luan, Iliyan Georgiev, Zexiang Xu, Ravi Ramamoorthi:
NeuSample: Importance Sampling for Neural Materials. 41:1-41:10
- Mark van de Ruit, Elmar Eisemann:
Metameric: Spectral Uplifting via Controllable Color Constraints. 42:1-42:10
- Yihua Huang, Yan-Pei Cao, Yu-Kun Lai, Ying Shan, Lin Gao:
NeRF-Texture: Texture Synthesis with Neural Radiance Fields. 43:1-43:10
Surface Reconstruction
- Yuanqi Li, Shun Liu, Xinran Yang, Jianwei Guo, Jie Guo, Yanwen Guo:
Surface and Edge Detection for Primitive Fitting of Point Clouds. 44:1-44:10
- Tong Zhao, Laurent Busé, David Cohen-Steiner, Tamy Boubekeur, Jean-Marc Thiery, Pierre Alliez:
Variational Shape Reconstruction via Quadric Error Metrics. 45:1-45:10
Neural Capturing
- Lior Yariv, Peter Hedman, Christian Reiser, Dor Verbin, Pratul P. Srinivasan, Richard Szeliski, Jonathan T. Barron, Ben Mildenhall:
BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis. 46:1-46:9
- Yuelang Xu, Lizhen Wang, Xiaochen Zhao, Hongwen Zhang, Yebin Liu:
AvatarMAV: Fast 3D Head Avatar Reconstruction Using Motion-Aware Neural Voxels. 47:1-47:10
- Lianghao Zhang, Fangzhou Gao, Li Wang, Minjing Yu, Jiamin Cheng, Jiawan Zhang:
Deep SVBRDF Estimation from Single Image under Learned Planar Lighting. 48:1-48:11
- Xilong Zhou, Milos Hasan, Valentin Deschaintre, Paul Guerrero, Yannick Hold-Geoffroy, Kalyan Sunkavalli, Nima Khademi Kalantari:
PhotoMat: A Material Generator Learned from Single Flash Photos. 49:1-49:11
Fabrication-Oriented Design
- Élie Michel, Tamy Boubekeur:
MesoGen: Designing Procedural On-Surface Stranded Mesostructures. 50:1-50:10
- Maxine Perroni-Scharf, Szymon Rusinkiewicz:
Constructing Printable Surfaces with View-Dependent Appearance. 51:1-51:10
All About Meshes
- Jiong Chen, Fernando de Goes, Mathieu Desbrun:
Somigliana Coordinates: an elasticity-derived approach for cage deformation. 52:1-52:8
- Yucheol Jung, Hyomin Kim, Gyeongha Hwang, Seung-Hwan Baek, Seungyong Lee:
Mesh Density Adaptation for Template-based Shape Reconstruction. 53:1-53:10
Text-Guided Generation
- Elad Richardson, Gal Metzer, Yuval Alaluf, Raja Giryes, Daniel Cohen-Or:
TEXTure: Text-Guided Texturing of 3D Shapes. 54:1-54:11
- Andrey Voynov, Kfir Aberman, Daniel Cohen-Or:
Sketch-Guided Text-to-Image Diffusion Models. 55:1-55:11
- Anran Lin, Nanxuan Zhao, Shuliang Ning, Yuda Qiu, Baoyuan Wang, Xiaoguang Han:
FashionTex: Controllable Virtual Try-on with Text and Texture. 56:1-56:9
- Chenliang Zhou, Fangcheng Zhong, Cengiz Öztireli:
CLIP-PAE: Projection-Augmentation Embedding to Extract Relevant Features for a Disentangled, Interpretable and Controllable Text-Guided Face Manipulation. 57:1-57:9
Marvelous Mappings
- Florine Hartwig, Josua Sassen, Omri Azencot, Martin Rumpf, Mirela Ben-Chen:
An Elastic Basis for Spectral Shape Correspondence. 58:1-58:11
Real-Time Rendering: Gotta Go Fast!
- Karlis Martins Briedis, Abdelaziz Djelouah, Raphaël Ortiz, Mark Meyer, Markus Gross, Christopher Schroers:
Kernel-Based Frame Interpolation for Spatio-Temporally Adaptive Rendering. 59:1-59:11
- Martin Bálint, Krzysztof Wolski, Karol Myszkowski, Hans-Peter Seidel, Rafal Mantiuk:
Neural Partitioning Pyramids for Denoising Monte Carlo Renderings. 60:1-60:11
- Jinkai Hu, Chengzhong Yu, Hongli Liu, Lingqi Yan, Yiqian Wu, Xiaogang Jin:
Deep Real-time Volumetric Rendering Using Multi-feature Fusion. 61:1-61:10
Character Animation: Interaction
- Sunmin Lee, Sebastian Starke, Yuting Ye, Jungdam Won, Alexander W. Winkler:
QuestEnvSim: Environment-Aware Simulated Motion Tracking from Sparse Sensors. 62:1-62:9
- Mohamed Hassan, Yunrong Guo, Tingwu Wang, Michael J. Black, Sanja Fidler, Xue Bin Peng:
Synthesizing Physical Character-Scene Interactions. 63:1-63:9
- Jinseok Bae, Jungdam Won, Donggeun Lim, Cheol-Hui Min, Young Min Kim:
PMP: Learning to Physically Interact with Environments using Part-wise Motion Priors. 64:1-64:10
- Yunbo Zhang, Deepak Gopinath, Yuting Ye, Jessica K. Hodgins, Greg Turk, Jungdam Won:
Simulation and Retargeting of Complex Multi-Character Interactions. 65:1-65:11
Making Faces with Neural Avatars
- Kaisiyuan Wang, Hang Zhou, Qianyi Wu, Jiaxiang Tang, Zhiliang Xu, Borong Liang, Tianshu Hu, Errui Ding, Jingtuo Liu, Ziwei Liu, Jingdong Wang:
Efficient Video Portrait Reenactment via Grid-based Codebook. 66:1-66:9
- Lizhen Wang, Xiaochen Zhao, Jingxiang Sun, Yuxiang Zhang, Hongwen Zhang, Tao Yu, Yebin Liu:
StyleAvatar: Real-time Photo-realistic Portrait Avatar from a Single Video. 67:1-67:10
- Dafei Qin, Jun Saito, Noam Aigerman, Thibault Groueix, Taku Komura:
Neural Face Rigging for Animating and Retargeting Facial Meshes in the Wild. 68:1-68:11
- Dann Mensah, Nam Hee Kim, Miika Aittala, Samuli Laine, Jaakko Lehtinen:
A Hybrid Generator Architecture for Controllable Face Synthesis. 69:1-69:10
- Shivangi Aneja, Justus Thies, Angela Dai, Matthias Nießner:
ClipFace: Text-guided Editing of Textured 3D Morphable Models. 70:1-70:11
Environment Rendering: NeRFs on Earth
- Nagabhushan Somraj, Rajiv Soundararajan:
ViP-NeRF: Visibility Prior for Sparse Input Neural Radiance Fields. 71:1-71:11
- Matthew Tancik, Ethan Weber, Evonne Ng, Ruilong Li, Brent Yi, Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi, Abhik Ahuja, David McAllister, Justin Kerr, Angjoo Kanazawa:
Nerfstudio: A Modular Framework for Neural Radiance Field Development. 72:1-72:12
- Chong Zeng, Guojun Chen, Yue Dong, Pieter Peers, Hongzhi Wu, Xin Tong:
Relighting Neural Radiance Fields with Shadow and Highlight Hints. 73:1-73:11
- Tong Wu, Jia-Mu Sun, Yu-Kun Lai, Lin Gao:
DE-NeRF: DEcoupled Neural Radiance Fields for View-Consistent Appearance Editing and High-Frequency Environmental Relighting. 74:1-74:11
Fabulous Fabrication: From Knitting to Circuits
- Rahul Mitra, Liane Makatura, Emily Whiting, Edward Chien:
Helix-Free Stripes for Knit Graph Design. 75:1-75:9
Making Contact: Simulating and Detecting Collisions
- Paul Zhang, Zoë Marschner, Justin Solomon, Rasmus Tamstorf:
Sum-of-Squares Collision Detection for Curved Shapes and Paths. 76:1-76:11
- Zachary Ferguson, Pranav Jain, Denis Zorin, Teseo Schneider, Daniele Panozzo:
High-Order Incremental Potential Contact for Elastodynamic Simulation on Curved Meshes. 77:1-77:11
Neural Image Generation and Editing
- Xingang Pan, Ayush Tewari, Thomas Leimkühler, Lingjie Liu, Abhimitra Meka, Christian Theobalt:
Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold. 78:1-78:11
- Adéla Subrtová, Michal Lukác, Jan Cech, David Futschik, Eli Shechtman, Daniel Sýkora:
Diffusion Image Analogies. 79:1-79:10
Material Acquisition
- James C. Bieron, Xin Tong, Pieter Peers:
Single Image Neural Material Relighting. 80:1-80:11
- Jiahui Fan, Beibei Wang, Milos Hasan, Jian Yang, Ling-Qi Yan:
Neural Biplane Representation for BTF Rendering and Acquisition. 81:1-81:11
Deep Geometric Learning
- William Gao, Noam Aigerman, Thibault Groueix, Vova Kim, Rana Hanocka:
TextDeformer: Geometry Manipulation using Text Guidance. 82:1-82:11
- Connor Z. Lin, Koki Nagano, Jan Kautz, Eric R. Chan, Umar Iqbal, Leonidas J. Guibas, Gordon Wetzstein, Sameh Khamis:
Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization. 83:1-83:12
- Yun-Chun Chen, Vladimir G. Kim, Noam Aigerman, Alec Jacobson:
Neural Progressive Meshes. 84:1-84:9
NeRFs for Avatars
- Wangbo Yu, Yanbo Fan, Yong Zhang, Xuan Wang, Fei Yin, Yunpeng Bai, Yan-Pei Cao, Ying Shan, Yang Wu, Zhongqian Sun, Baoyuan Wu:
NOFA: NeRF-based One-shot Facial Avatar Reconstruction. 85:1-85:12
- Yuelang Xu, Hongwen Zhang, Lizhen Wang, Xiaochen Zhao, Han Huang, Guojun Qi, Yebin Liu:
LatentAvatar: Learning Latent Expression Code for Expressive Neural Head Avatar. 86:1-86:10