Invited talk by Dr. Stavros Diolatzis, Friday 14/10/2022 at 12:30, A56

Department of Informatics & Telecommunications - A56

Invited talk to be held on Friday 14/10/2022, at 12:30, at the Department of Informatics & Telecommunications, room A56.

Speaker: Dr Stavros Diolatzis, GraphDeco, INRIA Sophia Antipolis, France

Title: Learning Radiance Fields: From Global Illumination to Generative Models

Abstract: Creating realistic images of virtual scenes is a process that involves simulating light interactions, traditionally through path tracing and Monte Carlo methods, as light is transmitted and reflected before reaching the virtual camera. This process is used extensively in industries such as film, video games, physical simulation and architectural design. Monte Carlo methods, in a path tracing context, can handle complex lighting effects, but the resulting images are plagued by noise, which is reduced by simulating additional paths. This can be computationally expensive, and considerable research has gone into making path tracing more efficient and accurate.
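As a rough illustration of the noise-versus-samples trade-off mentioned above, the toy estimator below averages random samples of a simple cosine "radiance" term. It is a minimal sketch of Monte Carlo estimation, not a path tracer, and all names in it are invented for illustration; the point it demonstrates is that the error shrinks roughly as 1/sqrt(N), which is why reducing noise by adding paths is costly.

```python
import math
import random

def incoming_radiance(theta):
    # Toy stand-in for the light arriving from direction theta; a real
    # path tracer would recursively trace a ray into the scene instead.
    return max(0.0, math.cos(theta))

def mc_estimate(num_samples):
    # Monte Carlo estimate of the integral of cos(theta) over [0, pi/2]
    # (true value 1.0), using uniformly sampled directions.
    width = math.pi / 2
    total = sum(incoming_radiance(random.uniform(0.0, width))
                for _ in range(num_samples))
    return width * total / num_samples

if __name__ == "__main__":
    random.seed(0)
    for n in (16, 256, 4096):
        # The error (noise) drops roughly as 1/sqrt(n): quadrupling the
        # number of samples only halves the noise.
        print(n, mc_estimate(n))
```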

While past methods have focused on improving the sampling quality of path tracing, neural networks have recently gained popularity as a way to render synthetic or captured scenes. This shift towards a neurally augmented rendering pipeline is reflected in the methods proposed in this thesis, where increasingly many aspects of rendering are handled by neural networks. A key component in this task is the choice of scene representation, for which many alternatives have been proposed. We demonstrate that radiance fields are a good fit, as they can be used to reduce noise in traditional path tracing and can also be learned efficiently by neural networks.
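The sketch below illustrates the general idea of a learned radiance field: a small network that maps a 3D position and viewing direction to an RGB radiance value. It is a generic, hypothetical toy (plain NumPy, untrained random weights), not the speaker's model or any specific published architecture.

```python
import numpy as np

class ToyRadianceField:
    """Minimal sketch of a neural radiance field: a tiny MLP mapping a 3D
    position and viewing direction to an RGB radiance value."""

    def __init__(self, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.1, size=(6, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(scale=0.1, size=(hidden, 3))
        self.b2 = np.zeros(3)

    def query(self, position, direction):
        # Concatenate position and direction, apply one hidden layer with
        # ReLU, and squash the output into [0, 1] RGB radiance.
        x = np.concatenate([position, direction])
        h = np.maximum(0.0, x @ self.w1 + self.b1)
        return 1.0 / (1.0 + np.exp(-(h @ self.w2 + self.b2)))

field = ToyRadianceField()
print(field.query(np.array([0.0, 0.5, 1.0]), np.array([0.0, 0.0, -1.0])))
```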
 
First, we propose a method to inject our knowledge of the scene materials into an approximation of radiance fields to improve sampling, especially in scenes with glossy materials. Next, we show that when training a network to represent radiance fields for variable scenes, uniform sampling of the scene configurations leads to poor results. Instead, we actively explore the space of possible scene configurations and use the network to interactively render variable scenes with hard effects, such as caustics. Even though we use a network for the final rendering, our explicit scene representation vector preserves artistic control over the scene's objects, materials and emitters. Finally, we develop a generative model for mesoscale materials with complex structure and appearance. Here we use volumetric radiance fields and condition our network on geometry and appearance parameters for artistic control over the represented materials, which is crucial in our context.
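The following hypothetical sketch illustrates the kind of explicit conditioning described above: a scene configuration vector (e.g. material roughness, emitter intensity) is concatenated to the network input, so those parameters remain directly editable while the network produces the output color. All function names, dimensions and parameters here are invented for illustration and do not reproduce the speaker's pipeline.

```python
import numpy as np

def render_with_configuration(weights, pixel_input, scene_config):
    # Concatenate the per-pixel input (position + view direction) with an
    # explicit scene configuration vector, then evaluate a tiny MLP. The
    # configuration stays a plain, editable vector of scene parameters.
    x = np.concatenate([pixel_input, scene_config])
    w1, b1, w2, b2 = weights
    h = np.maximum(0.0, x @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))  # RGB in [0, 1]

rng = np.random.default_rng(1)
weights = (rng.normal(scale=0.1, size=(9, 32)), np.zeros(32),
           rng.normal(scale=0.1, size=(32, 3)), np.zeros(3))
pixel = np.array([0.2, -0.4, 0.7, 0.0, 0.0, -1.0])   # position + view direction
config = np.array([0.8, 0.3, 2.0])                   # e.g. roughness, hue, emitter power
print(render_with_configuration(weights, pixel, config))
```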
 
Keywords: Computer Graphics, Path Tracing, Monte Carlo, Neural Rendering, Radiance Fields