NeRF

NeRF: Neural Radiance Fields Implementation in PyTorch

Open Tiny-NeRF in Colab

NeRF (Neural Radiance Fields) is a method for synthesizing photo-realistic images of complex 3D scenes by modeling the volumetric scene as a continuous function of spatial position and viewing direction. Here is an animation generated by this repository:

Results


Iteration 3000 — Loss: 0.0035, PSNR: 24.52, Time: 1.00 s/iter (51.94 min total)

Generated image (left), target image (middle), and PSNR vs. iterations graph over 3000 iterations. The model reaches roughly 25 PSNR (Peak Signal-to-Noise Ratio) after 3000 iterations, closely reproducing the target views of the scene.
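The logged loss is the mean squared error between rendered and target pixels (in [0, 1]), and PSNR follows directly from it as `-10 * log10(MSE)`. A quick sanity check on the numbers above (the small gap comes from the loss being printed rounded):

```python
import math

def psnr_from_mse(mse):
    # PSNR for images with pixel values in [0, 1]: -10 * log10(MSE).
    return -10.0 * math.log10(mse)

# The logged loss of 0.0035 corresponds to roughly the logged PSNR of 24.52:
print(round(psnr_from_mse(0.0035), 2))  # → 24.56
```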

Method

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
Ben Mildenhall*1, Pratul P. Srinivasan*1, Matthew Tancik*1, Jonathan T. Barron2, Ravi Ramamoorthi3, Ren Ng1
1UC Berkeley, 2Google Research, 3UC San Diego
*denotes equal contribution
in ECCV 2020 (Oral Presentation, Best Paper Honorable Mention)

A neural radiance field is a simple fully connected network (weights are ~5 MB) trained to reproduce input views of a single scene using a rendering loss. The network directly maps from spatial location and viewing direction (5D input) to color and opacity (4D output), acting as the "volume", so we can use volume rendering to differentiably render new views.
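The two ingredients above — a coordinate MLP and differentiable volume rendering — can be sketched in PyTorch as follows. This is a minimal illustration, not this repository's actual code: the network here is a Tiny-NeRF-style model that takes only position (the full model also conditions color on viewing direction), and the layer sizes, frequency count, and `render_rays` helper are assumptions for the sketch.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    # Map each coordinate through sin/cos at increasing frequencies so the
    # MLP can represent high-frequency scene detail.
    out = [x]
    for i in range(num_freqs):
        for fn in (torch.sin, torch.cos):
            out.append(fn((2.0 ** i) * x))
    return torch.cat(out, dim=-1)

class TinyNeRF(nn.Module):
    # Fully connected network: encoded 3D position -> RGB color + density.
    def __init__(self, num_freqs=6, hidden=128):
        super().__init__()
        in_dim = 3 * (1 + 2 * num_freqs)
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB (3) + volume density (1)
        )

    def forward(self, pts):
        raw = self.net(positional_encoding(pts))
        rgb = torch.sigmoid(raw[..., :3])   # colors in [0, 1]
        sigma = torch.relu(raw[..., 3])     # non-negative density
        return rgb, sigma

def render_rays(model, rays_o, rays_d, near=2.0, far=6.0, n_samples=64):
    # Sample points along each ray and alpha-composite the predicted colors.
    t = torch.linspace(near, far, n_samples)
    pts = rays_o[:, None, :] + rays_d[:, None, :] * t[None, :, None]
    rgb, sigma = model(pts)
    delta = t[1] - t[0]
    alpha = 1.0 - torch.exp(-sigma * delta)
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=-2)  # (num_rays, 3)
```

Because every step (MLP, alpha compositing, weighted sum) is differentiable, the MSE between `render_rays` output and target pixels can be backpropagated straight into the network weights.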
