
Path guiding (Google Summer of Code Project) #2656

Open
wants to merge 59 commits into master

Conversation

BashPrince (Contributor)

This PR will be continuously updated and should not be merged at this point.

Here is the current state of the implementation of "Practical Path Guiding" [Müller et al. 2017]. The reference implementation for Mitsuba can be found at https://github.com/Tom94/practical-path-guiding.

To test the current state, select the Guided Path Tracing lighting engine in the UI and set a sufficiently high number of passes (it can be arbitrarily high; the guided path tracer stops once the sample budget has been reached).

Add new classes GuidedPathTracer, GPTLightingEngine, GPTParameters,
GPTPassCallback and TerminatableRendererController

Add new class GPTLightingEngine adapted from PTLightingEngine.

Add new classes STree and DTreeWrapper, which together form an SD-tree to record and importance-sample the radiance distribution in a scene. Both classes have very simple implementations for now and only 'guide' based on a uniform spherical distribution, without recording anything.
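
For context, this placeholder 'guiding' amounts to uniform spherical sampling. A minimal sketch of that behavior, with illustrative names rather than the actual DTreeWrapper interface:

```cpp
#include <algorithm>
#include <cmath>

struct Direction { float x, y, z; };

// Uniform sampling of the unit sphere from two random numbers in [0, 1).
Direction sample_uniform_sphere(const float u1, const float u2)
{
    const float cos_theta = 1.0f - 2.0f * u1;  // z uniformly in [-1, 1]
    const float sin_theta = std::sqrt(std::max(0.0f, 1.0f - cos_theta * cos_theta));
    const float phi = 2.0f * 3.14159265358979323846f * u2;
    return { sin_theta * std::cos(phi), sin_theta * std::sin(phi), cos_theta };
}

// Constant PDF of the uniform spherical distribution.
float uniform_sphere_pdf()
{
    return 1.0f / (4.0f * 3.14159265358979323846f);
}
```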

Add new class GuidedPathTracer adapted from PathTracer, extending the path tracing algorithm with path guiding using the SD-tree (WIP). NEE does not work at this point.

Add class GPTParameters to parse path guiding parameters and share
params between multiple path guiding components.

Add class GPTPassCallback to use as a callback in the GenericFrameRenderer for implementing the path guiding pass logic. The implementation at this point only aborts the render when the maximum sample budget has been reached. This solution is not ideal since aborting via a renderer controller is not immediate, so the sample budget is usually surpassed before stopping. Aborting also skips the post-processing stages.

Add class TerminatableRendererController to allow the GPTPassCallback
to abort the render.

Change GenericFrameRenderer to accept IRendererController.
Add new class PathGuidedSampler derived from BSDFSampler to provide the correct path-guided PDF for direct lighting MIS.
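
For MIS to remain correct, the sampler has to report the PDF of the combined strategy, not just the strategy that happened to generate the direction. A minimal sketch of the usual one-sample mixture, assuming a BSDF sampling fraction `alpha` and hypothetical `bsdf_pdf`/`dtree_pdf` values (not the actual appleseed code):

```cpp
// Sketch: with probability alpha the BSDF is sampled, otherwise the
// D-tree. Since either strategy could have produced the same direction,
// the PDF reported for MIS is the mixture of both strategies' PDFs.
float guided_pdf(
    const float alpha,      // BSDF sampling fraction in [0, 1]
    const float bsdf_pdf,   // PDF of the BSDF for this direction
    const float dtree_pdf)  // PDF of the D-tree for this direction
{
    return alpha * bsdf_pdf + (1.0f - alpha) * dtree_pdf;
}
```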

Change IPassCallback::on_path_end() to return a bool indicating the end of rendering. If this change is kept, the TerminatableRendererController and the related changes from the last commit should be removed.

Add a new GuidedPathTracerPanel to the render settings window. The render settings of the uniform sampler do not yet work ideally with path guiding. In the rendercomponents, the samples setting is for now passed to the GPTPassCallback so it can end rendering: the GPTPassCallback quits rendering once the number of passes rendered times the number of samples per pass surpasses the total number of samples set by the user. The samples-per-pass setting in the guided path tracer panel is passed to the UniformPixelRendererFactory. The passes parameter for the GenericFrameRendererFactory is overwritten with a high number of passes (10000000) in the rendercomponents, so the PassManagerFunc's pass loop can run until stopped by the GPTPassCallback. This does not work correctly yet because the passes parameter is already accessed before the rendercomponents overwrite it, which leaves the remaining render time estimation completely off and prevents samples from passes beyond the UI passes setting from contributing to the image. For now this can only be avoided by setting an arbitrarily high number of passes in the UI. These issues will need a later overhaul, but with these caveats in mind the guided path tracer can be controlled from the UI.
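
A minimal sketch of the termination rule described above (names are illustrative):

```cpp
#include <cstddef>

// Stop once the rendered passes times the samples per pass reach the
// total sample budget set by the user.
bool should_stop_rendering(
    const std::size_t passes_rendered,
    const std::size_t samples_per_pass,
    const std::size_t total_samples)
{
    return passes_rendered * samples_per_pass >= total_samples;
}
```
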
Add GPTVertex to store data for path guiding whenever radiance is added during path tracing.
Add GPTVertexPath to store a path of multiple such vertices and record radiance as a path is formed.
Adapt the path visitor methods to accept a GPTVertexPath from the caller and use it to record the computed radiance.
Set the ScatteringMode for path guiding to Diffuse whenever the BSDF has a diffuse component, and to Glossy otherwise.
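
For illustration only, this is roughly the kind of data a GPTVertex might carry; the field names and placeholder types (Color3, Vector3) are assumptions, not the actual class layout:

```cpp
#include <array>
#include <cstddef>

// Placeholder types standing in for appleseed's Spectrum/Vector3f.
using Color3  = std::array<float, 3>;
using Vector3 = std::array<float, 3>;

struct DTreeWrapper;  // D-tree at the vertex's spatial location (defined elsewhere)

struct GPTVertexSketch
{
    DTreeWrapper* m_dtree;       // spatial cache cell the vertex falls into
    Vector3       m_direction;   // sampled outgoing direction
    Color3        m_throughput;  // path throughput up to this vertex
    Color3        m_radiance;    // radiance accumulated through this vertex
    float         m_pdf;         // PDF of the sampled direction

    // Called whenever radiance is added further down the path, so the
    // vertex can later splat its share back into the SD-tree.
    void add_radiance(const Color3& r)
    {
        for (std::size_t i = 0; i < 3; ++i)
            m_radiance[i] += r[i];
    }
};
```
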
GPTPassCallback keeps track of passes and iterations and initiates a new iteration with twice the number of passes when the current iteration has finished. On a new iteration, a variance projection based on the current frame's accumulated samples is calculated. The renderer either starts a new SD-tree refinement iteration with double the pass count if the projection indicates that the variance is continuously decreasing between iterations, or renders the remaining sample budget if the variance is projected not to improve with further SD-tree refinement.
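
A very rough sketch of that decision logic; the projection in the actual implementation is more involved, this only illustrates the structure:

```cpp
#include <cstddef>

struct IterationDecision
{
    bool        start_new_refinement;  // refine the SD-tree with double the pass count
    std::size_t passes_next;           // passes to render in the next iteration
};

IterationDecision next_iteration(
    const float       prev_variance,     // variance estimate of the previous iteration
    const float       curr_variance,     // variance estimate of the current iteration
    const std::size_t curr_passes,       // passes rendered in the current iteration
    const std::size_t passes_remaining)  // passes left in the total budget
{
    // Variance still decreasing between iterations: further SD-tree
    // refinement is projected to pay off, so double the pass count.
    if (curr_variance < prev_variance)
        return { true, curr_passes * 2 };

    // Otherwise spend the remaining budget on a final rendering iteration.
    return { false, passes_remaining };
}
```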

Add VarianceTrackingShadingResultFrameBuffer together with a new factory. The new framebuffer inherits from ShadingResultFrameBuffer and adds a new channel to keep track of the summed squared samples, which are used for the variance calculation.
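
With the extra channel, a per-pixel variance estimate can be computed from the sample sum, the sum of squared samples and the sample count. A sketch of that estimate (illustrative, not the exact framebuffer code):

```cpp
// Variance of the pixel mean from running sums:
//   sample variance  s^2 = (sum_sq - n * mean^2) / (n - 1)
//   variance of mean     = s^2 / n
float variance_of_pixel_mean(
    const float sum,     // sum of samples for this pixel
    const float sum_sq,  // sum of squared samples for this pixel
    const float n)       // number of samples (weight sum)
{
    if (n < 2.0f)
        return 0.0f;

    const float mean = sum / n;
    const float sample_variance = (sum_sq - n * mean * mean) / (n - 1.0f);
    return sample_variance / n;
}
```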

Add a virtual get_total_channel_count() method to the IShadingResultFrameBufferFactory interface and derived classes to allow dynamically determining the number of channels based on the factory. This method should now be used instead of the corresponding static method in ShadingResultFrameBuffer.

To make the guided path tracer fully checkpoint-compatible it is not enough to only store the shading buffer contents; a mechanism for dumping and reloading the SD-tree will also have to be added.

TerminatableRendererController was added in commit c04f34a. Aborting a render via this controller is not immediate since the renderer controllers are polled asynchronously at fixed time intervals. Aborting via the pass callback was added in commit ad4fd68, making the TerminatableRendererController unneeded.

Modified headers, comments, definitions and copyrights.
@Mango-3 added the WIP ⚠ label on Jul 1, 2019
Remove unnecessary BSDF evaluations and if-checks in path guiding.

Fix the variance estimation in VarianceTrackingShadingResultFramebuffer. Previously I divided both the sample sum and the sum of squared samples by the weight sum (which is the number of samples for that pixel), but then divided by the number of samples again for the estimate. The division is now only done for the estimate.

Add methods variance_image() and variance_to_tile() to the factory and the framebuffer to write the variance estimate for each pixel to an image. This is needed for the optional sample combination between path guiding iterations.
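
The combination itself would then weight each iteration's image by its inverse variance per pixel; a sketch under that assumption (not the exact implementation):

```cpp
#include <cstddef>
#include <vector>

// Inverse-variance weighted combination of one pixel across iterations.
float combine_pixel(
    const std::vector<float>& values,     // pixel value from each iteration
    const std::vector<float>& variances)  // variance estimate from each iteration
{
    float weighted_sum = 0.0f;
    float weight_sum = 0.0f;

    for (std::size_t i = 0; i < values.size(); ++i)
    {
        const float w = variances[i] > 0.0f ? 1.0f / variances[i] : 0.0f;
        weighted_sum += w * values[i];
        weight_sum += w;
    }

    return weight_sum > 0.0f ? weighted_sum / weight_sum : 0.0f;
}
```
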
BashPrince added a commit to BashPrince/appleseed that referenced this pull request Jul 14, 2019
Split from PR appleseedhq#2656 with variance estimation fix to keep reviews small.

Contains the added VarianceTrackingShadingResultFramebuffer and matching
factory, which keep track of the sum of squared samples to allow per
pixel variance estimation.

Also adapted the IShadingResultFrameBufferFactory interface and derived classes to allow dynamic calls to the get_total_channel_count() method for checkpoints.
Add several options to assign ScatteringModes to path guided bounces.
Create UI controls to set these options.
Fix a bug for guiding bounce mode Learn where ScatteringMode::None was not terminated, leading to invalid NaN samples.

Fix a guided PDF evaluation bug where positive PDF values were returned even for sampling modes that were disabled.
Change handling of division of incoming radiance by throughput and pdf.

Keep track of the average BSDF sampling fraction in the statistics.

The sampling fraction bug (dividing the product estimate by the PDF) was accidentally reintroduced in the previous commit.
Previously only the first four main color channels were combined in the final image weighting. Now all color channels of the frame are included.

Only run the ScatteringMode assignment algorithm when the corresponding mode is enabled.

Only sort in the ratios one level before the leaf node to speed up the algorithm.
Add empty space after if, for, etc.

Break up some long lines.
Add static type casts in SD-tree.
Cast size_t arguments of std::pow to float instead of int.
@BashPrince (Contributor, Author) commented on Aug 23, 2019

Settings for the path guiding UI.

Settings

  • Samples per pass: Controls the number of samples of one path guiding pass. The actual number of samples per pixel is set as usual in the Uniform Sampler settings. For expected results the user has to make sure that the number of passes in the sampler settings is sufficiently high for the sample count.
    Example: 1024 samples per pixel at 4 samples per pass = 256 passes in the sampler.
    It's all a bit convoluted for now; I'd still like to improve that.
  • Spatial Filter: Sets the spatial S-Tree filter type. Recommended: stochastic.
  • Directional Filter: Sets the directional D-Tree filter type. Recommended: box.
  • BSDF Sampling Fraction Mode: Sets the ratio between ordinary BSDF sampling and SD-tree sampling to a fixed fraction, or learns the ideal fraction automatically.
  • Fixed BSDF Sampling Fraction: Sets the fraction if the sampling fraction mode is Fixed.
  • Learning Rate: Sets the learning rate if the sampling fraction mode is Learn.
  • Iteration Progression: Either combine the images across iterations based on their estimated inverse variance (Combine Iterations), or initiate a final iteration when the projected variance of the final image increases (Automatic Cut-off).
  • Guided Bounce Mode: To fit within our path tracing routine, a scattering mode has to be chosen at each path extension. Since path-guided bounces replace ordinary BSDF sampling, a mode has to be selected. This setting gives some control over how that happens.

"Learned Distribution" will calculate how spread out over the sphere the incoming radiance at any D-Tree's spatial location is. If >40% of the energy in the last iteration are contained within <= 10% of the spherical directions the D-Tree returns Glossy for a direction sample and Diffuse otherwise. Ideally fireflies caused by spikey D-tree distributions can be prevented (if caustics are disabled).
The remaining modes will strictly (Strictly Diffuse / Strictly Specular) assign a user chosen mode on path guided bounces (or terminate if the remaining path scattering modes do not allow bounces of that type) or preferably assign the chosen mode and assign the other mode otherwise (Prefer Diffuse / Prefer Specular).
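
A small sketch of the Learned Distribution heuristic above, assuming the energy fraction and the covered solid-angle fraction have already been computed from the D-tree (names and traversal are not the actual code):

```cpp
enum class GuidedScatteringMode { Diffuse, Glossy };

// More than 40% of the energy within at most 10% of the directions is
// treated as a spiky distribution and classified as Glossy.
GuidedScatteringMode classify_dtree(
    const float energy_fraction,       // fraction of energy in the brightest regions
    const float solid_angle_fraction)  // fraction of the sphere those regions cover
{
    const bool spiky = energy_fraction > 0.4f && solid_angle_fraction <= 0.1f;
    return spiky ? GuidedScatteringMode::Glossy : GuidedScatteringMode::Diffuse;
}
```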

One issue with assigning the scattering mode based on the local distribution is that this assignment is likely to change across iterations, so paths would be terminated differently in different iterations (unless all bounce types are unlimited), introducing brightness changes that get combined in the final image weighting. If a strict mode is chosen it might not be the ideal mode with regard to terminating caustics, and the regular path tracer would also be starved for bounces of that mode. I think none of these options are ideal; this will require some more thought in the future.

To get an unbiased comparison to path tracing, all bounce modes should go to the full path length. In some scenes, especially with hard-to-reach light sources, it is also crucial to disable Russian roulette by setting its start past the total path length.

At the moment rendering checkpoints will not work with path guiding, since this requires a way to store and load the SD-tree to/from disk. Saving and loading is not really an issue in itself, but doing so will require integration with the rest of the checkpoint mechanism.

There are still some issues with the path guiding algorithm that I will have to investigate further. One such issue is prominent fireflies that show up with path guiding.

@BashPrince changed the title from "Path guiding" to "Path guiding (Google Summer of Code Project)" on Aug 23, 2019
Do not learn the distribution of direct incoming light when it is being sampled with NEE.

Increase energy/area ratio for a D-Tree to be considered glossy.

Fix an SD-tree statistics bug.
Change the subdivision criterion to the value given in the SIGGRAPH path guiding course notes.
Add min/max BSDF sampling fraction and the number of D-trees.
This commit marks the final commit for my work on Google Summer of Code
2019. This PR will continue to be worked on beyond this point.
@BashPrince (Contributor, Author)

The last commit was accidental and should be removed.

Add functionality to save the SD-tree to disk in a format compatible with the visualizer tool at https://github.com/Tom94/practical-path-guiding.

Add UI settings to specify a path and which iterations should be saved.
Fix uncomment in message output in previous commit.
Enable or disable the SD-tree saving UI components (path and browse button) after loading the current mode in directly linked values.
Handle Spectrum validity checks and multiplications in a single loop.
Print the weight of each image's contribution when the render completes.

Cleanup iteration combination.
Reorder statistics output and rename some statistics variables.
@elite-sheep (Contributor)

Hi @BashPrince, are you still working on merging this into the master branch? I saw that path guiding is a topic on the GSoC 2020 idea list and I would like to make it my topic. I am curious how your project is going.

@BashPrince (Contributor, Author)

Hey,
Yes, I'm still working on it. Currently I'm trying to extend the algorithm for my bachelor thesis (in a separate branch), and I'll then still have to integrate all of the results. The core algorithm already works but still needs quite a bit of code overhaul.

Build fixes.
Fix a bug where a BSDFSample can have a valid mode with a zero PDF after SD-tree sampling in PathGuidedSampler.