Path guiding (Google Summer of Code Project) #2656

Open: wants to merge 59 commits into base: master

Changes from 1 commit (59 commits total)
c04f34a
Commit initial path guiding sources
BashPrince Jun 12, 2019
ad4fd68
Add PathGuidedSampler for SD-Tree sampling
BashPrince Jun 14, 2019
748eee8
Add path guiding UI settings
BashPrince Jun 14, 2019
e2c828b
Add radiance recording infrastructure
BashPrince Jun 16, 2019
8063616
Merge branch 'master' into path_guiding
BashPrince Jun 16, 2019
eb3046b
Adapt path guiding scattering mode
BashPrince Jun 20, 2019
d57875f
Add pass callback logic
BashPrince Jun 28, 2019
02f7111
Remove TerminatableRendererController
BashPrince Jun 30, 2019
565b360
Cosmetic cleanup
BashPrince Jul 1, 2019
2416d26
Merge branch 'master' into path_guiding
BashPrince Jul 1, 2019
722046f
Small changes in PathGuidedSampler
BashPrince Jul 11, 2019
1dec8ec
Fix CI build errors
BashPrince Jul 11, 2019
db55f23
Merge branch 'master' into path_guiding
BashPrince Jul 11, 2019
c0180ce
Fix variance estimation
BashPrince Jul 14, 2019
e95c61d
Port Mitsuba implementation of SD-tree
BashPrince Jul 21, 2019
da67353
Merge branch 'master' into path_guiding_tree
BashPrince Jul 21, 2019
b1acd42
Implement MySTreeNode
BashPrince Jul 22, 2019
5dc0414
Make choose_node method private
BashPrince Jul 22, 2019
6fd1b87
Further modifications to MySTreeNode
BashPrince Jul 22, 2019
91d130f
Add spatial box filter
BashPrince Jul 23, 2019
bfaa8e8
SD tree implementation WIP
BashPrince Aug 2, 2019
ae5dd51
Backup WIP
BashPrince Aug 4, 2019
70772c1
Ported mitsuba learned bsdf sampling fraction
BashPrince Aug 4, 2019
327c8b5
fraction learning WIP
BashPrince Aug 6, 2019
4be593b
Small changes
BashPrince Aug 7, 2019
98912ab
Add GUI settings and sample combination
BashPrince Aug 12, 2019
eec3e7d
Fix BSDF sampling fraction learning
BashPrince Aug 14, 2019
cc37be3
Conform to styleguide
BashPrince Aug 14, 2019
71a36a3
Changes in PathGuidedSampler
BashPrince Aug 15, 2019
a61d06c
Further changes to PathguidedSampler
BashPrince Aug 16, 2019
b462be0
Several path guiding changes
BashPrince Aug 16, 2019
b8bd774
Various path guiding changes
BashPrince Aug 19, 2019
f7abdc5
Fix bounce parameter GUI bug
BashPrince Aug 20, 2019
577315a
D-Tree samples return ScatteringMode
BashPrince Aug 21, 2019
bd724d5
Working version
BashPrince Aug 21, 2019
03e2655
Improve handling of guided bounce ScatteringMode
BashPrince Aug 21, 2019
64526c2
Fix path guiding ScatteringMode bugs
BashPrince Aug 22, 2019
65e26a0
Merge branch 'master' into path_guiding
BashPrince Aug 22, 2019
b0b128d
Minor changes in STree class
BashPrince Aug 22, 2019
b11cf8e
Fix BSDF sampling fraction bug
BashPrince Aug 22, 2019
be9672e
Combine all channels in inverse variance mixing
BashPrince Aug 22, 2019
bca5a9c
Improve D-Tree ScatteringMode assignment
BashPrince Aug 23, 2019
da1b77e
Cosmetic code changes
BashPrince Aug 23, 2019
d01bad3
Fixes to pass CI
BashPrince Aug 23, 2019
5b46598
More fixes to pass CI
BashPrince Aug 23, 2019
32641d6
Add indirect radiance only when NEE is enabled
BashPrince Aug 25, 2019
20f1cff
Change S-Tree subdivision criterion
BashPrince Aug 25, 2019
4d9b480
Add more SD-Tree statistics
BashPrince Aug 25, 2019
615478e
Final commit for GSoC 2019
BashPrince Aug 25, 2019
16a7428
add breaks
BashPrince Aug 25, 2019
90ddf4b
Add SD-tree disk writing
BashPrince Sep 28, 2019
e3fa458
Merge branch 'master' into path_guiding
BashPrince Oct 3, 2019
569046b
Fix previous error in ProgressTileCallback
BashPrince Oct 3, 2019
3513e47
Correct save iterations mode on UI setup
BashPrince Oct 3, 2019
26bd8c7
Cleanup GPTVertex::record_to_tree
BashPrince Oct 3, 2019
18a42de
Print inverse variance sample combination weights
BashPrince Oct 4, 2019
5487e2b
Reformat statistics printing
BashPrince Oct 4, 2019
bac706e
Merge remote-tracking branch 'upstream/master' into path_guiding
BashPrince Apr 2, 2020
d1cbd9e
Build and bug fixes after merging master
BashPrince Apr 2, 2020
Commit 70772c18dce5cfade41921657dfb8c55df04e5fa: Ported mitsuba learned bsdf sampling fraction
BashPrince committed Aug 4, 2019
16 changes: 10 additions & 6 deletions src/appleseed/renderer/kernel/lighting/gpt/gptlightingengine.cpp
@@ -592,7 +592,8 @@ namespace
vertex.m_bsdf_data,
vertex.m_scattering_modes,
vertex_radiance,
-m_light_path_stream);
+m_light_path_stream,
+guided_path.get_sampling_fraction());
}
}

@@ -608,7 +609,8 @@ namespace
vertex.m_bsdf_data,
vertex.m_scattering_modes,
vertex_radiance,
-m_light_path_stream);
+m_light_path_stream,
+guided_path.get_sampling_fraction());
}
}

@@ -661,7 +663,8 @@ namespace
const void* bsdf_data,
const int scattering_modes,
DirectShadingComponents& vertex_radiance,
-LightPathStream* light_path_stream)
+LightPathStream* light_path_stream,
+const float sampling_fraction)
{
DirectShadingComponents dl_radiance;

@@ -675,7 +678,7 @@

const PathGuidedSampler path_guided_sampler(
m_sd_tree->get_d_tree(foundation::Vector3f(shading_point.get_point())),
-m_params.m_bsdf_sampling_fraction,
+sampling_fraction,
bsdf,
bsdf_data,
scattering_modes, // bsdf_sampling_modes (unused)
@@ -715,7 +718,8 @@ namespace
const void* bsdf_data,
const int scattering_modes,
DirectShadingComponents& vertex_radiance,
-LightPathStream* light_path_stream)
+LightPathStream* light_path_stream,
+const float sampling_fraction)
{
DirectShadingComponents ibl_radiance;

@@ -726,7 +730,7 @@

const PathGuidedSampler path_guided_sampler(
m_sd_tree->get_d_tree(foundation::Vector3f(shading_point.get_point())),
-m_params.m_bsdf_sampling_fraction,
+sampling_fraction,
bsdf,
bsdf_data,
scattering_modes, // bsdf_sampling_modes (unused)
@@ -131,7 +131,7 @@ bool PathGuidedSampler::guide_path_extension(
true, // multiply by |cos(incoming, normal)|
m_bsdf_sampling_modes,
bsdf_sample);
-wo_pdf = bsdf_sample.get_probability();
+wo_pdf = bsdf_pdf = bsdf_sample.get_probability();
d_tree_pdf = 0;
return is_path_guided;
}
9 changes: 6 additions & 3 deletions src/appleseed/renderer/kernel/lighting/guidedpathtracer.h
@@ -715,20 +715,23 @@ bool GuidedPathTracer<PathVisitor, VolumeVisitor, Adjoint>::process_bounce(
ShadingRay& next_ray,
GPTVertexPath& guided_path)
{
foundation::Vector3f voxel_size;
DTree *d_tree = m_sd_tree->get_d_tree(foundation::Vector3f(vertex.get_point()), voxel_size);
const float sampling_fraction = d_tree->bsdfSamplingFraction();
guided_path.set_sampling_fraction(sampling_fraction);

// Let the path visitor handle the scattering event.
m_path_visitor.on_scatter(vertex, guided_path);

// Terminate the path if all scattering modes are disabled.
if (vertex.m_scattering_modes == ScatteringMode::None)
return false;

-foundation::Vector3f voxel_size;
-DTree* d_tree = m_sd_tree->get_d_tree(foundation::Vector3f(vertex.get_point()), voxel_size);
float wo_pdf, bsdf_pdf, d_tree_pdf;

PathGuidedSampler sampler(
d_tree,
-m_bsdf_sampling_fraction,
+sampling_fraction,
*vertex.m_bsdf,
vertex.m_bsdf_data,
vertex.m_scattering_modes,
184 changes: 181 additions & 3 deletions src/appleseed/renderer/kernel/lighting/sdtree.h
@@ -55,10 +55,98 @@ static void atomic_add(std::atomic<float>& atomic, const float value)
;
}

inline float logistic(float x)
{
return 1 / (1 + std::exp(-x));
}

foundation::Vector3f cylindrical_to_cartesian(const foundation::Vector2f &cylindrical_direction);

foundation::Vector2f cartesian_to_cylindrical(const foundation::Vector3f &d);
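For context on these two declarations: SD-trees index directions through a 2D cylindrical parameterization of the sphere. Below is a minimal standalone sketch of one common convention (u = (cos θ + 1) / 2, v = φ / 2π, as in Müller's reference implementation); the convention used by this port may differ, and the `Vec2`/`Vec3` helper types are placeholders standing in for `foundation::Vector2f`/`Vector3f`.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Placeholder math types standing in for foundation::Vector2f / Vector3f.
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

const float TwoPi = 6.283185307179586f;

// Map a unit direction to [0,1]^2: u is the remapped cosine of the polar
// angle, v is the azimuth normalized to one turn (assumed convention,
// for illustration only).
Vec2 cartesian_to_cylindrical_sketch(const Vec3& d)
{
    float phi = std::atan2(d.y, d.x);
    if (phi < 0.0f)
        phi += TwoPi;
    return { (d.z + 1.0f) * 0.5f, phi / TwoPi };
}

// Inverse mapping: recover the unit direction from (u, v).
Vec3 cylindrical_to_cartesian_sketch(const Vec2& p)
{
    const float cos_theta = 2.0f * p.x - 1.0f;
    const float sin_theta = std::sqrt(std::max(0.0f, 1.0f - cos_theta * cos_theta));
    const float phi = TwoPi * p.y;
    return { sin_theta * std::cos(phi), sin_theta * std::sin(phi), cos_theta };
}
```

Under this convention the two functions round-trip for any unit direction, which is the property the quadtree lookup relies on.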

// Implements the stochastic-gradient-based Adam optimizer [Kingma and Ba 2014]
class AdamOptimizer
{
public:
AdamOptimizer(float learningRate, int batchSize = 1, float epsilon = 1e-08f, float beta1 = 0.9f, float beta2 = 0.999f)
{
m_state.iter = 0;
m_state.firstMoment = 0;
m_state.secondMoment = 0;
m_state.variable = 0;
m_state.batchAccumulation = 0;
m_state.batchGradient = 0;
m_hparams = {learningRate, batchSize, epsilon, beta1, beta2};
}

AdamOptimizer &operator=(const AdamOptimizer &arg)
{
m_state = arg.m_state;
m_hparams = arg.m_hparams;
return *this;
}

AdamOptimizer(const AdamOptimizer &arg)
{
*this = arg;
}

void append(float gradient, float statisticalWeight)
{
m_state.batchGradient += gradient * statisticalWeight;
m_state.batchAccumulation += statisticalWeight;

if (m_state.batchAccumulation > m_hparams.batchSize)
{
step(m_state.batchGradient / m_state.batchAccumulation);

m_state.batchGradient = 0;
m_state.batchAccumulation = 0;
}
}

void step(float gradient)
{
++m_state.iter;

float actualLearningRate = m_hparams.learningRate * std::sqrt(1 - std::pow(m_hparams.beta2, m_state.iter)) / (1 - std::pow(m_hparams.beta1, m_state.iter));
m_state.firstMoment = m_hparams.beta1 * m_state.firstMoment + (1 - m_hparams.beta1) * gradient;
m_state.secondMoment = m_hparams.beta2 * m_state.secondMoment + (1 - m_hparams.beta2) * gradient * gradient;
m_state.variable -= actualLearningRate * m_state.firstMoment / (std::sqrt(m_state.secondMoment) + m_hparams.epsilon);

// Clamp the variable to the range [-20, 20] as a safeguard against numerical instability:
// since the sigmoid exponentiates the variable, values of -20 or 20 already yield
// *extremely* small and large sampling fractions that are practically never needed.
m_state.variable = std::min(std::max(m_state.variable, -20.0f), 20.0f);
}

float variable() const
{
return m_state.variable;
}

private:
struct State
{
int iter;
float firstMoment;
float secondMoment;
float variable;

float batchAccumulation;
float batchGradient;
} m_state;

struct Hyperparameters
{
float learningRate;
int batchSize;
float epsilon;
float beta1;
float beta2;
} m_hparams;
};
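As a sanity check on the optimizer logic above, here is a standalone re-sketch of the same bias-corrected Adam update, applied to the 1-D loss L(x) = (x - 3)^2. The `Adam` struct and its default hyperparameters are illustrative only, not part of this PR.

```cpp
#include <cassert>
#include <cmath>

// Minimal standalone Adam mirroring AdamOptimizer::step() above (illustrative only).
struct Adam
{
    float lr = 0.1f, beta1 = 0.9f, beta2 = 0.999f, eps = 1e-8f;
    float m = 0.0f, v = 0.0f, x = 0.0f;  // moment estimates and the trainable variable
    int t = 0;

    void step(const float grad)
    {
        ++t;
        // Bias-corrected learning rate, identical in form to the class above.
        const float alpha =
            lr * std::sqrt(1.0f - std::pow(beta2, (float)t)) / (1.0f - std::pow(beta1, (float)t));
        m = beta1 * m + (1.0f - beta1) * grad;
        v = beta2 * v + (1.0f - beta2) * grad * grad;
        x -= alpha * m / (std::sqrt(v) + eps);
    }
};
```

Feeding it the gradient 2(x - 3) for a few hundred steps drives x toward the minimum at 3, which is the behavior the sampling-fraction learning depends on.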

class QuadTreeNode
{
public:
@@ -379,14 +467,15 @@ class DTree
: m_root_node(true)
, m_current_iter_sample_weight(0.0f)
, m_previous_iter_sample_weight(0.0f)
, bsdfSamplingFractionOptimizer(0.01f)
{}

DTree(const DTree& other)
: m_current_iter_sample_weight(other.m_current_iter_sample_weight.load(std::memory_order_relaxed))
, m_previous_iter_sample_weight(other.m_previous_iter_sample_weight)
, m_root_node(other.m_root_node)
-{
-}
+, bsdfSamplingFractionOptimizer(other.bsdfSamplingFractionOptimizer)
+{}

void record(const DTreeRecord& d_tree_record)
{
@@ -422,6 +511,11 @@
default:
break;
}

if (d_tree_record.product > 0)
{
optimizeBsdfSamplingFraction(d_tree_record, 1.0f);
}
}

void sample(SamplingContext& sampling_context, DTreeSample& d_tree_sample) const
@@ -500,10 +594,83 @@
return m_root_node.radiance_sum() * (1.0f / m_previous_iter_sample_weight) * foundation::RcpFourPi<float>();
}

-private:
inline float bsdfSamplingFraction(float variable) const
{
return logistic(variable);
}

inline float dBsdfSamplingFraction_dVariable(float variable) const
{
float fraction = bsdfSamplingFraction(variable);
return fraction * (1 - fraction);
}

inline float bsdfSamplingFraction() const
{
return bsdfSamplingFraction(bsdfSamplingFractionOptimizer.variable());
}

void optimizeBsdfSamplingFraction(const DTreeRecord &rec, float ratioPower)
{
m_lock.lock();

// GRADIENT COMPUTATION
float variable = bsdfSamplingFractionOptimizer.variable();
float samplingFraction = bsdfSamplingFraction(variable);

// Loss gradient w.r.t. sampling fraction
float mixPdf = samplingFraction * rec.bsdf_pdf + (1 - samplingFraction) * rec.d_tree_pdf;
float ratio = std::pow(rec.product / mixPdf, ratioPower);
float dLoss_dSamplingFraction = -ratio / rec.wo_pdf * (rec.bsdf_pdf - rec.d_tree_pdf);

// Chain rule to get loss gradient w.r.t. trainable variable
float dLoss_dVariable = dLoss_dSamplingFraction * dBsdfSamplingFraction_dVariable(variable);

// Apply some regularization so that the trainable variable does not grow too large.
// We use L2 regularization, whose gradient is linear in the variable.
float l2RegGradient = 0.01f * variable;

float lossGradient = l2RegGradient + dLoss_dVariable;

// ADAM GRADIENT DESCENT
bsdfSamplingFractionOptimizer.append(lossGradient, rec.sample_weight);

m_lock.unlock();
}
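The gradient above can be cross-checked: for ratioPower == 1 it is the derivative of the per-sample loss estimate L(v) = -(product / wo_pdf) * log(mix_pdf(sigma(v))), where mix_pdf(a) = a * bsdf_pdf + (1 - a) * d_tree_pdf and sigma is the logistic function. A standalone sketch with made-up sample values, comparing the analytic gradient against a finite difference:

```cpp
#include <cassert>
#include <cmath>

// Finite-difference check of the gradient computed in optimizeBsdfSamplingFraction().
// All sample values used in the check are invented for illustration.

float logistic(const float x) { return 1.0f / (1.0f + std::exp(-x)); }

// Per-sample loss estimate whose gradient the optimizer descends (ratioPower == 1).
float loss(const float variable, const float product, const float wo_pdf,
           const float bsdf_pdf, const float d_tree_pdf)
{
    const float a = logistic(variable);
    const float mix_pdf = a * bsdf_pdf + (1.0f - a) * d_tree_pdf;
    return -(product / wo_pdf) * std::log(mix_pdf);
}

// Same gradient expression as the method above, without the L2 term.
float analytic_gradient(const float variable, const float product, const float wo_pdf,
                        const float bsdf_pdf, const float d_tree_pdf)
{
    const float a = logistic(variable);
    const float mix_pdf = a * bsdf_pdf + (1.0f - a) * d_tree_pdf;
    const float ratio = product / mix_pdf;
    const float dloss_da = -ratio / wo_pdf * (bsdf_pdf - d_tree_pdf);
    return dloss_da * a * (1.0f - a);  // chain rule through the sigmoid
}
```

Agreement between the two confirms the chain rule through `dBsdfSamplingFraction_dVariable` is applied correctly.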

private:
QuadTreeNode m_root_node;
std::atomic<float> m_current_iter_sample_weight;
float m_previous_iter_sample_weight;

AdamOptimizer bsdfSamplingFractionOptimizer;

class SpinLock
{
public:
SpinLock()
{
m_mutex.clear(std::memory_order_release);
}

SpinLock(const SpinLock &other) { m_mutex.clear(std::memory_order_release); }
SpinLock &operator=(const SpinLock &other) { return *this; }

void lock()
{
while (m_mutex.test_and_set(std::memory_order_acquire))
{
}
}

void unlock()
{
m_mutex.clear(std::memory_order_release);
}

private:
std::atomic_flag m_mutex;
} m_lock;
};
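The embedded SpinLock is the part most worth a usage illustration: a standalone sketch of the same std::atomic_flag pattern guarding a shared counter across two threads. Unlike the class above, the sketch initializes the flag with ATOMIC_FLAG_INIT instead of clearing it in a constructor.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// Standalone illustration of the std::atomic_flag spin-lock pattern used to
// serialize access to DTree's optimizer state.
class SpinLock
{
  public:
    void lock()
    {
        while (m_flag.test_and_set(std::memory_order_acquire))
            ;  // busy-wait until the holder releases the flag
    }

    void unlock()
    {
        m_flag.clear(std::memory_order_release);
    }

  private:
    std::atomic_flag m_flag = ATOMIC_FLAG_INIT;
};
```

With the lock held around each increment, two threads hammering a shared counter produce the exact expected total; without it, increments would be lost to races.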

struct DTreeStatistics
@@ -864,6 +1031,17 @@ class GPTVertexPath

bool is_full() const;

void set_sampling_fraction(const float sampling_fraction)
{
m_sampling_fraction = sampling_fraction;
}

float get_sampling_fraction() const
{
return m_sampling_fraction;
}


private:
std::array<GPTVertex, 32> path;
int index;