[UsdLux] Add explanation of UsdLux quantities and behavior #3182
base: dev
Conversation
Adds more detailed and precise explanations of expected behavior for UsdLux lights to UsdLux docs.
@@ -435,6 +616,15 @@ class UsdLuxLightAPI : public UsdAPISchemaBase
/// enableColorTemperature is set to true. When active, the
/// computed result multiplies against the color attribute.
/// See UsdLuxBlackbodyTemperatureAsRgb().
///
/// This is always calculated as an RGB color using a D65 white point,
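To illustrate the multiply-through behavior the excerpt describes, here is a hedged Python sketch. The three sample wavelengths, the raw Planck-law shortcut, and normalizing against 6500 K as a stand-in for the D65 white point are all illustrative assumptions of this sketch, not the CIE-based fit that `UsdLuxBlackbodyTemperatureAsRgb()` actually implements:

```python
import math

# Representative wavelengths (metres) for R, G, B. An illustrative
# shortcut, NOT the CIE-based computation the real function uses.
WAVELENGTHS = (610e-9, 550e-9, 465e-9)

def planck(wavelength, temperature):
    """Blackbody spectral radiance (Planck's law), up to a constant
    factor that cancels in the normalization below."""
    h = 6.62607015e-34  # Planck constant (J s)
    c = 2.99792458e8    # speed of light (m/s)
    k = 1.380649e-23    # Boltzmann constant (J/K)
    x = (h * c) / (wavelength * k * temperature)
    return 1.0 / (wavelength ** 5 * (math.exp(x) - 1.0))

def blackbody_rgb(temperature, white_point_temp=6500.0):
    """Crude RGB for a blackbody, normalized so white_point_temp maps
    to (1, 1, 1) -- standing in for the documented D65 white point."""
    return tuple(planck(w, temperature) / planck(w, white_point_temp)
                 for w in WAVELENGTHS)

def effective_color(color, temperature, enable_color_temperature):
    """Per the docs: when enabled, the computed blackbody result
    multiplies against the light's color attribute."""
    if not enable_color_temperature:
        return color
    return tuple(c * b for c, b in zip(color, blackbody_rgb(temperature)))
```

Even this crude version shows the expected qualitative behavior: low temperatures skew red, high temperatures skew blue, and the white-point temperature is neutral by construction.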
The property should have an associated color space, explicitly restricted to one of the canonical color spaces described in the GfColorSpace enumeration. It is then the responsibility of the user to use the facilities in GfColor/ColorSpace to resolve the value into an RGB triple. [Edit: the following has no real bearing on this discussion.] The Color Interop standard, to which GfColorSpace adheres, has introduced a CIEXY space that would be a good choice for the default color space associated with the color temperature.
See my comments / questions here: #3182 (comment)
exposure via `UsdGeomCamera::GetExposureAttr()` and it is expected that all renderers
implementing UsdLux must respect the value of that attribute by scaling the rendered
image by `pow(2, exposure)`.
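A minimal sketch of that required scaling (the function name and the per-pixel tuple representation are hypothetical, purely for illustration):

```python
def apply_exposure(pixel, exposure):
    """Scale a linear pixel by 2**exposure, as the text above says
    renderers implementing UsdLux must do for the camera's exposure."""
    scale = 2.0 ** exposure
    return tuple(channel * scale for channel in pixel)
```

So an exposure of +1 doubles the rendered values, 0 leaves them unchanged, and -1 halves them.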
\subsection usdLux_Behavior Properties & Behavior
Colors specified in attributes are in energy-linear terms, |
This section is now obsolete. The colorspace metadata must conform to the available colorimetry defined by GfColor/ColorSpace. Further, UsdStage::SetColorConfiguration() does not have any bearing on rendering. Within the rendering domain, colorimetry is strictly reduced to that specified by ocio/nanocolor as implemented by gfColor*. The mention of SetColorConfiguration should be removed in this context, as it is used to set color configurations governed by OCIO, such as the viewing transform. Color configurations explicitly cannot influence the colorimetry defined normatively by ocio/nanocolor.
This section should reflect that by pointing to gfcolor*, ocio/nano, and so on. Perhaps we could workshop it a bit here.
Are you saying that the attribute colorSpace metadata should apply, but the stage color configuration should not? If so, what should the default colorSpace be when an attribute doesn't explicitly specify?
Also - this implies that we're now creating two classes of color attributes - those to which the stage level SetColorConfiguration applies, and those to which it doesn't. I feel that there should be some programmatic way to query this, rather than relying on an end user knowing which attributes it applies to and those to which it doesn't.
(Though I don't necessarily think we should gate these changes until the presence of such an API - just that they should be on the near-term roadmap.)
Also - just wanted to make clear that I'm not trying to argue against this approach, just pointing out some resulting considerations, and trying to get additional clarification!
UsdStage::SetColorConfiguration doesn't apply to the rendering color space at all; it refers to the viewing transform that is applied to the final framebuffer.
So we're not creating two classes of color attributes at all.
As to the default color space to be used for scene-referred colors, that's a topic for a new color space schema, not yet published. In the meantime, the default is what it is today, which is linear with the primaries and white point of Rec. BT.709. When the new schema is published, the default, in the absence of any other indication, will still be that. So there won't be an issue with backwards compatibility.
Just to add a general point: thank you so much for taking such a detailed approach to the documentation, and for bringing the concerns of spectral rendering to a clear focus for readers of this documentation.
And just to make sure that credit is directed appropriately: I'm just the caretaker/window-dresser for this PR; the true meaty content is almost all thanks to @anderslanglands's original PR: #2758. That goes doubly for all the spectral rendering content, where my knowledge is fairly limited! 🙏 @anderslanglands
/// In the case of a spectral renderer, this color should be uplifted such
/// that it round-trips to within the limit of numerical accuracy under the
/// rendering illuminant. We recommend the use of a rendering color space
/// well defined in terms of an Illuminant D illuminant, to avoid unspecified
do we wanna go further and just say D50 and D65, to be in line with common practice? The canonicals include D55 and D75 as well, but I'm not aware that they're used anywhere that might also use UsdLux. The canonical illuminants are magical because they use the newer value of Planck's constant. If we just say a "D" illuminant, it makes me think we might recommend 1931 D illuminants, in addition to the four "magic" illuminants.
Yeah that seems reasonable
/// sampled value is then potentially normalized by the overall power of the
/// profile if shaping:ies:normalize is enabled, and then used as a scaling
/// factor on the returned luminance:
///
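A hedged sketch of that normalization, assuming an azimuthally symmetric profile sampled at a list of polar angles (a deliberate simplification: real IES data is two-dimensional and would be interpolated rather than nearest-neighbour sampled):

```python
import math

def ies_power(intensity, thetas):
    """Approximate total power of an azimuthally symmetric profile by
    integrating intensity(theta) over the sphere (trapezoidal rule,
    with dOmega = 2*pi*sin(theta)*dtheta)."""
    total = 0.0
    for k in range(len(thetas) - 1):
        f0 = intensity[k] * math.sin(thetas[k])
        f1 = intensity[k + 1] * math.sin(thetas[k + 1])
        total += 0.5 * (f0 + f1) * (thetas[k + 1] - thetas[k])
    return 2.0 * math.pi * total

def ies_scale(intensity, thetas, theta, normalize):
    """Scaling factor applied to the returned luminance: the profile
    sample, optionally divided by the profile's mean intensity
    (power / 4*pi) when shaping:ies:normalize is enabled."""
    # nearest-neighbour sample; a real implementation would interpolate
    idx = min(range(len(thetas)), key=lambda i: abs(thetas[i] - theta))
    sample = intensity[idx]
    if normalize:
        sample /= ies_power(intensity, thetas) / (4.0 * math.pi)
    return sample
```

With this convention a uniform profile has a normalized scale of 1 everywhere, so enabling normalization leaves a featureless profile's brightness unchanged.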
We should probably also mention that an IES profile's photometry may or may not be relative to a benchmark "bare bulb" (that they had on hand, and that we cannot characterize!), and so we expect that IES profiles should be considered to require scaling of some sort in any case, whether or not we are also imposing normalization.
We're actually going to need to revisit this. We're getting into the weeds with this internally here and the correct handling for converting from intensities in the IES to exitant luminance from an area source is more involved than what's being done in any renderer currently.
When I wrote that, I was trying to correlate the IES specification to what we're saying here. I had been thinking of the intensity pattern as if it is modulating the emittance from a small spherical radiator. Since the pattern is measured from a goniophotometer, the measurement itself pretends that the luminaire is a sphere; basically a sphere large enough to have a surface area that can be measured, but small enough that it can be treated like a point light in terms of how the light looks from two different viewpoints, i.e., not considering a difference in irradiance across the span of the luminaire.

When you say "from an area source", that's very interesting. Whenever I look at Annex D, I always wonder if the height, width, and length they talk about are angular or linear. Since no units are mentioned (that I can find), I've always assumed angular, and assumed that there's no way to guess how big a luminaire is from an IES file. I'm very curious to learn how you're thinking about this, and whether there's a clear explanation in LM-63-19 that I just haven't spotted.
Yeah so basically units in the IES are candela by definition. To compute the correct exitant luminance one needs to divide that by the area of the source along the exitant direction. For a sphere this is always 𝜋r².
I'm currently pondering what the best approach is here. IES files are only really valid for point lights by their nature. For spheres and disks we can do something reasonable and simple as noted above. For other shapes, do we try to be exact (cylinders seem like they'd be particularly hard/expensive to handle)? Or do we just make some simple approximation and note that the output is not expected to be correct for those shapes? The latter seems reasonable to me, but I haven't thought it through fully yet.
Would be keen to know your thoughts
Ah, I see where you are going with that, I think. I wonder if it might be easiest to think about by taking a step beyond a sphere, by considering one face of a 10cm x 10cm x 10cm cube, with a point emitter at 0,0,0. A disc projected from the emitter to the face would have an area of 100 cm². We could then just treat the face as a disc light using L = I / area. To finish that thought, we could treat the whole ideal cube luminaire as six of those.
That wouldn't be a bad approximation, I bet.
If we thought about an icosahedron, with 20 disc lights, the approximation is better.
Following that train of thought, does this reduce to a differential form, that we could plug the small set of regular IES shapes into? I think there's sphere, ellipsoid, box, cylinder, and not much else?
For a cylinder it might be $dA = \frac{dI}{r\,d\theta\,dz\,\cos(\theta')}$ on the walls, and a plain old disk for the top and bottom.
To be clear I'm talking about the shapes of the usdlux lights, so sphere, disk, rect, cylinder
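The candela-to-luminance division discussed in this thread can be sketched for the simpler UsdLux shapes. This is only an illustration under the thread's assumptions (function names are hypothetical; the cylinder case, which the thread notes is harder, is omitted):

```python
import math

def projected_area(shape, params, cos_theta):
    """Area of the light source projected along an exitant direction
    whose angle with the surface/axis normal has cosine cos_theta."""
    if shape == "sphere":
        # a sphere's silhouette is the same disk from every direction
        return math.pi * params["radius"] ** 2
    if shape == "disk":
        return math.pi * params["radius"] ** 2 * abs(cos_theta)
    if shape == "rect":
        return params["width"] * params["height"] * abs(cos_theta)
    raise ValueError("unsupported shape: " + shape)

def exitant_luminance(intensity_cd, shape, params, cos_theta):
    """Convert an IES intensity sample (candela) to exitant luminance
    (cd/m^2) by dividing by the projected source area."""
    return intensity_cd / projected_area(shape, params, cos_theta)
```

Note how the sphere's projected area is direction-independent, which is why spheres (and disks) admit the "reasonable and simple" treatment mentioned above, while flat and curved shapes seen edge-on do not.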
Internal issue number for reference: USD-9900
<center><b>
sizeFactor<sub>distant</sub> =
(2 - sin²𝛳<sub>max</sub>) ⋅ 𝜋
</b></center>
Curious if you have a link to the derivation of this?
If I compute a normalization factor as a fraction of solid angle coverage I get a different answer: https://www.desmos.com/calculator/coa7luomx3
It's the integral over projected solid angle. See for example page 11 here: https://www.cs.cornell.edu/courses/cs5625/2022sp/notes/radiometry-notes.pdf
One way of looking at this is it expresses the light in units of irradiance, which is useful. A simpler way to look at it is "the light doesn't get dimmer when you make the angle bigger".
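As a numeric sanity check of the two conventions discussed here (the projected-solid-angle normalization the docs use, versus the plain solid-angle normalization), one can integrate directly. This sketch just verifies the closed forms quoted in the thread:

```python
import math

def size_factor_projected(theta_max):
    """Closed-form integral of projected solid angle over a cone of
    half-angle theta_max (the docs' sizeFactor for distant lights)."""
    s2 = math.sin(theta_max) ** 2
    if theta_max <= math.pi / 2:
        return math.pi * s2
    return math.pi * (2.0 - s2)

def size_factor_numeric(theta_max, n=100000):
    """Midpoint-rule integral of |cos(theta)| over the cone, i.e. the
    projected solid angle, to check the closed form above."""
    total = 0.0
    dt = theta_max / n
    for i in range(n):
        t = (i + 0.5) * dt
        total += abs(math.cos(t)) * math.sin(t) * dt
    return 2.0 * math.pi * total

def solid_angle(theta_max):
    """Plain (unprojected) solid angle of the cone, for contrast with
    the projected version used by the docs."""
    return 2.0 * math.pi * (1.0 - math.cos(theta_max))
```

The numeric integral matches the closed form on both branches (below and above 𝜋/2), and at 𝛳<sub>max</sub> = 𝜋/2 the projected version gives 𝜋, the illuminance of a uniform hemisphere, while the plain solid angle gives 2𝜋.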
Thanks for the explanation @anderslanglands; maybe this should be in the documentation for future reference?
Just a few follow-up questions, if I may. This normalization is defined for surfaces; does this mean a volume lit with a normalized distant light will not stay the same brightness for different sizes? Should there be a different normalization technique for volumes?
Heh, at one point I wrote this explanation up, before axing it because it was too wordy:
(There's also a .md version, but it's formatted for Doxygen's markdown flavor, not GitHub's, so not all formatting works 100%.)
///
/// * <i>if 0 < 𝛳<sub>max</sub> ≤ 𝜋 / 2:</i>
/// <center><b>
/// sizeFactor<sub>distant</sub> = sin²𝛳<sub>max</sub> ⋅ 𝜋
Apologies for the dumb questions, but how does this formulation relate to e.g. if you pretend a distant light is a cone section of a sphere? If you normalize by computing the solid angle of the light vs the solid angle of a light with angle 0.53, the solid angle is 2 * pi * (1 - cos(theta)), or equivalently, 4 * pi * sqr(sin(0.5 * theta)).
https://en.wikipedia.org/wiki/Solid_angle#Cone,_spherical_cap,_hemisphere
... the math looks similar but I'm having trouble drawing the correspondences. Can you explain what's going on here geometrically?
See the link I posted in response to Curtis, above (page 11). This is defining it in terms of projected solid angle instead of solid angle. This has two nice properties:
- The light doesn't change brightness when you make the angle bigger (which is the whole point of normalize), see renders 21-30 here comparing first and second columns. First is this normalization, second is Karma, which does solid angle normalization.
- This makes the light's brightness be defined in units of illuminance, which means we now have a way to ascribe a SI unit to the light.
For a geometrical derivation of this formula, see "Property 2" in this explanation I wrote up at one point (because I had the same question):
(There's also a .md version, but it's formatted for Doxygen's markdown flavor, not GitHub's, so not all formatting works 100%.)
/// Angular limit off the primary axis to restrict the
/// light spread.
/// Angular limit off the primary axis to restrict the light
/// spread, in degrees.
It looks like this has been rewritten since I last took a look at the docs/test images. Do you know where this stands w.r.t. Renderman conformance at the moment?
It's the same for coneAngle < 90 degrees. Renderman seems to cap the value at 90 degrees: if coneAngle >= 90 degrees, it behaves as though coneAngle = 180, i.e., as though the cone shaping were disabled.
Description of Change(s)
Adds more detailed and precise explanations of expected behavior for UsdLux lights to UsdLux docs.
Related PRs:
⛓️ hdEmbree Reference Implementation:
This PR is documentation changes only; see here for the PR chain that provides a reference implementation as part of hdEmbree:
✳️ All UsdLux update related PRs:
To see ALL UsdLux update related PRs (documentation AND reference implementation) in one place, see:
⌛ Original PR:
These PRs are an update of this original PR:
But... why??
Why make these PRs in the first place?
The current specifications of the various UsdLux prims + attributes are imprecise or vague in many places, and as a result, actual implementations of them by various renderers have diverged, sometimes quite significantly. For instance, here is Intel's 4004 Moore Lane scene, with the same UsdLux lights defined, in 3 different renderers:
Karma, Arnold, and Omniverse RTX (render images not reproduced here).
For a full description of the problem, see here:
Why so many PRs?
This was my attempt to break up a rather large change into smaller, more easily reviewable changes that can be merged incrementally, to ease the burden for code reviewers.
If you find this confusing and would rather have just one big PR (i.e., just this one), or have them organized in some other way, please let me know!
Why are the documentation changes in their own PR?
In some ways, the documentation changes are the heart of this effort - we wish to specify more exactly what the various UsdLux prims and attributes represent. However, building consensus on this may take time - so we expect some dialogue on the exact language or formulas.
Because that may take time, the hdEmbree reference implementation changes are separate, and broken up, so we can hopefully start integrating portions of them even before final consensus has been reached on the final form of the specification.
Why are the documentation changes not broken up into smaller pieces, like the hdEmbree reference implementation changes?
Because I wasn't sure if that would be desirable or not! If people think that would be helpful, I can do so - perhaps breaking out by schema or individual attribute?