**content/30.abstracting_simulation_types.md**
This is somewhat counter to the expectations set by the grid codes, but aligns with the need to have a fully self-contained data-container for computing field values.
For instance, this means that a "ray" object (often used to compute the column density in a cosmological simulation) will in fact include a set of particles within a (potentially) varying impact parameter.
This can be seen in diagram form in Figure @fig:sph_ray_tracing.
We note that, as described in the SPLASH method paper [@doi:10.1071/AS07022], the kernel interpolation can be computed using the (dimensionless) ratio between the impact parameter of the ray and the smoothing length of the particle.
A cartoon diagram of a ray passing through a collection of particles. The radius of the particle is indicative of its smoothing length (values should *not* be interpreted to be constant within these circles!). As can be seen, the individual particles each contribute different amounts as a result of their smoothing length, the chord-length as the ray passes through the circle, and the values within each particle.
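As a concrete illustration, the SPLASH-style line integration can be sketched with a standard cubic spline kernel. This is a minimal sketch, not code from `yt`; the function names are ours, and the kernel support is taken to be $q < 2$:

```python
import numpy as np

def cubic_spline(q):
    """Standard M4 cubic spline kernel, 3D-normalized (compact support q < 2)."""
    q = np.asarray(q, dtype=float)
    w = np.zeros_like(q)
    inner = q < 1.0
    outer = (q >= 1.0) & (q < 2.0)
    w[inner] = 1.0 - 1.5 * q[inner] ** 2 + 0.75 * q[inner] ** 3
    w[outer] = 0.25 * (2.0 - q[outer]) ** 3
    return w / np.pi

def line_integrated_kernel(qb, n=4096):
    """F(q_b): the kernel integrated along a line whose impact parameter,
    in units of the smoothing length, is q_b = b / h (dimensionless)."""
    s, ds = np.linspace(-2.0, 2.0, n, retstep=True)
    q = np.sqrt(qb * qb + s * s)
    return cubic_spline(q).sum() * ds

def ray_column_density(masses, smoothing_lengths, impact_parameters):
    """Accumulate each particle's contribution, m_j * F(b_j / h_j) / h_j**2."""
    total = 0.0
    for m, h, b in zip(masses, smoothing_lengths, impact_parameters):
        total += m / h ** 2 * line_integrated_kernel(b / h)
    return total
```

Because $F$ depends only on the dimensionless ratio $b/h$, in practice it can be tabulated once and looked up for every particle, rather than re-integrated per particle.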
Other than these differences, which have been intentionally made to align the results with the expected results from the underlying discretization method, the APIs for access to particle data and finite volume data are identical, and they provide broadly identical functionality, where the disparities are typically in functionality such as volume rendering.
This allows a single analysis script, or package (such as Trident), to utilize a high-level API to address both gridded and Lagrangian data while still ensuring that the results are high-fidelity and representative of the underlying methods.
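For example, the same high-level call pattern applies regardless of discretization. The helper below is a sketch of ours, not part of `yt` itself; it assumes a dataset already loaded with `yt.load`, and uses the ray's `"t"` (parametric position) and `"dts"` (fractional path length) fields:

```python
import numpy as np

def ray_column(ds, start, end, field=("gas", "density")):
    """Integrate *field* along a ray between two points (in code units).

    *ds* may be a grid-based or an SPH dataset; the data-object API is
    identical, but for SPH data the ray selects smoothed particles (with
    varying impact parameters) rather than cells, as described above.
    """
    ray = ds.ray(start, end)
    order = np.argsort(ray["t"])  # "t" parameterizes position along the ray
    values = ray[field][order]
    # "dts" gives each element's path length as a fraction of the ray length
    length = np.sqrt(((np.asarray(end) - np.asarray(start)) ** 2).sum())
    return (values * ray["dts"][order] * length).sum()
```

The same function can be pointed at an AMR output or an SPH output without modification, which is precisely the abstraction Trident and similar packages rely on.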
**content/35.viz_and_volrendering.md**
### Hardware-accelerated Volume Rendering
Software volume rendering, as described above, provides a number of affordances for careful visualization.
Specifically, because the code and kernels are written in languages similar to traditional compiled languages such as C and C++, the barrier to entry for describing a new sampling system can be lower.
That being said, when examining responsiveness, software volume rendering is rarely -- if ever -- competitive with hardware-based volume rendering, such as that accelerated through graphics processing units (GPUs) and using OpenGL, Metal, Vulkan, or one of the higher-level platforms for graphics.
Fluid interactivity is essentially inaccessible for software volume rendering except on the smallest of datasets.
And yet, fluid interactivity enables much deeper exploration of the parameter space of visualization, as well as the ability to immerse oneself in data.
To support more interactive visualization (in addition to that described in @sec:jupyter_integration), a basic system for conducting hardware-accelerated, OpenGL-based visualization of `yt`-supported data has been developed in an external package, [`yt_idv`](https://github.com/yt-project/yt_idv/).
`yt_idv` is built on `PyOpenGL` and provides support for grid-based, particle-based, and finite element datasets, with a heavy emphasis on grid-based data.
While the process of volume rendering is interesting in its own right, with many fascinating areas of inquiry and opportunities for optimization, `yt_idv` is notable more for its architecture than for its algorithms or optimizations.
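A minimal launch script follows the pattern shown in the `yt_idv` README; the dataset path here is illustrative, and an OpenGL-capable display is required:

```python
def launch_interactive_view(dataset_path, field="density"):
    """Open an interactive yt_idv rendering window for a yt-loadable dataset."""
    import yt
    import yt_idv

    ds = yt.load(dataset_path)
    rc = yt_idv.render_context(height=800, width=800, gui=True)
    rc.add_scene(ds, field)  # builds a scene graph for the chosen field
    rc.run()                 # enters the event loop until the window closes

# e.g. launch_interactive_view("IsolatedGalaxy/galaxy0030/galaxy0030")
```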
`yt_idv` is built using the Traitlets library and utilizes the immediate-mode graphical user interface system "Dear ImGui" for presenting a user interface.
This allows the system to be largely data-driven: frame redraws are executed only when a parameter changes (such as the camera path, transfer function, and so on), and new visualization parameters can be easily exposed in the user interface.
Furthermore, this allows shaders to be dynamically recompiled only as necessary.
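This data-driven pattern can be illustrated with Traitlets alone. The class and trait names below are illustrative, not taken from `yt_idv`; in the real package, observers mark the frame dirty and, where shader inputs change, trigger recompilation:

```python
from traitlets import Float, HasTraits, Unicode, observe

class RenderState(HasTraits):
    """Illustrative sketch: visualization parameters as observable traits."""
    camera_distance = Float(3.0)
    colormap = Unicode("viridis")

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.redraw_count = 0

    @observe("camera_distance", "colormap")
    def _mark_dirty(self, change):
        # A real renderer would flag the frame for redraw here;
        # this sketch just counts the notifications.
        self.redraw_count += 1

state = RenderState()
state.camera_distance = 5.0   # fires an observer: one redraw
state.colormap = "magma"      # fires an observer: second redraw
state.camera_distance = 5.0   # value unchanged: no notification, no redraw
```

Because Traitlets only notifies on actual value changes, redraws (and shader recompilations) happen exactly when the visualization state changes, and not otherwise.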
In addition to this, the shaders themselves are specified in a configuration file that links GLSL source files into multi-component shaders, allowing declarative construction of shader pipelines and improving interoperability.
For ray-casting routines, only a small kernel needs to be modified to change the functionality.
This allows the relatively complex process of casting rays through multiple volumes to be hidden, much as described in @sec:software-volume-rendering.
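Schematically, such a declarative pipeline description might look like the following. This is a hypothetical sketch only; the actual `yt_idv` configuration format and component file names may differ:

```yaml
# Hypothetical shader pipeline description: each program is assembled
# from reusable GLSL components, so only the sampling kernel varies.
max_intensity_projection:
  vertex: [grid_position.vert.glsl]
  fragment: [ray_setup.frag.glsl, mip_sampler.frag.glsl, apply_colormap.frag.glsl]
transfer_function:
  vertex: [grid_position.vert.glsl]
  fragment: [ray_setup.frag.glsl, tf_sampler.frag.glsl, apply_colormap.frag.glsl]
```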
Annotations can be added simply, and an event loop has been enabled so that users can interact with the running visualization interface through IPython.
While this project includes a number of additional features designed for accessibility of data and in-depth coupling of visualization with quantitative analysis, they extend beyond the scope of this paper.