
Large scenes and globe in QGIS 3D #301

Open · wonder-sk opened this issue Aug 4, 2024 · 8 comments

@wonder-sk (Member)

QGIS Enhancement: Large scenes and globe in QGIS 3D

Date 2024/08/04

Author Martin Dobias (@wonder-sk)

Contact wonder dot sk at gmail dot com

Version QGIS 3.40 / 3.42

Summary

3D scenes in QGIS are currently limited to relatively small geographic extents. The main problem is that large extents (more than ~100 km across) run into issues with the numerical precision of floating point numbers. These issues show up as various unwanted effects:

  • objects jumping around (vertex transform numerical issues)
  • camera movement being shaky (camera's position numerical issues)
  • objects Z-fighting or not being visible (depth buffer numerical precision issues)

We will address these issues using techniques detailed in this QEP.

Moreover, we propose the addition of a new type of 3D view: a globe! Users will have a choice: either to have the 3D scene represented as a flat plane (a "local" scene), or to show data in a "globe" scene.

Proposed Solution

Large Scenes: Issues with Vertex Transforms

In QGIS 3D, single precision floats are used in vertex buffers, transforms and camera position. With a precision of roughly 7 decimal digits, centimeter precision is simply not achievable for a scene more than a few kilometers across. The solution is to use double precision floating point numbers (like we do everywhere else in QGIS), but the problem is that GPUs are generally not good friends with double precision.

There are several places where floats need to be replaced by doubles:

  1. 4x4 transform matrices applied to 3D entities (Qt3DCore::QTransform - not to be confused with QtGui::QTransform, which is a 3x3 matrix). We need to start using QgsMatrix4x4, which uses doubles, instead.
  2. Camera representation (Qt3DRender::QCamera or Qt3DRender::QCameraLens). We will need to introduce our own camera class (QgsCamera) that operates with doubles, and remove the use of QCamera from the code. We will no longer use QCameraSelector in the framegraph.

We should also not pass absolute coordinates of 3D geometries to vertex buffers (thus losing their precision when converting to floats) - but fortunately we are not doing that even now (coordinates in vertex buffers are generally small, and we provide "model" transform matrices via QTransform).
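To make this "relative-to-center" idea concrete, here is a minimal sketch (not the actual QGIS code; the helper name is made up for illustration) of how a tile's vertices could be packed. The subtraction from the tile center happens in double precision, so only the small residual offsets get narrowed to floats. QgsVector3D is QGIS's double-precision 3D vector class.

```cpp
#include <QVector>
#include "qgsvector3d.h"  // QGIS's double-precision 3D vector

// Hypothetical helper: pack world-space vertices (doubles) into a float
// vertex buffer, relative to the tile's center.
QVector<float> packVerticesRelativeToCenter( const QVector<QgsVector3D> &worldVerts,
                                             const QgsVector3D &tileCenter )
{
  QVector<float> buffer;
  buffer.reserve( worldVerts.size() * 3 );
  for ( const QgsVector3D &v : worldVerts )
  {
    const QgsVector3D rel = v - tileCenter;  // double-precision subtraction
    buffer << static_cast<float>( rel.x() )
           << static_cast<float>( rel.y() )
           << static_cast<float>( rel.z() );
  }
  return buffer;
}
// tileCenter then becomes the translation of the tile's "model" transform,
// which stays in double precision on the CPU side.
```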

Finally, QGIS 3D currently relies on the Qt3D framework to initialize uniforms in shader programs (see the QShaderProgram docs) - e.g. mvp and some others. We will instead calculate these matrices using double precision, especially the model-view (MV) / model-view-projection (MVP) matrix, where large translation values would cause numerical issues. Only just before being submitted to the GPU do they get converted to float matrices.

On each camera pose update we will calculate the model-view-projection matrix for all entities on the CPU. All shader programs will then use “our” qgis_mvp matrix instead of the mvp matrix provided by Qt3D. This means that all materials used in QGIS 3D will need to be aware of this (but we are already in the process of bringing all material implementations into QGIS).
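A minimal sketch of what such a per-entity update could look like. QgsCamera and the qgis_mvp uniform are proposed names from this QEP, not existing API; the sketch also assumes QgsMatrix4x4 exposes its doubles via a constData()-style accessor. The key point is that all multiplications happen in double precision, and narrowing to float happens only at upload time.

```cpp
#include <Qt3DRender/QParameter>
#include <QMatrix4x4>
#include "qgsmatrix4x4.h"  // QGIS 4x4 matrix with doubles

void updateEntityMvp( const QgsMatrix4x4 &projection,    // from the proposed QgsCamera
                      const QgsMatrix4x4 &view,          // from the proposed QgsCamera
                      const QgsMatrix4x4 &model,         // entity's double-precision transform
                      Qt3DRender::QParameter *mvpParam ) // parameter bound to "qgis_mvp"
{
  // All multiplications in double precision - the huge translations in the
  // model and view matrices cancel out before any narrowing happens.
  const QgsMatrix4x4 mvp = projection * view * model;

  // Narrow to float only at the very end, for the GPU upload.
  QMatrix4x4 mvpFloat;
  const double *d = mvp.constData();
  float *f = mvpFloat.data();
  for ( int i = 0; i < 16; ++i )
    f[i] = static_cast<float>( d[i] );

  mvpParam->setValue( QVariant::fromValue( mvpFloat ) );
}
```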

If we do not use QCamera / QCameraLens from Qt3D, some bits from Qt3D will not work anymore, such as ray casting, picking or frustum culling, but we do not use them anyway and have our own implementations, so it is not really a problem.

Here's a prototype of how this approach would look with Qt3D - without QCamera and QTransform:
https://github.com/wonder-sk/qt3d-experiments/tree/master/rtc
https://github.com/wonder-sk/qt3d-experiments/?tab=readme-ov-file#relative-to-center-rendering

Alternatives considered:

  • Use double precision instead of single precision on GPUs. In theory OpenGL 4 supports double precision (ARB_gpu_shader_fp64) for vertex buffers and uniforms, but in practice this is rarely used, and many GPUs may not support it. Some resources say doubles on GPUs are 8-32x slower than float operations. Qt3D does not support it properly (e.g. uniform double values get converted to floats).

Additional reading:

Large Scenes: Issues with Depth Buffer

Currently, we use the default setup of the depth buffer, with floating point precision. The problem is that the range of the depth buffer is not used well in the default setup: there is a lot of precision close to the near plane, but further away there is much less precision available, to the point that one can get rendering artifacts when the near and far planes are far apart. The problem is best explained in NVIDIA's developer blog: Depth Precision Visualized.

There are multiple ways to solve this issue with different complexity. We have settled on the logarithmic buffer approach. The idea is that we explicitly set depth of each pixel (fragment) in the fragment shader, instead of leaving the default value after the calculation from projection matrix and perspective divide. What happens is that we set gl_FragDepth in fragment shader like this:
$$\frac{\log(1+z_{eye})}{\log(1+f)}$$
where $z_{eye}$ is the depth of the current fragment and $f$ is the depth of the far plane. We know $f$ from our camera settings, and we calculate $z_{eye}$ in the vertex shader (it can be passed to the fragment shader easily as an interpolated value, with $f$ provided in a uniform). While the expression may look scary at first, there's no magic in there: we just take the depth ($z_{eye}$) and normalize it with $f$ so that it's in the [0..1] range (pixels with depths greater than the far plane get clipped anyway). The logarithm function is used to give more precision close to the near plane.
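Expressed as code (plain C++ here for clarity; in the fragment shader the same expression is assigned to gl_FragDepth):

```cpp
#include <cmath>

// Logarithmic depth mapping: eye-space depth zEye -> [0, 1], with most of the
// depth buffer's precision concentrated near the camera.
// GLSL equivalent in the fragment shader:
//   gl_FragDepth = log(1.0 + zEye) / log(1.0 + farPlane);
double logarithmicDepth( double zEye, double farPlane )
{
  return std::log( 1.0 + zEye ) / std::log( 1.0 + farPlane );
}
```

For example, with the far plane at 1,000,000 m, a fragment 10 m away maps to roughly 0.17 and one 100 m away to roughly 0.33, so nearby geometry gets a generous share of the [0..1] range.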

Modifying gl_FragDepth may cause slightly lower performance, because early depth tests (i.e. tests run before the fragment shader) will get disabled, but this should not be a problem since we are not using expensive fragment shaders.

Implementing this approach means that all materials in QGIS 3D will need their fragment shader adjusted to set gl_FragDepth as outlined above. It will also need minor updates in the places where we sample the depth buffer (e.g. in the camera controller, to know how far away the “thing” below the user's mouse pointer is).
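For those depth-sampling spots, the inverse mapping is straightforward; a sketch with a hypothetical helper name:

```cpp
#include <cmath>

// Invert the logarithmic depth mapping: recover eye-space depth from a value
// sampled from the depth buffer.
double eyeDepthFromLogDepth( double sampledDepth, double farPlane )
{
  // From d = log(1 + zEye) / log(1 + f)  =>  zEye = exp(d * log(1 + f)) - 1
  return std::exp( sampledDepth * std::log( 1.0 + farPlane ) ) - 1.0;
}
```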

Here's a prototype of how this approach would look with Qt3D - the fragment shader sets gl_FragDepth to better use the Z buffer's range even with a large near/far plane range in the frustum:
https://github.com/wonder-sk/qt3d-experiments/tree/master/logdepth
https://github.com/wonder-sk/qt3d-experiments/?tab=readme-ov-file#logarithmic-depth

Alternatives considered:

  • Reverse Z technique. A nice & simple technique that reverses the depth buffer (0 for the far plane, 1 for the near plane) and by doing that fixes depth buffer issues by better utilizing the range offered by a floating point depth buffer. This was originally our first choice, as it does not require any changes to shader programs. Unfortunately it is not possible to use it with Qt3D's OpenGL renderer - we would need to do a glClipControl() OpenGL call (to set the depth range to [0..1] instead of [-1..1]), but we cannot access the OpenGL context from QGIS 3D (it is buried deep inside Qt3D), and while this could be added to Qt3D, it would delay the implementation by at least a year (QGIS would need to fully switch to the latest Qt 6.x version).
  • Multiple frustums. This is a solution for when one needs a really big range of depths, but it complicates things quite a lot and has the potential to introduce rendering issues.

Additional reading:

Globe: Refactoring of Terrain Code

Before the actual addition of globe support to QGIS 3D code, we would like to refactor terrain-related code. That code has been largely unchanged since the initial QGIS 3D release in QGIS 3.0. The following problems have been identified:

  • QgsTerrainGenerator and its sub-classes (which handle flat/raster/mesh/online terrain) do two different things at once - they store configuration and they act as chunk loader factories. They also deal with textures, but ideally they should only be concerned with terrain geometry.
  • The whole architecture of QgsTerrainEntity, QgsTerrainGenerator and QgsTerrainTileLoader is somewhat complicated - extending it and adding globe support is a non-trivial task.

The plan to fix these issues is the following:

  • Let's have QgsAbstractTerrainSettings and subclasses (for flat terrain, raster DEM, quantized mesh, ...) - these would be plain simple configuration classes with getters, setters and XML reading/writing - similar to classes that handle material settings or light settings. The base terrain settings class should contain everything related to terrain (many terrain-related properties are now in Qgs3DMapSettings, but those should be moved to the base terrain settings class).
  • Have QgsTerrainGeometryGenerator class (conceptually similar to QgsTerrainTextureGenerator) that would asynchronously prepare QGeometry and QGeometryRenderer, together with 4x4 transform matrix for a particular tile. This class would have several subclasses - one for each terrain type (flat terrain, raster DEM, quantized mesh, …). Terrain geometry generators would also include code specific to terrain's geometry: to do ray intersection tests (e.g. for identify tool) and to sample elevation (e.g. for clamping data to terrain).
  • QgsTerrainEntity (the chunked entity for terrain) will have its implementation simplified. There will be just one “chunk loader factory” class for it, with one “chunk loader” class - these would asynchronously request texture and geometry (using QgsTerrainGeometryGenerator and QgsTerrainTextureGenerator at once), and then create the final chunk entity when both are ready. A rough sketch of the proposed classes follows.
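A rough sketch of how the refactored pieces could relate. All names, members and signatures below are illustrative only, not a final API - they merely mirror the responsibilities described above:

```cpp
// Illustrative sketch only - class responsibilities, not final API.
class QDomElement;
class QgsRay3D;      // QGIS ray class, used here for intersection queries
class QgsVector3D;

// Plain configuration: getters/setters + XML (de)serialization, nothing else.
class QgsAbstractTerrainSettings
{
  public:
    virtual ~QgsAbstractTerrainSettings() = default;
    virtual void readXml( const QDomElement &element ) = 0;
    virtual void writeXml( QDomElement &element ) const = 0;
    // common terrain properties (moved here from Qgs3DMapSettings), e.g.:
    double verticalScale() const { return mVerticalScale; }
    void setVerticalScale( double scale ) { mVerticalScale = scale; }
  private:
    double mVerticalScale = 1.0;
};

// Asynchronously builds geometry (+ 4x4 transform) for a single tile and
// offers geometry-specific services; one subclass per terrain type
// (flat terrain, raster DEM, quantized mesh, ...).
class QgsTerrainGeometryGenerator
{
  public:
    virtual ~QgsTerrainGeometryGenerator() = default;
    // result (QGeometry, QGeometryRenderer, transform) delivered asynchronously
    virtual void requestGeometry( int tileX, int tileY, int tileZoom ) = 0;
    // geometry-specific services:
    virtual bool rayIntersection( const QgsRay3D &ray, QgsVector3D &intersection ) const = 0;  // e.g. identify tool
    virtual double elevationAt( double x, double y ) const = 0;  // e.g. clamping data to terrain
};
```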

Globe: Introduction of Globe Scene

The Qgs3DMapSettings class will get a “scene type” property - either a “globe” or a “local” scene.

The globe scene will have various specifics (at least in the beginning):

  • it will require a geocentric CRS (defaulting to EPSG:4978, which is based on the WGS84 ellipsoid)
  • it will have no filtering with setExtent()
  • only “flat” (constant elevation) terrain will be available for the globe - there will be a new terrain geometry generator for it. Terrain geometry generators for other terrain types (quantized mesh, raster DEM, ...) will get added in the future as needed.
  • it will not have any extra lights apart from the implicit light (to be determined: a directional light from sunlight or a "headlight" from the camera)
  • some effects will not be available (e.g. shadows)

The world coordinates will be the same as the axes of the geocentric CRS - i.e. (0,0,0) is the earth's center, the equator lies in the X-Y plane, +Z is the north pole, -Z is the south pole, +X is lon=0, +Y is lon=90deg, -X is lon=180deg.
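For reference, this axis convention matches the standard geodetic-to-ECEF conversion (which is what PROJ computes when transforming to EPSG:4978). For latitude $\varphi$, longitude $\lambda$ and ellipsoidal height $h$:

$$X = (N(\varphi)+h)\cos\varphi\cos\lambda \qquad Y = (N(\varphi)+h)\cos\varphi\sin\lambda \qquad Z = \left((1-e^2)N(\varphi)+h\right)\sin\varphi$$

where $N(\varphi) = a/\sqrt{1-e^2\sin^2\varphi}$ is the prime vertical radius of curvature, $a$ the ellipsoid's semi-major axis and $e$ its eccentricity. For example, $\varphi=0, \lambda=0, h=0$ maps to $(a,0,0)$: a point on the equator at the prime meridian, on the +X axis as described above.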

The local scene will require a projected CRS (as is the case right now). Either we keep the existing convention - (X,-Z) for the map plane, +Y for “top”, and the world origin at the center of the scene - or we make the world origin coincident with the projection's origin (which would mean one less offset to worry about), potentially also changing the axes so that (X,Y,Z) in the 3D scene's world coordinates corresponds to (X,Y,Z) in map coordinates.

Tessellation of the Earth's terrain will use a geographic grid - each terrain tile's extent will be defined by (lon0,lat0,lon1,lat1) coordinates, and the PROJ library will be used to convert lat/lon to ECEF coordinates. There will be two root chunks: one for the east hemisphere (0,-90,180,90) and one for the west hemisphere (-180,-90,0,90); each of these chunks will then be recursively split into four child chunks using a quadtree approach. There are other ways to handle the Earth's tessellation, but this one is the most straightforward for a chunked implementation. This method's main weakness is at the poles, where the chunk geometry tends to create long narrow triangles (also causing texturing issues), but this is generally not a big issue (and these artifacts can be seen in other globe implementations as well).
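A minimal sketch of the two ideas from this paragraph: the quadtree split of a (lon0,lat0,lon1,lat1) tile, and the lat/lon to ECEF conversion done through QGIS's coordinate transform API (which uses PROJ underneath). Function names are illustrative; error handling is omitted:

```cpp
#include <QVector>
#include "qgscoordinatereferencesystem.h"
#include "qgscoordinatetransform.h"
#include "qgscoordinatetransformcontext.h"
#include "qgsrectangle.h"

// Split a tile given as (lon0, lat0, lon1, lat1) into its four quadtree children.
QVector<QgsRectangle> splitTile( const QgsRectangle &extent )
{
  const double midLon = ( extent.xMinimum() + extent.xMaximum() ) / 2;
  const double midLat = ( extent.yMinimum() + extent.yMaximum() ) / 2;
  return
  {
    QgsRectangle( extent.xMinimum(), extent.yMinimum(), midLon, midLat ),  // SW
    QgsRectangle( midLon, extent.yMinimum(), extent.xMaximum(), midLat ),  // SE
    QgsRectangle( extent.xMinimum(), midLat, midLon, extent.yMaximum() ),  // NW
    QgsRectangle( midLon, midLat, extent.xMaximum(), extent.yMaximum() ),  // NE
  };
}

// Convert one lon/lat/height vertex (degrees, metres) to geocentric ECEF
// metres - all in double precision, on the CPU.
void lonLatToEcef( double &x, double &y, double &z )
{
  const QgsCoordinateReferenceSystem llh( QStringLiteral( "EPSG:4979" ) );   // WGS84 3D geographic
  const QgsCoordinateReferenceSystem ecef( QStringLiteral( "EPSG:4978" ) );  // WGS84 geocentric
  const QgsCoordinateTransform ct( llh, ecef, QgsCoordinateTransformContext() );
  ct.transformInPlace( x, y, z );  // x,y,z now hold geocentric coordinates
}
```

The two root chunks would then simply be QgsRectangle(-180,-90,0,90) and QgsRectangle(0,-90,180,90).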

Just like in the local scene, it will be possible to turn off the terrain entity completely in the globe scene - this is useful when a data source (e.g. Google's photorealistic 3D tiles) already includes terrain.

Globe: Camera Control

The existing camera controller is implemented with many assumptions that the scene lies in one plane, and there are various bits of functionality that may not fit well with a globe scene. We therefore suggest starting the implementation with a new camera controller (e.g. QgsGlobeCameraController), which would support basic "terrain-based" camera navigation similar to other virtual globes.

Once the globe camera controller is working, we will evaluate the feasibility of further steps - whether to have an abstract base camera controller class (with an implementation for each scene type), whether to move the globe-related code into the existing QgsCameraController, or whether to choose some other way forward.

Risks

There are some risks involved in this:

  • performance - some code changes (especially the bits for large scene support) may cause lower rendering performance. We will be monitoring this.
  • by using our own camera, transforms, model-view-projection matrices and logarithmic depth buffer, we are moving away from idiomatic Qt3D code. This is not a bad thing as such, but it may cause the 3D-related code to be less clear for newcomers.

Performance Implications

As mentioned above, the introduction of the logarithmic depth buffer may slow down rendering, but this is expected to have a very low / negligible impact. The relative-to-center rendering approach may also have a minor effect, as we will need to do double precision matrix calculations for visible tiles, but this is again considered a small amount of extra work per frame.

Backwards Compatibility

These changes should be fully backwards compatible. If we end up changing how the coordinate system of local scenes is set up, there could in theory be some minor incompatibilities between older and newer QGIS project files.

Thanks

Special thanks to Kevin Ring from Cesium and to Mike Krus from KDAB for their useful insights.

@nyalldawson (Contributor)

+1
(These have been extensively pre-reviewed during brainstorming sessions and I'm also happy with the described approach.)

nyalldawson added a commit to nyalldawson/QGIS that referenced this issue Aug 15, 2024 (updated Aug 16 and Aug 26; merged to qgis/QGIS on Aug 28, 2024):

Create a small, cheap to copy (non-qobject) class
Qgs3DMapSettingsSnapshot which is designed to store
just cheap properties of Qgs3DMapSettings. Then use this
object wherever possible to avoid accessing the (non-thread
safe) Qgs3DMapSettings object for retrieval of simple
map properties (eg crs, extent, ...)

Refs qgis/QGIS-Enhancement-Proposals#301
@benoitdm-oslandia
Very nice idea! I am wondering about memory and data reprojection issues (e.g. precision loss), but I am eager to see the first PRs!

@vpicavet (Member)

> +1 (These have been extensively pre-reviewed during brainstorming sessions and I'm also happy with the described approach.)

I wish these brainstorming sessions had been public and open to other major 3D contributors :-/

@autra commented Oct 17, 2024

This is interesting! I can share some of our experience from the web side (with giro3d), which may or may not apply to you:

  • for the precision problem, the most important thing to do was indeed to use double precision matrices. It is important to precalculate the modelViewMatrix CPU-side. It is not important to include the projectionMatrix (three.js does it shader-side, for instance). I think it is also important that your 3D objects are correctly offset (the vertices themselves should not be big). If this is done, in my experience the rest is not a big deal: you can keep float32 vertices. So that means you might not need to change your camera representation. Why did you think you needed to change it? Does it carry more than just position and orientation?
  • for the depth buffer precision problem: in WebGL, the logarithmic depth buffer wasn't enough to decrease z-fighting in a satisfactory way. What helped a lot more is something you haven't considered yet: using adaptive values for the camera's near/far planes (see the sketch below). This avoids constantly having the near plane at 0.5 and the far plane at a very big value. It's a bit more (coding) work though, because each entity/object needs to report its approximate min/max distance to the camera each frame. In practice we don't see a big difference with the logarithmic depth buffer, but we do see big improvements with this. You should consider it instead (or in addition, as they are not mutually exclusive)! Here is the relevant code on giro3d
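A minimal sketch of the adaptive near/far idea described in the previous point (illustrative names only, not giro3d's actual API): each visible entity reports its approximate depth range relative to the camera, and the camera's near/far planes are fitted tightly around the union:

```cpp
#include <algorithm>
#include <limits>
#include <vector>

struct DepthRange { double nearest; double farthest; };

// Fit the camera's near/far planes around the ranges reported by all
// visible entities this frame.
DepthRange fitNearFar( const std::vector<DepthRange> &entityRanges )
{
  DepthRange r { std::numeric_limits<double>::max(), 0.0 };
  for ( const DepthRange &e : entityRanges )
  {
    r.nearest = std::min( r.nearest, e.nearest );
    r.farthest = std::max( r.farthest, e.farthest );
  }
  // Pad a little (and clamp near away from zero) so geometry is not
  // clipped by rounding.
  r.nearest = std::max( r.nearest * 0.9, 0.001 );
  r.farthest = r.farthest * 1.1;
  return r;
}
```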

One note about your sources: I was told that the book "3D Engine Design for Virtual Globes" might be out of date in places. After all, it was written nearly 15 years ago, and GPUs and the state of the art have evolved since... (I have never read it entirely, so I can't tell you exactly where ;-)). It might be a good idea to challenge these techniques with more recent books or papers.

Of course, this is our experience from the JS / WebGL world's point of view. QGIS is a desktop application and the context is quite different:

  • it has easier access to the full OpenGL capabilities (in WebGL we only have access to vertex and fragment shaders, which is limiting)
  • it has easier access to multiple threads (in JS, workers do have a cost, especially to transmit data in and out of them)
  • so it will tend to have more data displayed, especially because you can load stuff faster from disk than we can from the network

For these reasons, the performance bottlenecks might be different for you than for us. But overall, if something works performance-wise in JS, it should work even better in C++ :-)

I'd be more than happy to provide pointers to our code, especially our rendering loop, shaders, and the techniques we use.

In particular, we have actually started to implement a globe mode which already works quite well, and we support a lot of different data types and rendering modes that may be of interest to you.

I can't wait to test that in QGIS :-)

@autra commented Oct 17, 2024

One additional remark: I'm not familiar with the QEP process and granularity, but each of your "proposed solutions" could (and should, imo) be implemented relatively independently. Most of these techniques do not depend on the type of scene (globe or planar), so I'd decouple the two aspects completely.

I admit it's my selfish point of view: I'm a lot more interested in improving the experience with the planar view than with a globe view in QGIS (as I work very rarely with non-projected data).

@wonder-sk (Member, Author)

@autra thanks for your comments

> that means you might not need to change your camera representation. Why did you think you needed to change it? Do they carry more than just their position and orientation?

Because QCamera in Qt3D only works with float32 coordinates, and the MV and MVP matrices get calculated with float32...

> for the depth buffer precision problem: in webgl, the logarithmic depth buffer wasn't enough to decrease z-fighting in a satisfactory way

Can you expand on that? I have done some prototyping (see the logdepth qt3d experiment) and that showed a very good depth buffer precision increase.

> What helped a lot better is something you haven't considered yet: use adaptative values for near/far plane for the camera.

That's actually what we're doing right now 😉 But it's annoying, it breaks sometimes, and it still has the fundamental problem with the precision of the default depth buffer setup.

> One note about your sources: I was told that the book "3D Engine Design for Virtual Globes" might be out-of-date sometimes. After all, it has been written nearly 15 years ago, and GPUs and state of the art has evolved since

For sure some things may get outdated... I was checking the relevant parts with one of the authors - but happy to hear if there are better/newer techniques for the relevant bits 🙂

> Especially, we have actually started to implement a globe mode which already works quite well, and we support a lot of different data types and rendering modes that can be of interest to you.

Good luck getting the globe sorted in your project!

> I'm not familiar with the QEP process and granularity, but each of your "proposed solutions" could (and should imo) be implemented relatively independently. Most of these technics are not dependent of the type of scene (globe or planar), so I'd decorrelate the 2 aspects completely.

That's the plan!

@autra commented Oct 18, 2024

> Because the QCamera in Qt3D only works with float32 coordinates, and MV, MVP matrices get calculated with float32...

Ok, yes, because you have a projection matrix there. I'd be curious to know if you have tested with the regular QCamera, just to see if the projection matrix alone can stay 32 bits? (That being said, there might be a typing issue there in C++ that we don't have in JS; I'm too ignorant in C++ to be certain of that.)

> Can you expand on that? I have done some prototyping (see the logdepth qt3d experiment) and that showed very good depth buffer precision increase.

It certainly increases its precision, yes, but sometimes not enough for us, for instance when geometries cross each other (the case we had: a terrain with cave roofs, some of them very near the terrain).

I played a bit with your example and couldn't trigger bad z-fighting, so maybe you'll get away with this :-) Our environments are different enough that we might not have the same issues (and time has passed, we don't have the same GPUs, etc.). There are key differences between your example and real life though (coordinates will be bigger, the camera will be farther, etc.), so it may or may not be a problem in practice.

@wonder-sk (Member, Author)

> Ok, yes, because you have a projection matrix there. I'd be curious to know if you have tested with the regular QCamera, just to see if the projection matrix alone can stay 32 bits?

The projection matrix alone can certainly stay as float32, but the problem is that with QCamera the model and view matrices are also handled with float32, and there's no way around that...
