Another comprehensive repository you may find useful: awesome-3D-gaussian-splatting
📑 A theoretical analysis of 3D Gaussian Splatting rasterization will be added soon...
First, let's start by understanding the standard unit Gaussian function, which has a mean of $0$ and a variance of $1$:

$$f(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{x^{2}}{2}}$$
Now, without constraining the mean and variance, we express it in a general form:

$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}$$

Here, $\mu$ is the mean and $\sigma^{2}$ is the variance.
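As a quick numerical sketch (assuming NumPy; the function name `gaussian_pdf` is my own), the general 1D Gaussian can be evaluated and checked to integrate to 1:

```python
import numpy as np

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """General 1D Gaussian PDF with mean mu and standard deviation sigma."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

# The standard unit Gaussian is the special case mu = 0, sigma = 1.
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
area = np.sum(gaussian_pdf(x, mu=1.5, sigma=0.7)) * dx  # numerically close to 1
```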
We consider an $n$-dimensional random vector $x = (x_{1}, x_{2}, \dots, x_{n})^{T}$ whose components are mutually independent, with a mean of:

$$\mu = (\mu_{1}, \mu_{2}, \dots, \mu_{n})^{T}$$

and a variance of:

$$\sigma = (\sigma_{1}, \sigma_{2}, \dots, \sigma_{n})^{T}$$

In this case, according to the probability density formula in probability theory, the joint density is the product of the individual densities:

$$f(x) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma_{i}} e^{-\frac{(x_{i}-\mu_{i})^{2}}{2\sigma_{i}^{2}}} = \frac{1}{(2\pi)^{n/2}\,\sigma_{1}\sigma_{2}\cdots\sigma_{n}} e^{-\frac{1}{2}\sum_{i=1}^{n}\frac{(x_{i}-\mu_{i})^{2}}{\sigma_{i}^{2}}}$$
We let

$$z^{2} = \sum_{i=1}^{n}\frac{(x_{i}-\mu_{i})^{2}}{\sigma_{i}^{2}}$$

and

$$\sigma_{z} = \sigma_{1}\sigma_{2}\cdots\sigma_{n}$$

so that

$$f(x) = \frac{1}{(2\pi)^{n/2}\,\sigma_{z}} e^{-\frac{z^{2}}{2}}$$
Mathematically, the covariance matrix $\Sigma$ collects the pairwise covariances of the components of $x$. At this point the variables are independent, so only the diagonal elements are non-zero, and each diagonal element is the covariance of $x_{i}$ with itself, i.e., its variance:

$$\Sigma = \begin{pmatrix} \sigma_{1}^{2} & & \\ & \ddots & \\ & & \sigma_{n}^{2} \end{pmatrix}$$

Its determinant is:

$$|\Sigma| = \sigma_{1}^{2}\sigma_{2}^{2}\cdots\sigma_{n}^{2}, \qquad |\Sigma|^{1/2} = \sigma_{1}\sigma_{2}\cdots\sigma_{n} = \sigma_{z}$$

Noting also that $z^{2} = (x-\mu)^{T}\Sigma^{-1}(x-\mu)$, we can simplify the expression above to:

$$f(x) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}} e^{-\frac{1}{2}(x-\mu)^{T}\Sigma^{-1}(x-\mu)}$$
Substitute this into the multivariate normal Gaussian distribution function with $n = 3$ to obtain the 3D Gaussian:

$$f(x) = \frac{1}{(2\pi)^{3/2}\,|\Sigma|^{1/2}} e^{-\frac{1}{2}(x-\mu)^{T}\Sigma^{-1}(x-\mu)}$$
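A small NumPy check (function name is my own) that the covariance-matrix form of the PDF reduces, for a diagonal $\Sigma$, to the product of independent 1D Gaussians:

```python
import numpy as np

def mvn_pdf(x, mu, cov):
    """Multivariate normal PDF: exp(-0.5 (x-mu)^T cov^-1 (x-mu)) / ((2 pi)^{n/2} |cov|^{1/2})."""
    n = len(mu)
    d = x - mu
    norm = (2.0 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(cov))
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / norm)

mu = np.array([1.0, -2.0, 0.5])
sigmas = np.array([0.5, 1.0, 2.0])
cov = np.diag(sigmas ** 2)          # independent variables: diagonal covariance
x = np.array([1.2, -1.5, 1.0])

joint = mvn_pdf(x, mu, cov)
# Product of the three 1D Gaussian densities.
product = float(np.prod(np.exp(-0.5 * ((x - mu) / sigmas) ** 2)
                        / (np.sqrt(2.0 * np.pi) * sigmas)))
```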
The authors use the product of this PDF and a learned per-Gaussian opacity $\alpha$ as each Gaussian's contribution during blending.
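For context, the blending itself follows standard front-to-back alpha compositing over the depth-sorted Gaussians (this is the compositing formula used in the paper, where $c_{i}$ is the color of the $i$-th Gaussian and $\alpha_{i}$ its evaluated opacity):

$$C = \sum_{i \in N} c_{i}\,\alpha_{i} \prod_{j=1}^{i-1} (1 - \alpha_{j})$$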
In the 3D Gaussian Splatting paper, the authors define the multivariate Gaussian function in world space using a 3D covariance matrix $\Sigma$ (compare the normalized form above; the paper's Equation 4 omits the normalization constant $\frac{1}{(2\pi)^{3/2}|\Sigma|^{1/2}}$):

$$G(x) = e^{-\frac{1}{2}x^{T}\Sigma^{-1}x}$$
In the context of the 3D Gaussian Splatting paper, this 3D Gaussian is one of the optimization targets: its parameters (the position $\mu$, the covariance $\Sigma$, the opacity $\alpha$, and the spherical-harmonics color coefficients) are optimized during training.
In order to rasterize the 3D Gaussians into a 2D image, a viewing transformation from world to camera space, denoted $W$, is required, together with the Jacobian $J$ of the affine approximation of the projective transformation. The camera-space covariance matrix is then:

$$\Sigma' = J W \Sigma W^{T} J^{T}$$
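A minimal NumPy sketch of the world-to-camera covariance projection $\Sigma' = J W \Sigma W^{T} J^{T}$ (the function name and the pinhole Jacobian layout follow the EWA-splatting form and are illustrative, not the official implementation):

```python
import numpy as np

def project_covariance(cov3d, W, t, fx, fy):
    """Screen-space 2D covariance Sigma' = J W Sigma W^T J^T.

    cov3d  : 3x3 world-space covariance
    W      : 3x3 rotation part of the world-to-camera transform
    t      : Gaussian center in camera space (tx, ty, tz)
    fx, fy : focal lengths; J is the Jacobian of the perspective projection
    """
    tx, ty, tz = t
    J = np.array([
        [fx / tz, 0.0,     -fx * tx / tz**2],
        [0.0,     fy / tz, -fy * ty / tz**2],
    ])
    return J @ W @ cov3d @ W.T @ J.T   # 2x2 screen-space covariance

cov3d = np.diag([0.04, 0.01, 0.09])
W = np.eye(3)
cov2d = project_covariance(cov3d, W, t=(0.5, -0.2, 4.0), fx=800.0, fy=800.0)
```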
The covariance matrix has several important properties:
- Symmetry: The covariance matrix is a symmetric matrix, meaning that the elements of the matrix are symmetric with respect to the main diagonal.
- Non-Negativity: The covariance matrix is a positive semi-definite matrix, which means that its eigenvalues are non-negative.
- Diagonal Elements: The diagonal elements of the covariance matrix contain the variances of the individual variables.
- Off-Diagonal Elements: The off-diagonal elements of the covariance matrix represent the covariances between different variables, indicating the degree of correlation between them.
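These properties are easy to check numerically; a sketch (assuming NumPy) that builds a sample covariance from correlated data and verifies symmetry and non-negative eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 3D data: Gaussian samples pushed through a random linear map.
samples = rng.normal(size=(1000, 3)) @ rng.normal(size=(3, 3))
cov = np.cov(samples, rowvar=False)     # 3x3 sample covariance matrix

is_symmetric = np.allclose(cov, cov.T)
eigvals = np.linalg.eigvalsh(cov)       # real eigenvalues of the symmetric matrix
is_psd = np.all(eigvals >= -1e-10)      # positive semi-definiteness
variances = np.diag(cov)                # per-variable variances on the diagonal
```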
Using the full covariance matrix directly introduces more parameters, and gradient-descent updates can easily produce matrices that violate the positive semi-definite condition; only positive semi-definite matrices are valid covariances, and since the matrix is symmetric, only its upper-triangular elements need to be stored. Therefore, the authors decompose $\Sigma$ into a rotation matrix $R$ and a scaling matrix $S$:

$$\Sigma = R S S^{T} R^{T}$$
The process of decomposing the covariance matrix $\Sigma$ is as follows:

- Calculate the eigenvalues $\lambda_{1}, \lambda_{2}, \lambda_{3}$ and the eigenvectors $v_{1}, v_{2}, v_{3}$ of the covariance matrix $\Sigma$.
- Arrange the eigenvalues in descending order.
- The scaling factors are constructed from the eigenvalues, with $S = \text{diag}(\sqrt{\lambda_{1}}, \sqrt{\lambda_{2}}, \sqrt{\lambda_{3}})$ (the square roots appear because $S$ enters the factorization twice, as $SS^{T}$), and the rotation matrix $R$ is constructed from the eigenvectors, with $R = [v_{1}, v_{2}, v_{3}]$.
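The decomposition steps above can be sketched with NumPy's symmetric eigensolver (a sketch; note that because $S$ appears twice in $\Sigma = RSS^{T}R^{T}$, its entries are the square roots of the eigenvalues):

```python
import numpy as np

Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 0.8]])   # a symmetric positive-definite covariance

eigvals, eigvecs = np.linalg.eigh(Sigma)   # eigenvalues ascending, orthonormal eigenvectors
order = np.argsort(eigvals)[::-1]          # arrange eigenvalues in descending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

S = np.diag(np.sqrt(eigvals))   # scaling matrix: square roots of the eigenvalues
R = eigvecs                     # columns are the eigenvectors (may need a sign flip
                                # to make det(R) = +1, i.e., a proper rotation)
Sigma_rebuilt = R @ S @ S.T @ R.T
```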
An orthogonal rotation matrix can be represented by a rotation quaternion. Assume a unit quaternion $q = (q_{r}, q_{i}, q_{j}, q_{k})$, where $q_{r}$ is the real part.
Reference: *Real-Time Rendering, 3rd edition*:
The trace of the rotation matrix built from a unit quaternion satisfies $\mathrm{tr}(R) = 4q_{r}^{2} - 1$, so the quaternion can be recovered from the matrix via:

$$q_{r} = \frac{1}{2}\sqrt{\mathrm{tr}(R) + 1}, \qquad q_{i} = \frac{R_{32} - R_{23}}{4q_{r}}, \qquad q_{j} = \frac{R_{13} - R_{31}}{4q_{r}}, \qquad q_{k} = \frac{R_{21} - R_{12}}{4q_{r}}$$
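A sketch of this trace-based extraction (assuming the unit-quaternion convention $(q_r, q_i, q_j, q_k)$ and the standard quaternion-to-matrix layout; it handles only the numerically easy case $\mathrm{tr}(R) > 0$, whereas robust implementations branch on the largest diagonal element):

```python
import numpy as np

def quat_from_matrix(R):
    """Recover (qr, qi, qj, qk) from a rotation matrix via its trace.

    Uses tr(R) = 4*qr^2 - 1, so qr = 0.5*sqrt(tr(R) + 1); valid when tr(R) > 0.
    """
    qr = 0.5 * np.sqrt(np.trace(R) + 1.0)
    qi = (R[2, 1] - R[1, 2]) / (4.0 * qr)
    qj = (R[0, 2] - R[2, 0]) / (4.0 * qr)
    qk = (R[1, 0] - R[0, 1]) / (4.0 * qr)
    return np.array([qr, qi, qj, qk])

# Round-trip check: rotation about the z-axis by 30 degrees should give
# the quaternion (cos(theta/2), 0, 0, sin(theta/2)).
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
q = quat_from_matrix(Rz)
```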
In the official 3D Gaussian Splatting implementation, the covariance matrix is computed from the scaling factors and the rotation quaternion in `build_covariance_from_scaling_rotation()`:
```python
def build_covariance_from_scaling_rotation(scaling, scaling_modifier, rotation):
    # L = R @ S: the rotation matrix built from the quaternion, times the
    # (optionally modified) diagonal scaling matrix.
    L = build_scaling_rotation(scaling_modifier * scaling, rotation)
    # Sigma = L @ L^T = R S S^T R^T (batched over all Gaussians).
    actual_covariance = L @ L.transpose(1, 2)
    # Keep only the six upper-triangular elements of the symmetric matrix.
    symm = strip_symmetric(actual_covariance)
    return symm
```
Using $L = RS$ to represent the combined rotation and scaling, the covariance is computed as $\Sigma = LL^{T}$.
Normalize the quaternion $r$ to obtain the unit quaternion $q$:

$$q = \frac{r}{\|r\|}$$
In a left-handed coordinate system, the unit quaternion $q = (q_{r}, q_{i}, q_{j}, q_{k})$ corresponds to the rotation matrix:

$$R = \begin{pmatrix} 1-2(q_{j}^{2}+q_{k}^{2}) & 2(q_{i}q_{j}-q_{r}q_{k}) & 2(q_{i}q_{k}+q_{r}q_{j}) \\ 2(q_{i}q_{j}+q_{r}q_{k}) & 1-2(q_{i}^{2}+q_{k}^{2}) & 2(q_{j}q_{k}-q_{r}q_{i}) \\ 2(q_{i}q_{k}-q_{r}q_{j}) & 2(q_{j}q_{k}+q_{r}q_{i}) & 1-2(q_{i}^{2}+q_{j}^{2}) \end{pmatrix}$$
The covariance matrix is then recovered as:

$$\Sigma = R S S^{T} R^{T} = L L^{T}$$

Only the six upper-triangular elements of $\Sigma$ need to be stored, because the matrix is symmetric; this is exactly what `strip_symmetric()` extracts.
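Putting the pieces together, here is a NumPy sketch of the same forward computation as the repository's `build_covariance_from_scaling_rotation()` (the helper layout is my own simplification; the quaternion ordering $(r, x, y, z)$ follows the official code):

```python
import numpy as np

def build_rotation(r):
    """Normalize quaternion (r, x, y, z) and convert it to a 3x3 rotation matrix."""
    w, x, y, z = r / np.linalg.norm(r)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def build_covariance(scaling, rotation):
    """Sigma = R S S^T R^T = L L^T with L = R S."""
    L = build_rotation(rotation) @ np.diag(scaling)
    return L @ L.T

def strip_symmetric(cov):
    """Keep only the six upper-triangular elements of the symmetric 3x3 matrix."""
    return cov[np.triu_indices(3)]

cov = build_covariance(np.array([0.2, 0.1, 0.3]), np.array([0.9, 0.1, 0.2, 0.3]))
symm = strip_symmetric(cov)   # six unique values
```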
Up to this point, we have learned the relationship between the covariance matrix, the scaling factors, the rotation quaternion, and the rotation matrix, which lets us combine all of this information in the forward computation of $\Sigma$.
During backpropagation, it is necessary to obtain the gradients of the loss with respect to the scaling factors $s$ and the rotation quaternion $q$; the authors derive these gradients through $\Sigma$ analytically rather than relying on automatic differentiation (see the appendix of the paper).