As the README currently describes it, the quantization training script (train_quantize) is run after train.py has already produced the Gaussian model for a given image; we then run the quantization code on that model_path. However, train_quantize performs quantization-aware training again, which repeats the same Gaussian projection/rasterization operations. This looks like duplicated work.
Is there a particular problem with running the quantized training script from scratch (i.e., without a stored Gaussian model)?
My main concern is that you end up training for 2*N iterations: N iterations for the image representation and then another N for quantization-aware training. This considerably increases encoding time.