Why train for quantization AFTER image representation? #19

Open
ChaosAdmStudent opened this issue Nov 7, 2024 · 0 comments

ChaosAdmStudent commented Nov 7, 2024

As the README currently describes it, the quantization training script (train_quantize) is run after train.py has already fit the Gaussian model for a given image; we then pass that model_path to the quantization code. However, train_quantize performs quantize-aware training again, which repeats the same Gaussian projection/rasterization operations. This essentially seems like double work.

Is there a particular problem with running the quantize-aware training script from scratch, i.e. without a stored Gaussian model?

My main concern is that you end up training for 2*N iterations: N for image representation and then another N for quantization-aware training, which considerably increases encoding time.
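To make the comparison concrete, here is a minimal hypothetical sketch (plain Python on a single scalar, not this repo's rasterization pipeline) of quantize-aware training from scratch with a straight-through estimator: the quantized forward pass and the gradient update happen inside the same N iterations, rather than N fitting iterations followed by N QAT iterations. The step size, learning rate, and toy loss are all assumptions for illustration.

```python
# Hypothetical illustration, not this repo's code: quantize-aware training
# of a single scalar with a straight-through estimator (STE). The forward
# pass uses the quantized value; the gradient is applied to the raw
# (unquantized) parameter as if rounding were the identity.

STEP = 0.25  # assumed quantization step size

def quantize(x, step=STEP):
    """Round x to the nearest multiple of the quantization step."""
    return round(x / step) * step

def qat_from_scratch(target, iters=200, lr=0.1):
    """Fit one scalar to `target` under a quantized forward pass,
    starting from scratch (no pre-trained, full-precision value)."""
    w = 0.0
    for _ in range(iters):
        w_q = quantize(w)            # forward pass sees the quantized weight
        grad = 2.0 * (w_q - target)  # gradient of (w_q - target)**2 w.r.t. w_q
        w -= lr * grad               # STE: apply that gradient to raw w
    return quantize(w)

print(qat_from_scratch(1.0))  # → 1.0, a point on the quantization grid
```

If this single-stage loop reaches comparable quality, the encoding cost is N iterations total instead of 2*N; the open question is whether starting QAT from a random initialization (rather than a converged full-precision model) hurts final reconstruction quality.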
