When training the MeshAutoencoder, I compared ResidualLFQ and ResidualVQ.
ResidualLFQ is your default option, and it reconstructs a reasonable structure.
However, when I use ResidualVQ (without changing any of your default parameters), the reconstruction has significant errors: the validation loss gradually increases, and the reconstruction only yields a few faces (e.g., 3 faces).
I'm not quite sure what the reason is.
```python
if use_residual_lfq:
    self.quantizer = ResidualLFQ(
        dim = dim_codebook,
        num_quantizers = num_quantizers,
        codebook_size = codebook_size,
        commitment_loss_weight = 1.,
        **rlfq_kwargs,
        **rq_kwargs
    )
else:
    self.quantizer = ResidualVQ(
        dim = dim_codebook,
        num_quantizers = num_quantizers,
        codebook_size = codebook_size,
        shared_codebook = True,
        commitment_weight = 1.,
        stochastic_sample_codes = rvq_stochastic_sample_codes,
        # sample_codebook_temp = 0.1,  # temperature for stochastically sampling codes, 0 would be equivalent to non-stochastic
        **rvq_kwargs,
        **rq_kwargs
    )
```
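For anyone trying to narrow this down: the quantizer can be exercised in isolation, outside the autoencoder, to check whether ResidualVQ itself round-trips features sanely. A minimal sketch with toy sizes (substitute your actual dim_codebook, num_quantizers, and codebook_size):

```python
import torch
from vector_quantize_pytorch import ResidualVQ

# toy stand-in for per-face features: (batch, seq, dim)
x = torch.randn(2, 1024, 192)

rvq = ResidualVQ(
    dim = 192,
    num_quantizers = 2,
    codebook_size = 256,   # toy size; the autoencoder default is much larger
    shared_codebook = True,
    commitment_weight = 1.,
    stochastic_sample_codes = True,
    kmeans_init = True,
    threshold_ema_dead_code = 2,
)

quantized, indices, commit_loss = rvq(x)

# round-trip purely through the discrete indices; if this does not match
# `quantized`, the indices do not faithfully encode the quantized output
recon = rvq.get_output_from_indices(indices)
print((quantized - recon).abs().max())
```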
LFQ does not seem to support shared_codebook, so the same index in the mesh codes can correspond to different meanings at different quantizer levels. Could this affect the model's learning?
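As a side note, one way to see what shared_codebook = True actually does is to compare the per-stage codebooks directly. A rough sketch, assuming ResidualVQ exposes a .codebooks property (present in recent vector-quantize-pytorch versions, but worth verifying against yours):

```python
import torch
from vector_quantize_pytorch import ResidualVQ

shared   = ResidualVQ(dim = 192, num_quantizers = 2, codebook_size = 256, shared_codebook = True)
separate = ResidualVQ(dim = 192, num_quantizers = 2, codebook_size = 256, shared_codebook = False)

# with a shared codebook, every residual stage holds the same embedding table,
# so a given index means the same vector at every quantizer depth
cb = shared.codebooks          # (num_quantizers, codebook_size, dim)
print(torch.allclose(cb[0], cb[1]))    # expected: True

cb = separate.codebooks
print(torch.allclose(cb[0], cb[1]))    # expected: False after random init
```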
Additionally, the paper uses an RVQ-VAE, which suggests RVQ should also work reasonably well. In practical training, however, RVQ performs very poorly. Could this be an issue in the code?
Some default parameters:
```python
use_residual_lfq = True,   # whether to use the latest lookup-free quantization
rq_kwargs: dict = dict(
    quantize_dropout = True,
    quantize_dropout_cutoff_index = 1,
    quantize_dropout_multiple_of = 1,
),
rvq_kwargs: dict = dict(
    kmeans_init = True,
    threshold_ema_dead_code = 2,
),
rlfq_kwargs: dict = dict(
    frac_per_sample_entropy = 1.
),
rvq_stochastic_sample_codes = True,
```
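If someone wants to reproduce or experiment with the RVQ path, here is a hedged sketch of switching the autoencoder to ResidualVQ while turning off the stochastic code sampling, one of the knobs that differs from the LFQ path. num_discrete_coors follows the project README; the remaining names come straight from the defaults above:

```python
from meshgpt_pytorch import MeshAutoencoder

autoencoder = MeshAutoencoder(
    num_discrete_coors = 128,             # as in the project README
    use_residual_lfq = False,             # take the ResidualVQ branch
    rvq_stochastic_sample_codes = False,  # deterministic nearest-code lookup instead of sampling
    rvq_kwargs = dict(
        kmeans_init = True,
        threshold_ema_dead_code = 2,
    ),
)
```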
Loss curves: the red curve is ResidualVQ and the grey curve is ResidualLFQ.