
Bug report when saving QuantizationSimModel encoding files for LSTM module #3937

Open
XanxusCrypto opened this issue Mar 26, 2025 · 0 comments

Comments

@XanxusCrypto
Copy link

XanxusCrypto commented Mar 26, 2025

This issue is related to "Bug report when using QuantizationSimModel to produce the quantization model for LSTM module".

After several rounds of PTQ and QAT on the QuantizationSimModel, I used the export API to export the ONNX model and encoding files, and hit an error in the following code in aimet_torch/base/quantsim.py:

        for name in op_names:
            if name in end_op_names:
                output_tensors.extend(op_to_io_tensor_map[name].outputs)
            else:
                intermediate_tensors.extend(op_to_io_tensor_map[name].outputs)

intermediate_tensors.extend(op_to_io_tensor_map[name].outputs)
AttributeError: 'list' object has no attribute 'outputs'

This is because the intermediate outputs of the LSTM module are stored as a list. I printed the map entries to confirm:

        print(name)
        if isinstance(op_to_io_tensor_map[name], list):
            print(op_to_io_tensor_map[name])

rnn_sequence_block.encoder
rnn_sequence_block.encoder#1
rnn_sequence_block.encoder#2
rnn_sequence_block.encoder#3
rnn_sequence_block.encoder#2-1
rnn_sequence_block.encoder#3-1
rnn_sequence_block.encoder#4
rnn_sequence_block.encoder#4-1
rnn_sequence_block.encoder#0-1
rnn_sequence_block.encoder#6
rnn_sequence_block.encoder#7-1
rnn_sequence_block.encoder#root_node
[<aimet_torch.defs.OpToIOTensors object at 0x7f0e50c58250>, <aimet_torch.defs.OpToIOTensors object at 0x7f0e50c2b7c0>]

intermediate_tensors.extend(op_to_io_tensor_map[name].outputs)
AttributeError: 'list' object has no attribute 'outputs'

Thus I tried to work around the bug:

        for name in op_names:
            if name in end_op_names:
                output_tensors.extend(op_to_io_tensor_map[name].outputs)
            else:
                if isinstance(op_to_io_tensor_map[name], list):
                    print(f"Name list {name}")
                    print(op_to_io_tensor_map[name])
                    for named_tensor in op_to_io_tensor_map[name]:
                        intermediate_tensors.extend(named_tensor.outputs)
                else:
                    print(f"Name {name}")
                    print(op_to_io_tensor_map[name])
                    intermediate_tensors.extend(op_to_io_tensor_map[name].outputs)

but then hit another error, which indicated that the LSTM parameter tensors were not recognized:

Param tensor {weight_ih_l0, weight_hh_l0, bias_ih_l0, bias_hh_l0} not found in the valid param set.
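For what it's worth, the list-handling part of my workaround can be condensed into a small normalization helper. This is only a sketch: the OpToIOTensors class below is a stand-in for aimet_torch.defs.OpToIOTensors, and the function is my own, not AIMET's API.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Union


@dataclass
class OpToIOTensors:
    """Stand-in for aimet_torch.defs.OpToIOTensors (hypothetical fields)."""
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)


def collect_output_tensors(
    op_to_io_tensor_map: Dict[str, Union[OpToIOTensors, List[OpToIOTensors]]],
    op_names: List[str],
    end_op_names: List[str],
):
    """Split output tensor names into graph outputs and intermediates,
    tolerating list-valued map entries (as produced for LSTM sub-ops)."""
    output_tensors: List[str] = []
    intermediate_tensors: List[str] = []
    for name in op_names:
        entry = op_to_io_tensor_map[name]
        # Normalize: a lone OpToIOTensors becomes a one-element list.
        entries = entry if isinstance(entry, list) else [entry]
        target = output_tensors if name in end_op_names else intermediate_tensors
        for io_tensors in entries:
            target.extend(io_tensors.outputs)
    return output_tensors, intermediate_tensors
```

This avoids the AttributeError, but it does not address the follow-up "valid param set" error above, which seems to come from how AIMET validates the LSTM parameter names.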

Hoping this can be fixed soon!

Best regards.
