
How to Export 'ONNX' Model? #24

Open
fspanda opened this issue Dec 19, 2022 · 1 comment

Comments


fspanda commented Dec 19, 2022

Hi. Thanks to this code, I was able to build a multi-label classification model.
By the way, could you tell me how to export a model built this way using torch.onnx? I get an error when I use the usual torch.onnx.export call.

My code:

test_comment = "hello"
encoding = tokenizer.encode_plus(
    test_comment,
    add_special_tokens=True,
    max_length=63,
    return_token_type_ids=False,
    padding="max_length",
    return_attention_mask=True,
    return_tensors='pt',
)

torch.onnx.export(
    trained_model,
    (encoding["input_ids"], encoding["attention_mask"]),
    'model.onnx',
    export_params=True,
    do_constant_folding=True,
    opset_version=11,
    input_names=['input_ids', 'attention_mask'],
    output_names=['output'],
)

Error:


RuntimeError                              Traceback (most recent call last)
<ipython-input-49-9c2e1e064898> in <module>
----> 1 torch.onnx.export(trained_model,
      2                     (encoding["input_ids"], encoding["attention_mask"]),
      3                     'model.onnx',
      4                     export_params=True,
      5                     do_constant_folding=True,

~/anaconda3/envs/myenv1/lib/python3.8/site-packages/torch/onnx/__init__.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
    273 
    274     from torch.onnx import utils
--> 275     return utils.export(model, args, f, export_params, verbose, training,
    276                         input_names, output_names, aten, export_raw_ir,
    277                         operator_export_type, opset_version, _retain_param_name,

~/anaconda3/envs/myenv1/lib/python3.8/site-packages/torch/onnx/utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
     86         else:
     87             operator_export_type = OperatorExportTypes.ONNX
---> 88     _export(model, args, f, export_params, verbose, training, input_names, output_names,
     89             operator_export_type=operator_export_type, opset_version=opset_version,
     90             _retain_param_name=_retain_param_name, do_constant_folding=do_constant_folding,

~/anaconda3/envs/myenv1/lib/python3.8/site-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, opset_version, _retain_param_name, do_constant_folding, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, enable_onnx_checker, use_external_data_format, onnx_shape_inference)
    687 
...
    128             wrapper,
    129             in_vars + module_state,

RuntimeError: output 1 (0
[ CPULongType{} ]) of traced region did not have observable data dependence with trace inputs; this probably indicates your program cannot be understood by the tracer.

Thank you in advance for your reply.

@Siddharth-Latthe-07

Possible solutions for the above issue:

  1. Ensure the model is in evaluation mode: before exporting, call `model.eval()`.
  2. Create a wrapper for the model: sometimes a wrapper class that handles the inputs and returns only the tensors you need can help.
  3. Handle tokenizer outputs correctly: ensure that the inputs are properly formatted before being fed into the model.

A sample code snippet which might help you:

import torch
import torch.onnx
from transformers import BertTokenizer

class ModelWrapper(torch.nn.Module):
    def __init__(self, model):
        super(ModelWrapper, self).__init__()
        self.model = model

    def forward(self, input_ids, attention_mask):
        outputs = self.model(input_ids=input_ids, attention_mask=attention_mask)
        return outputs.last_hidden_state

# Assuming `trained_model` is your trained BERT-based model
trained_model.eval()  # Ensure the model is in evaluation mode

# Create an instance of the wrapper
wrapped_model = ModelWrapper(trained_model)

# Example input
test_comment = "hello"
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

encoding = tokenizer.encode_plus(
    test_comment,
    add_special_tokens=True,
    max_length=63,
    return_token_type_ids=False,
    padding="max_length",
    return_attention_mask=True,
    return_tensors='pt',
)

input_ids = encoding["input_ids"]
attention_mask = encoding["attention_mask"]

# Export the model
torch.onnx.export(
    wrapped_model,
    (input_ids, attention_mask),
    'model.onnx',
    export_params=True,
    do_constant_folding=True,
    opset_version=11,
    input_names=['input_ids', 'attention_mask'],
    output_names=['output'],
    dynamic_axes={
        'input_ids': {0: 'batch_size', 1: 'sequence_length'},
        'attention_mask': {0: 'batch_size', 1: 'sequence_length'},
        'output': {0: 'batch_size', 1: 'sequence_length'}
    }
)

Hope this helps,
Thanks
