From PyTorch

Prerequisites

Ensure you have the following installed:

pip install torch onnx

Step-by-Step Code Example

import torch
from my_model import MyModel  # Replace with your actual model import

# 1. Load and prepare your model
model = MyModel()
model.eval()  # Important: set the model to evaluation mode for inference

# 2. Create a dummy input tensor that matches the model's expected input shape
dummy_input = torch.randn(1, 10)  # Shape: [batch_size, input_dimension]

# 3. Export the model to ONNX
torch.onnx.export(
    model,                     # The model being converted
    dummy_input,               # Example input to trace the model
    "model.onnx",              # Output file path
    export_params=True,        # Export trained weights
    opset_version=11,          # ONNX opset version (ensure compatibility with consumers)
    do_constant_folding=True,  # Optimize by folding constant expressions
    input_names=['input'],     # Name for the input tensor
    output_names=['output'],   # Name for the output tensor
    dynamic_axes={             # Enable dynamic batch sizes
        'input': {0: 'batch_size'},
        'output': {0: 'batch_size'}
    }
)

Notes

  • Evaluation mode (model.eval()): Ensures layers like dropout or batch norm behave correctly during export.

  • dummy_input: Used by torch.onnx.export to trace the model graph.

  • Dynamic axes: Allows the exported model to handle varying batch sizes at inference time.

  • opset_version: Make sure the target ONNX runtime supports the specified version.

Learn More

For more advanced export configurations—such as handling multiple inputs/outputs, exporting with custom operations, or debugging ONNX exports—refer to the official PyTorch ONNX documentation:

🔗 PyTorch ONNX Export Guide