From SavedModel (TensorFlow)
If you’re starting with a model saved in TensorFlow’s SavedModel format, you can easily convert it to ONNX using the tf2onnx library.
We recommend this method for models trained and exported using TensorFlow 2.x, as it preserves key metadata like model signatures, which are required for the export to work.
🧰 Prerequisites
First, make sure tf2onnx is installed:
```bash
pip install tf2onnx
```
📦 Step-by-Step Code Example
```python
import tf2onnx


def convert_savedmodel_to_onnx(saved_model_dir, output_onnx_path, opset=13):
    """Convert a TensorFlow SavedModel to ONNX format.

    Args:
        saved_model_dir (str): Path to the SavedModel directory.
        output_onnx_path (str): Path where the ONNX model will be saved.
        opset (int): ONNX opset version. Default is 13.
    """
    # Convert the SavedModel to ONNX and write it to disk.
    model_proto, _ = tf2onnx.convert.from_saved_model(
        saved_model_dir,
        opset=opset,
        output_path=output_onnx_path,
    )
    print(f"Converted to ONNX and saved to {output_onnx_path}")


# Example usage:
convert_savedmodel_to_onnx("saved_model_dir", "exported_model.onnx")
```
This will export the model to exported_model.onnx, ready for upload.
⚠️ Important: Make Sure Your Model Has a Signature
Your TensorFlow model must contain a serving signature — otherwise, the export will fail.
To verify that your SavedModel includes the correct signatures, you can use the following snippet:
```python
import tensorflow as tf

model = tf.saved_model.load("saved_model_dir")
print("Available Signatures:", list(model.signatures.keys()))
```
Look for an entry like 'serving_default'. If it's missing, re-export your model by wrapping the forward pass in a @tf.function with an explicit input signature and saving it with tf.saved_model.save() (or model.save() for Keras models).
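The re-export can be sketched as follows. This is a toy example using a plain tf.Module so it works independently of Keras; the class name `Scaler`, the input shape, and the directory name are placeholders for your own model.

```python
import tensorflow as tf


class Scaler(tf.Module):
    """Toy model: multiplies an (N, 8) input by a learned weight matrix."""

    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.ones([8, 1]), name="w")

    # The explicit input_signature is what becomes the serving signature
    # in the exported SavedModel.
    @tf.function(input_signature=[tf.TensorSpec([None, 8], tf.float32, name="inputs")])
    def serve(self, inputs):
        return {"outputs": tf.matmul(inputs, self.w)}


module = Scaler()
tf.saved_model.save(
    module,
    "saved_model_dir",
    signatures={"serving_default": module.serve},
)
```

After re-exporting this way, the signature check above should list 'serving_default', and tf2onnx will be able to pick up the input and output tensors from it.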
✅ Best Practices
- Keep input and output tensor names clean and traceable.
- Stick with opset 13+ for compatibility with most runtimes.
- Use simple test data after export to verify correctness.
Once your ONNX model is generated, you’re ready to upload it to your project!