
From Keras (TensorFlow)

If you’ve built your model using the Keras API (e.g. tf.keras.Model), you can export it to ONNX directly using tf2onnx. This is a good option when your model is already instantiated in memory (not yet saved to disk) and you want a quick conversion.

🧰 Prerequisites

Make sure tf2onnx is installed:

pip install tf2onnx

📦 Step-by-Step Code Example

import tensorflow as tf
import tf2onnx

# 1. Build or load your Keras model
# (illustrative architecture; substitute your own inputs/outputs)
inputs = tf.keras.Input(shape=(224, 224, 3), name="input")
x = tf.keras.layers.GlobalAveragePooling2D()(inputs)
outputs = tf.keras.layers.Dense(10)(x)
model = tf.keras.Model(inputs, outputs)

# 2. Define the input signature
spec = (tf.TensorSpec((None, 224, 224, 3), tf.float32, name="input"),)

# 3. Convert to ONNX
output_path = "exported_model.onnx"
model_proto, _ = tf2onnx.convert.from_keras(
    model,
    input_signature=spec,
    opset=13,
    output_path=output_path,
)
print("Saved ONNX model to", output_path)

This will convert your in-memory Keras model into exported_model.onnx, with the input named "input" and shape [None, 224, 224, 3].


⚠️ Important: Define a Clear Input Signature

Unlike converting from a SavedModel, converting from an in-memory Keras model requires you to explicitly define the input signature using tf.TensorSpec. Make sure:

  • The batch dimension is None (for dynamic batching).

  • The name matches what you want your ONNX input to be called (e.g., "input").

If you don’t set the name, tf2onnx may fall back to a default such as "x", which might not match what your inference runtime expects.


✅ Best Practices

  • Ensure your model is in evaluation mode (e.g., not using Dropout or BatchNorm in training mode).

  • Choose an appropriate opset (13+ recommended).

  • Use representative shapes in your TensorSpec to avoid unexpected errors at inference time.

Once the ONNX model is saved, you’re ready to upload it and move to the next step.
