Final Checks Before Deployment

To ensure your ONNX model runs successfully in our system, your exported model must meet two key requirements:

  1. ✅ The model must have exactly one input and one output.

  2. ✅ These must be named "input" and "output" in the ONNX file.

Even if you exported the model using a different tool or method, you can still adjust it to fit this interface. Here’s how to check and fix it:


🔍 Step 1: Check Input and Output Layer Names

Use the following code to inspect the names of your ONNX model’s input and output:

import onnx

onnx_model = onnx.load("your_model.onnx")
print("Inputs:", [i.name for i in onnx_model.graph.input])
print("Outputs:", [o.name for o in onnx_model.graph.output])

If both are already named "input" and "output", you’re good to go! ✅

If not, proceed to the next step to rename them.


✏️ Step 2: Rename Input and Output (if needed)

If your model has only one input and one output, but with different names, you can use the function below to insert identity layers and rename them safely:

import onnx
from onnx import helper

def rename_onnx_io_with_identity(onnx_path, new_path, new_input_name=None, new_output_name=None):
    model = onnx.load(onnx_path)
    graph = model.graph

    # ----- INPUT -----
    if new_input_name is not None:
        old_input_name = graph.input[0].name
        if old_input_name != new_input_name:
            # Rename the graph input, then bridge the new name back to the
            # old one with an Identity node, so every internal node can keep
            # consuming the old name unchanged.
            identity_in = helper.make_node(
                "Identity",
                inputs=[new_input_name],
                outputs=[old_input_name],
                name="InputAlias",
            )
            graph.node.insert(0, identity_in)
            graph.input[0].name = new_input_name

    # ----- OUTPUT -----
    if new_output_name is not None:
        old_output_name = graph.output[0].name
        if old_output_name != new_output_name:
            # Bridge the old output name to the new one, then rename the
            # graph output. Internal nodes keep producing the old name.
            identity_out = helper.make_node(
                "Identity",
                inputs=[old_output_name],
                outputs=[new_output_name],
                name="OutputAlias",
            )
            graph.node.append(identity_out)
            graph.output[0].name = new_output_name

    onnx.save(model, new_path)
    print(f"Saved ONNX with renamed IO as: {new_path}")

What this does:

This function renames the model's input and output by inserting lightweight Identity nodes that bridge the new names to the old ones. The internal graph is left untouched, which makes this a safe way to alias input/output names without changing the model's logic.

🛠 Usage:

rename_onnx_io_with_identity(
    "your_model.onnx", "your_model_renamed.onnx",
    new_input_name="input", new_output_name="output"
)

Once renamed, go back to Step 1 to verify the new names.
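
You can also run the ONNX checker on the renamed file to confirm the graph is still structurally valid (the file name below matches the usage example above):

import onnx

model = onnx.load("your_model_renamed.onnx")
onnx.checker.check_model(model)  # raises ValidationError if the graph is broken
print("Model is valid ONNX.")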


🧪 Step 3: Test Inference with ONNX

To test the ONNX model directly, you can use this simple utility function:

import onnxruntime as ort
import numpy as np

def run_onnx_inference(onnx_path, input_array):
    """
    Runs inference on an ONNX model with a given input.

    Args:
        onnx_path (str): Path to the ONNX model file.
        input_array (np.ndarray): Input array whose shape matches the model
            input. It is cast to float32, which assumes a float32 model input.

    Returns:
        Tuple of (output array, predicted class/index).
    """
    session = ort.InferenceSession(onnx_path)
    input_name = session.get_inputs()[0].name
    output_name = session.get_outputs()[0].name

    # The float32 cast assumes the model expects float32 input.
    outputs = session.run([output_name], {input_name: input_array.astype(np.float32)})
    predictions = outputs[0]
    # argmax over the flattened output; assumes a single (batch-of-one) prediction.
    predicted_class = int(np.argmax(predictions))

    return predictions, predicted_class

🧪 Example usage:

# Dummy input (adjust shape to match your model)
input_data = np.random.randn(1, 10).astype(np.float32)

# Run the ONNX model
preds, pred_class = run_onnx_inference("your_model.onnx", input_data)

print("Predicted class:", pred_class)

✅ Step 4: Sanity Check — Compare ONNX vs Original Output

To make sure the exported ONNX model behaves the same as your original model (PyTorch, TensorFlow, etc.), compare predictions using sample input.

Here’s a simple function to compare the predictions:

import numpy as np

def compare_predictions(tf_result, onnx_result, atol=1e-5, rtol=1e-5):
    """
    Compares the predictions from the original (TensorFlow or PyTorch) model
    and the ONNX model.

    Args:
        tf_result: Tuple of (predictions, predicted_class) from the original model.
        onnx_result: Tuple of (predictions, predicted_class) from the ONNX model.
        atol: Absolute tolerance for np.allclose.
        rtol: Relative tolerance for np.allclose.
    """
    tf_preds, tf_class = tf_result
    onnx_preds, onnx_class = onnx_result

    print(f"\nTF class: {tf_class}")
    print(f"ONNX class: {onnx_class}")
    if tf_class == onnx_class:
        print("✅ Predicted classes are IDENTICAL.")
    else:
        print("❌ Predicted classes are DIFFERENT.")

    identical = np.allclose(tf_preds, onnx_preds, atol=atol, rtol=rtol)
    print(f"Full prediction arrays {'match' if identical else 'do NOT match'} (allclose with atol={atol}, rtol={rtol})")

If you’re stuck or the model still doesn’t load, feel free to reach out to our support team — we’ll help you export the model properly and ensure it runs smoothly.

Let us know if you want to validate multiple inputs or compare outputs across frameworks!