Model Formats
ONNX vs TensorFlow.js for browser-based detection
Format Comparison
RoadAsset supports two model formats for client-side inference. Choose based on your performance requirements and browser compatibility needs.
Quick Comparison
| Feature | ONNX | TensorFlow.js |
|---|---|---|
| File Count | Single file | Multiple files (shards) |
| WebGPU Support | Yes | Yes |
| WASM Fallback | Yes | Yes |
| Inference Speed | Generally faster | Good |
| Model Size | Typically smaller (quantization options) | Adds model.json metadata overhead |
| Export From | PyTorch, Ultralytics | TensorFlow, Keras |
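Both formats can use WebGPU when the browser exposes it and fall back to WASM otherwise. As a minimal sketch (not specific to either runtime), you can feature-detect WebGPU to see which path a visitor's browser will take:

```typescript
// Minimal sketch: detect whether this browser exposes WebGPU.
// Both ONNX Runtime Web and TensorFlow.js fall back to WASM when it is absent.
const hasWebGPU = typeof navigator !== 'undefined' && 'gpu' in navigator;
console.log(hasWebGPU ? 'WebGPU available' : 'No WebGPU, expect the WASM fallback');
```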
ONNX Format
Open Neural Network Exchange (ONNX) is the recommended format. It is an open standard for machine learning models and provides excellent in-browser performance via ONNX Runtime Web.
Advantages
- Single file deployment - simpler to manage
- Optimized ONNX Runtime Web engine with SIMD and WebGPU acceleration (see the loading sketch after this list)
- Direct export from Ultralytics YOLO
- Smaller file sizes with quantization options
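To illustrate the single-file workflow, here is a minimal sketch of creating an ONNX Runtime Web session with WebGPU preferred and WASM as the fallback. The model path, the input name `images`, and the 640×640 input shape are assumptions based on a typical Ultralytics YOLO export; confirm the real input name with `session.inputNames`.

```typescript
import * as ort from 'onnxruntime-web';

// Assumed path: serve the single .onnx file from your static assets.
const MODEL_URL = '/models/roadasset.onnx';

async function runOnce(): Promise<void> {
  // Execution providers are tried in order: WebGPU first, WASM as fallback.
  const session = await ort.InferenceSession.create(MODEL_URL, {
    executionProviders: ['webgpu', 'wasm'],
  });

  // Dummy 1x3x640x640 float input; real code would fill this from a canvas or video frame.
  const data = new Float32Array(1 * 3 * 640 * 640);
  const input = new ort.Tensor('float32', data, [1, 3, 640, 640]);

  // 'images' matches common Ultralytics exports; check session.inputNames for your model.
  const outputs = await session.run({ images: input });
  console.log(Object.keys(outputs));
}

runOnce().catch(console.error);
```

Listing both providers lets the same bundle run on browsers without WebGPU.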
Export from Ultralytics
```python
from ultralytics import YOLO

model = YOLO('best.pt')
model.export(format='onnx', opset=12, simplify=True)
```

TensorFlow.js Format
TensorFlow.js is Google's ML framework for JavaScript. It provides good compatibility and a mature ecosystem for browser-based inference.
Advantages
- Native JavaScript framework with extensive documentation
- Parallel shard loading for large models
- Built-in model optimization tools
- Good for TensorFlow/Keras users
Export from Ultralytics
```python
from ultralytics import YOLO

model = YOLO('best.pt')
model.export(format='tfjs')
```

Note: TFJS export creates multiple files. Ensure all shard files (group*-shard*.bin) are uploaded alongside model.json.
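For comparison, here is a minimal sketch of loading the multi-file TFJS export in the browser. The `/models/tfjs/` path and the 640×640 NHWC input shape are assumptions and should match your own export.

```typescript
import * as tf from '@tensorflow/tfjs';

async function warmUp(): Promise<void> {
  await tf.ready();

  // model.json references the group*-shard*.bin files, which tf.js fetches in parallel.
  const model = await tf.loadGraphModel('/models/tfjs/model.json');

  // Dummy 1x640x640x3 input to trigger kernel/shader compilation up front.
  const dummy = tf.zeros([1, 640, 640, 3]);
  const out = model.execute(dummy);

  tf.dispose([dummy, out]);
}

warmUp().catch(console.error);
```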
Our Recommendation
Use ONNX for new deployments. It offers the best performance with WebGPU acceleration, simpler deployment (single file), and excellent compatibility with YOLO models trained using Ultralytics.
Learn how to load models