Tech Insights
ONNXRuntime

What is ONNXRuntime?

ONNX Runtime is a cross-platform, high-performance scoring engine for Open Neural Network Exchange (ONNX) models. It is used to accelerate machine learning inference across a wide range of frameworks, operating systems, and hardware. It optimizes and runs ONNX models, improving performance and reducing latency in applications ranging from cloud services to edge devices and mobile platforms. ONNX Runtime supports both CPU and GPU execution.
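As a rough illustration, the sketch below loads an ONNX model and runs a single inference with the ONNX Runtime Python API. The file name "model.onnx" and the dummy input shape are placeholders for whatever model you are serving.

```python
# Minimal sketch: run an ONNX model with ONNX Runtime.
# "model.onnx" and the input shape are placeholders.
import numpy as np
import onnxruntime as ort

# Create an inference session using whichever execution providers
# (CPU, CUDA, etc.) are available in this ONNX Runtime build.
session = ort.InferenceSession(
    "model.onnx",
    providers=ort.get_available_providers(),
)

# Inspect the model's expected input so we can build a matching tensor.
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape, input_meta.type)

# Run inference with a dummy float32 batch (shape shown for an image model).
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy})
print(outputs[0].shape)
```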

What other technologies are related to ONNXRuntime?

ONNXRuntime Competitor Technologies

TensorRT is a high-performance deep learning inference SDK from NVIDIA that optimizes and deploys neural networks, offering similar functionality to ONNX Runtime.
mentioned alongside ONNXRuntime in 2% of relevant job posts (64 posts)

ONNXRuntime Complementary Technologies

MLIR is a compiler infrastructure that can be used to optimize and lower ONNX models, making it a complementary technology.
mentioned alongside ONNXRuntime in 2% of relevant job posts (51 posts)
PyTorch is a popular deep learning framework that can export models to ONNX; ONNX Runtime can then run the exported models, making the two complementary (see the sketch after this list).
mentioned alongside ONNXRuntime in 0% of relevant job posts (203 posts)
CUDA is NVIDIA's parallel computing platform and programming model. ONNX Runtime can use CUDA to accelerate inference on NVIDIA GPUs.
mentioned alongside ONNXRuntime in 0% of relevant job posts (74 posts)
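The export-and-run path described above might look roughly like the following sketch. The tiny model, file name, and tensor names are illustrative, and the CUDA execution provider is only used when a GPU-enabled ONNX Runtime build is installed; otherwise the session falls back to the CPU provider.

```python
# Sketch of the PyTorch -> ONNX -> ONNX Runtime path.
# The model and file name are illustrative placeholders.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# A tiny stand-in model for the export step.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

# Export to ONNX, tracing the graph with a dummy input.
dummy = torch.randn(1, 8)
torch.onnx.export(
    model,
    dummy,
    "tiny_model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)

# Run the exported model with ONNX Runtime. Listing CUDAExecutionProvider
# first asks ONNX Runtime to use an NVIDIA GPU via CUDA when available.
preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]
session = ort.InferenceSession("tiny_model.onnx", providers=providers)

batch = np.random.randn(4, 8).astype(np.float32)
(result,) = session.run(["output"], {"input": batch})
print(result.shape)  # (4, 2)
```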

Which organizations are mentioning ONNXRuntime?

Organization: Microsoft
Industry: Scientific and Technical Services

This tech insight summary was produced by Sumble. We provide rich account intelligence data.

On our web app, we make a lot of our data available for browsing at no cost.

We have two paid products, Sumble Signals and Sumble Enrich, that integrate with your internal sales systems.