Apache TVM
An End to End Machine Learning Compiler Framework for CPUs, GPUs and accelerators

What is Apache TVM?

Apache TVM is an open-source machine learning compiler framework. It enables machine learning engineers to optimize and run computations efficiently on any hardware backend, including CPUs, GPUs, and machine learning accelerators.
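
A minimal sketch of that workflow, using the tensor expression (TE) API shown in TVM's introductory tutorials to compile and run a vector addition on the local CPU. The exact Python surface varies between TVM releases (newer ones emphasize TensorIR and Relax), so treat the calls below as illustrative rather than canonical:

    import numpy as np
    import tvm
    from tvm import te

    # Declare a vector addition as a tensor expression.
    n = 1024
    A = te.placeholder((n,), name="A")
    B = te.placeholder((n,), name="B")
    C = te.compute((n,), lambda i: A[i] + B[i], name="C")

    # Create a default schedule and compile it for the local CPU via LLVM.
    s = te.create_schedule(C.op)
    fadd = tvm.build(s, [A, B, C], target="llvm", name="vector_add")

    # Run the compiled module and check the result against NumPy.
    dev = tvm.cpu(0)
    a = tvm.nd.array(np.random.uniform(size=n).astype("float32"), dev)
    b = tvm.nd.array(np.random.uniform(size=n).astype("float32"), dev)
    c = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
    fadd(a, b, c)
    np.testing.assert_allclose(c.numpy(), a.numpy() + b.numpy(), rtol=1e-5)

The same TE program can be retargeted to a GPU or another backend by changing the schedule and the target string, which is the core idea behind TVM's portability.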

The project's vision is to create a diverse community of experts and practitioners in machine learning, compilers, and systems architecture, with the final goal of building an accessible, extensible, and automated framework. This open-source framework will optimize current and emerging machine learning models for any hardware platform.

Features

  • Compilation: Compiles deep learning models into minimal deployable modules.
  • Automatic Optimization: Infrastructure to automatically generate and optimize models across a broad range of backends with better performance.
  • Hardware Support: Runs on CPUs, GPUs, browsers, microcontrollers, FPGAs and more.
  • Flexibility: Supports block sparsity, quantization, random forests/classical ML, memory planning, and MISRA-C compatibility.
  • Ease of Integration: Compiles deep learning models from Keras, MXNet, PyTorch, TensorFlow, CoreML, DarkNet, and more (see the sketch after this list). Offers multiple language bindings: start using TVM with Python today, then build out production stacks in C++, Rust, or Java.
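
As an illustration of the frontend flow, the following sketch traces a small PyTorch model, imports it through the Relay PyTorch frontend, compiles it for the local CPU, and runs it with the graph executor. The tiny model, the input name "input0", and the opt_level are arbitrary choices for this example, and the API reflects the Relay-based flow used in pre-Relax TVM releases:

    import numpy as np
    import torch
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor

    # A tiny PyTorch model standing in for any real network.
    class TinyNet(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
            self.pool = torch.nn.AdaptiveAvgPool2d(1)
            self.fc = torch.nn.Linear(8, 10)

        def forward(self, x):
            x = torch.relu(self.conv(x))
            x = self.pool(x).flatten(1)
            return self.fc(x)

    # TorchScript-trace the model so the Relay frontend can ingest it.
    example = torch.randn(1, 3, 32, 32)
    scripted = torch.jit.trace(TinyNet().eval(), example)

    # Import into Relay, compile for the local CPU, and run via the graph executor.
    mod, params = relay.frontend.from_pytorch(scripted, [("input0", (1, 3, 32, 32))])
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)

    dev = tvm.cpu(0)
    rt = graph_executor.GraphModule(lib["default"](dev))
    rt.set_input("input0", example.numpy())
    rt.run()
    print(rt.get_output(0).numpy().shape)  # (1, 10)

Swapping in the Keras or TensorFlow frontend follows the same pattern: import the model into Relay, build for a target, and run it through a runtime module.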

Use Cases

  • Optimizing deep learning model inference on various hardware platforms.
  • Deploying machine learning models on edge devices with limited resources (a cross-compilation sketch follows this list).
  • Developing custom machine learning accelerators and integrating them with existing software stacks.
  • Researching and prototyping new machine learning model architectures and optimization techniques.
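
For the edge-deployment case in particular, a common pattern is to cross-compile on a workstation and ship only the compiled artifact plus the lightweight TVM runtime to the device. Below is a sketch under stated assumptions: the tiny Relay module stands in for a real model, and the target triple and cross-compiler name (aarch64-linux-gnu-g++) must match the actual device and installed toolchain:

    import numpy as np
    import tvm
    from tvm import relay
    from tvm.contrib import cc

    # A tiny Relay function (a single dense layer) standing in for a real model.
    data = relay.var("data", shape=(1, 64), dtype="float32")
    weight = relay.var("weight", shape=(16, 64), dtype="float32")
    mod = tvm.IRModule.from_expr(relay.Function([data, weight], relay.nn.dense(data, weight)))
    params = {"weight": tvm.nd.array(np.random.randn(16, 64).astype("float32"))}

    # Target a 64-bit ARM edge device; the triple and CPU features are assumptions.
    target = tvm.target.Target("llvm -mtriple=aarch64-linux-gnu -mattr=+neon")
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)

    # Export a shared library for the device; on the device, the TVM runtime can
    # load it back with tvm.runtime.load_module("deploy_arm.so").
    lib.export_library("deploy_arm.so", fcompile=cc.cross_compiler("aarch64-linux-gnu-g++"))

On microcontrollers, microTVM exports a C project rather than a shared library, but the compile-then-export structure is the same.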
