About
TVM is an open-source deep learning compiler stack for CPUs, GPUs, and specialized accelerators. It aims to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-oriented hardware backends.
We are excited to hold a tutorial on TVM and showcase the different ways TVM can be used to facilitate research on deep learning systems, compilers, and hardware architectures. We welcome TVM contributors, potential users, collaborators, researchers, and practitioners from the broader community.
The presentations will be structured to provide a high-level overview of the research. We will complement the presentations with hands-on tutorials that can be run on your own laptop during or after the tutorial.
Key Takeaways
- Overview of the full TVM deep learning stack, from frameworks and compilers down to hardware.
- Bringing compiler support to new deep learning accelerators.
- Leveraging machine learning for automated program optimization.
- Future trends in software-hardware co-design.
Program
Browse the tentative program to learn more about exciting use cases for TVM. In addition, we will host “TVM Office Hours” during the breaks for those of you who have technical questions about TVM.
Register
You can register for the TVM tutorial at FCRC via the FCRC registration link below. Register by May 24 to receive the early registration rate!