TVM and Deep Learning Compiler Conference

Wednesday, December 12th, 2018. Intellectual House @ UW, Seattle.


About

TVM is an open-source deep learning compiler stack for CPUs, GPUs, and specialized accelerators. It aims to close the gap between productivity-focused deep learning frameworks and performance- or efficiency-oriented hardware backends.

We are excited to hold a conference on the state of the art of deep learning compilation and optimization. We welcome TVM contributors, potential users, UW SAMPL sponsors and collaborators, and researchers and practitioners from the broader community. The conference will cover recent advances in frameworks, compilers, systems and architecture support, security, training, and hardware acceleration.

The goal of the conference is to discuss the state of the art in deep learning compilation, hear about TVM's latest developments, discuss future ideas, and, most importantly, meet other people interested in deep learning compilation and grow the TVM community.
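For readers new to the stack, the short sketch below illustrates the kind of workflow TVM enables: importing a pre-trained model through a framework front end, compiling it for a target backend, and running the result. It is only a rough illustration; the API names (relay.frontend.from_onnx, relay.build, graph_executor) follow recent TVM releases and may differ from older versions, and the model file, input name, and input shape are placeholders.

```python
# Minimal sketch of the TVM compile-and-run flow.
# "model.onnx", the input name "input", and the shape (1, 3, 224, 224)
# are placeholders for illustration only.
import onnx
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Import a pre-trained model via a framework front end (ONNX here).
onnx_model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 3, 224, 224)})

# Compile the high-level module down to machine code for a target backend;
# "llvm" requests a generic CPU build (a GPU target such as "cuda" also works).
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Run the compiled module with the graph executor.
dev = tvm.cpu()
runtime = graph_executor.GraphModule(lib["default"](dev))
runtime.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
runtime.run()
out = runtime.get_output(0).numpy()
```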


Program

Thank you for attending the TVM Conference! Presentation slides are now available. Video recordings will be available soon.

9:00 Keynote – SAMPL, Apple, Amazon, Huawei. Slides
10:15 TVM Stack Overview – Tianqi Chen, UW. Slides
10:45 Deep Learning Compilation at Amazon – Yida Wang, Amazon. Slides
11:05 Break
11:25 AutoTVM & Device Fleet – Eddie Yan, UW. Slides
11:45 VTA Open & Flexible Deep Learning Accelerator – Thierry Moreau, UW. Slides
12:05 Fast & Faster Privacy-Preserving ML in Secure Hardware Enclaves – Nick Hynes, UC Berkeley/Oasis Labs. Slides
12:20 Lunch (boxed lunches will be provided)
13:30 Spatial: A Language and Compiler for Application Accelerators – Kunle Olukotun/Raghu Prabhakar, Stanford & SambaNova. Slides
13:50 Machine Programming – Justin Gottschlich, Intel. Slides
14:10 PlaidML Stripe: Polyhedral IR + Model-guided Optimization – Brian Retford, Intel. Slides
14:25 Relay: a high level differentiable IR – Jared Roesch, UW. Slides
14:45 Scalable Distributed Training with Parameter Hub: A Whirlwind Tour – Liang Luo, UW. Slides
15:05 The HammerBlade: An ML-Optimized Supercomputer for ML and Graphs – Michael Taylor, UW. Slides
15:20 Break, contributors meetup
15:50 TVM @ FB – Andrew Tulloch, Facebook. Slides
16:10 Inference Architectures @ Xilinx – Graham Schelle, Xilinx. Slides
16:30 Lightning talks session
  Efficient Voice Activity Detection via Binarized Neural Networks – Matthai Philipose, Microsoft. Slides
  Heterogeneous Bitwidth Binarization: Weird Operators with Big Benefits – Josh Fromm, UW. Slides
  Generating Fast Operators for Binarizable Networks – Meghan Cowan, UW. Slides
  OpenCL Backend for FPGA – Morita Kazutaka, NTT, Japan. Slides
  Build Your Own VTA Design with Chisel – Luis Vega, UW. Slides
  µTVM: Deep Learning on Bare-Metal Devices – Pratyush Patel, UW. Slides
  Supporting TVM on RISC-V Architectures – Jenq-Kuen Lee, NTHU, Taiwan. Slides
  Bring Your Own Datatypes – Gus Smith, UW. Slides
  AutoScheduler for TVM – Lianmin Zheng, SJTU. Slides
  Hybrid Script: A Text Format for Halide IR & A Python-TVM Hybrid Frontend – Jian Weng, UCLA. Slides
  Automatic Quantization for TVM – Ziheng Jiang, UW. Slides
  Data Visualization with Vega-Lite and Altair – Dominik Moritz, UW. Slides
  TVM on Hexagon DSP – Krzysztof Parzyszek, Qualcomm. Slides
  Sharing, Protection, and Compatibility for Reconfigurable Fabric with AmorphOS – Ahmed Khawaja, UT Austin. Slides
17:35 TVM & the Apache Software Foundation – Markus Weimer, Microsoft and Apache Software Foundation. Slides
18:15 to 20:00 Social (drinks, food)

Follow us on Twitter


Register

Note that if you pre-registered, you do not need to register again.


Click to Register


Hotels

Here is a list of hotels close to the UW campus.