SAMPL is an interdisciplinary machine learning research group exploring problems that span multiple layers of the system stack, including deep learning frameworks, specialized hardware for training and inference, new intermediate representations, differentiable programming, and various applications. We are part of the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Our group is a collaboration among researchers from Sampa, Syslab, PLSE, EFESLab, and CMU Catalyst.

Research

Fiddler

CPU-GPU Orchestration for Fast Inference of MoE Models

Punica

Serving multiple LoRA-finetuned LLMs as one

Atom

Low-bit Quantization for Efficient and Accurate LLM Serving

FlashInfer

Kernel Library for LLM Serving

SparseTIR

Compiler for Sparsity in Deep Learning

Dynamic Tensor Rematerialization

Checkpointing deep learning models as a dynamic analysis

Reticle

Low-level Intermediate Representation (IR) for Programming Modern FPGAs

Glenside

Hardware-software partition exploration with e-graphs

Parameter Box, Parameter Hub, and Parameter Link

Parameter Servers for Efficient Distributed Deep Neural Network Training in Clusters, Datacenters, and the Public Cloud

Relay High-Level Intermediate Representation (IR)

High-level IR for optimizing machine learning models

VTA Deep Learning Accelerator

Hardware/Software Deep Learning Acceleration Stack

Sequential Model Specialization

Fast Video Classification via Adaptive Cascading of Deep Models

TVM Stack

TVM: An Automated End-to-End Optimizing Compiler for Deep Learning

XGBoost

A Scalable Tree Boosting System

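As a brief, hedged illustration of the kind of system listed above, the sketch below shows a minimal training run with the xgboost Python package. The data, parameter values, and problem setup are placeholders chosen for illustration only and are not taken from the project page.

    # Minimal illustrative sketch: train a gradient-boosted model with the
    # xgboost Python package on synthetic data (data and parameters are placeholders).
    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))                 # synthetic features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic binary labels

    dtrain = xgb.DMatrix(X, label=y)
    params = {"objective": "binary:logistic", "max_depth": 4, "eta": 0.1}
    booster = xgb.train(params, dtrain, num_boost_round=50)
    preds = booster.predict(dtrain)                 # predicted probabilities in [0, 1]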

Mission

Models, data, and the computing that supports learning are the three pillars of machine learning. Advances in all three have enabled breakthroughs in the past, and we believe they will enable significant advances in the future as well. Specialized hardware architectures such as GPUs and TPU-like accelerators fuel the ever-growing computing demand of machine learning workloads. We need to address new challenges in scheduling, networking, storage, and programming abstractions to build scalable systems that benefit from emerging hardware architectures and cope with ever-growing volumes of data. Importantly, future models and learning algorithms need to be co-designed with the hardware, and system-level factors need to inform the design of the hardware-software stack. We need to build common, reusable infrastructure that works across hardware backends, and use learning to make systems themselves smarter.

These challenges and research questions span multiple areas of computer science. Hence, we formed SAMPL (System, Architecture, Machine learning, and Programming language), a joint research group that conducts this cross-stack research. We focus on co-designing systems, hardware, and learning models, and we build novel architectures, programming abstractions, and learning systems to enable future intelligent systems. We strive both to build tools usable by the broader community and to explore new directions in machine learning systems.

People

Faculty

Luis Ceze
Professor
Tianqi Chen
Assistant Professor - CMU
Baris Kasikci
Associate Professor
Stephanie Wang
Assistant Professor
Thierry Moreau
Affiliate Assistant Professor
Matthai Philipose
Affiliate Professor
Zachary Tatlock
Associate Professor

Researchers

Past Undergraduate Students

Altan Haan Interned in 2019-2020. Now at Berkeley Ph.D. program.
Mike (Deyuan) He Interned in 2019-2021. Now at Princeton Ph.D. program.
Siyuan Feng Interned in 2019. Now at SJTU Ph.D. program.
Yulun Yao Interned in 2019. Now at Cornell Ph.D. program.
Benjamin Tu Interned in 2019. Now at UIUC MS program.
Josh Pollock Interned in 2018-2019. Now at MIT Ph.D. program.
Lianmin Zheng Interned in 2018. Now at Berkeley Ph.D. program.

Alumni

Luis Vega Ph.D., 2022. Now at Sabana Technologies (Co-founder & CEO).
Steven Lyubomirsky Ph.D., 2022. Now at OctoML.
Joseph McMahan Postdoc, 2021. Now at OctoML.
Meghan Cowan Ph.D., 2021. Now at Microsoft Research.
Jared Roesch Ph.D., 2020. Now at OctoML.
Eddie Yan Ph.D., 2020. Now at NVIDIA.
Liang Luo Ph.D., 2020. Now at Facebook Research.
Josh Fromm Ph.D., 2019. Now at OctoML.
Haichen Shen Ph.D., 2018. Now at AWS AI.
Seungyeop Han Ph.D., 2016. Now at Rubrik, Inc.
Logan Weber MS, 2020. Now at MIT Ph.D. program.
Marisa Kirisame MS, 2020. Now at Utah Ph.D. program.
Ziheng Jiang MS, 2020. Now at OctoML.

Sponsors