SAMPL is an interdisciplinary machine learning research group exploring problems that span multiple layers of the system stack, including deep learning frameworks, specialized hardware for training and inference, new intermediate representations, differentiable programming, and various applications. We are part of the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Our group is a collaboration between researchers from Sampa, Syslab, MODE, and PLSE.
Hardware/Software Deep Learning Acceleration Stack
Fast Video Classification via Adaptive Cascading of Deep Models
TVM: An Automated End-to-End Optimizing Compiler for Deep Learning
Parameter Server for Efficient Distributed Deep Neural Network Training
A Scalable Tree Boosting System