2024
August
|
NanoFlow: Towards Optimal Large Language Model Serving Throughput.
Kan Zhu, Yilong Zhao, Liangyu Zhao, Gefei Zuo, Yile Gu, Dedong Xie, Yufei Gao, Qinyu Xu, Tian Tang, Zihao Ye, Keisuke Kamahori, Chien-Yu Lin, Stephanie Wang, Arvind Krishnamurthy, and Baris Kasikci.
arXiv preprint.
|
2024
July
|
Palu: Compressing KV-Cache with Low-Rank Projection.
Chi-Chih Chang, Wei-Cheng Lin, Chien-Yu Lin, Chong-Yan Chen, Yu-Fang Hu, Pei-Shuo Wang, Ning-Chi Huang, Luis Ceze, and Kai-Chiang Wu.
arXiv preprint.
|
2024
January
|
FastSR-NeRF: Improving NeRF Efficiency on Consumer Devices with A Simple Super-Resolution Pipeline.
Chien-Yu Lin, Qichen Fu, Thomas Merth, Karren Yang, and Anurag Ranjan.
WACV 2024.
|
2024
January
|
Fiddler: CPU-GPU Orchestration for Fast Inference of Mixture-of-Experts Models.
Keisuke Kamahori, Yile Gu, Kan Zhu, and Baris Kasikci.
arXiv preprint.
|
2023
October
|
Punica: Multi-Tenant LoRA Serving.
Lequn Chen, Zihao Ye, Yongji Wu, Danyang Zhuo, Luis Ceze, and Arvind Krishnamurthy.
MLSys 2024.
|
2023
October
|
Atom: Low-bit Quantization for Efficient and Accurate LLM Serving.
Yilong Zhao, Chien-Yu Lin, Kan Zhu, Zihao Ye, Lequn Chen, Size Zheng, Luis Ceze, Arvind Krishnamurthy, Tianqi Chen, and Baris Kasikci.
MLSys 2024.
|
2023
October
|
Anticipatory Resource Allocation for ML Training.
Tapan Chugh, Srikanth Kandula, Arvind Krishnamurthy, Ratul Mahajan, and Ishai Menache.
SoCC 2023.
|
2023
March
|
SparseTIR: Composable Abstractions for Sparse Compilation in Deep Learning.
Zihao Ye, Ruihang Lai, Junru Shao, Tianqi Chen, and Luis Ceze.
ASPLOS 2023.
|
2022
October
|
SPIN: An Empirical Evaluation on Sharing Parameters of Isotropic Networks.
Chien-Yu Lin, Anish Prabhu, Thomas Merth, Sachin Mehta, Anurag Ranjan, Maxwell Horton, and Mohammad Rastegari.
ECCV 2022.
|
2022
August
|
SRIFTY: Swift and Thrifty Distributed Neural Network Training on the Cloud.
Liang Luo, Peter West, Pratyush Patel, Arvind Krishnamurthy, and Luis Ceze.
MLSys 2022.
|
2022
August
|
DietCode: Automatic Optimization for Dynamic Tensor Programs.
Bojian Zheng, Ziheng Jiang, Cody Hao Yu, Haichen Shen, Joshua Fromm, Yizhi Liu, Yida Wang, Luis Ceze, Tianqi Chen, and Gennady Pekhimenko.
MLSys 2022.
|
2021
July
|
Exploring the Memorization-Generalization Continuum in Deep Learning.
Ziheng Jiang, Chiyuan Zhang, Kunal Talwar, and Michael C. Mozer.
ICML 2021.
|
2021
June
|
Pure, Low-Level Tensor Program Rewriting via Access Patterns (Representation Pearl).
Gus Henry Smith, Andrew Liu, Steven Lyubomirsky, Scott Davidson, Joseph McMahan, Michael B. Taylor, Luis Ceze, and Zachary Tatlock.
MAPL 2021.
|
2021
June
|
Porcupine: A Synthesizing Compiler for Vectorized Homomorphic Encryption.
Meghan Cowan, Deeksha Dangwal, Armin Alaghi, Caroline Trippel, Vincent T. Lee, and Brandon Reagen.
PLDI 2021.
|
2021
June
|
Reticle: A Virtual Machine for Programming Modern FPGAs.
Luis Vega, Joseph McMahan, Adrian Sampson, Dan Grossman, and Luis Ceze.
PLDI 2021.
|
2021
May
|
Dynamic Tensor Rematerialization.
Marisa Kirisame, Steven Lyubomirsky, Altan Haan, Jennifer Brennan, Mike He, Jared Roesch, Tianqi Chen, and Zachary Tatlock.
ICLR 2021.
|
2021
April
|
Accelerating SpMM Kernel with Cache-First Edge Sampling for Graph Neural Networks.
Chien-Yu Lin, Liang Luo, and Luis Ceze.
arXiv preprint.
|
2021
April
|
Nimble: Efficiently Compiling Dynamic Neural Networks for Model Inference.
Haichen Shen, Jared Roesch, Zhi Chen, Wei Chen, Yong Wu, Mu Li, Vin Sharma, Zachary Tatlock, and Yida Wang.
MLSys 2021.
|
2021
March
|
Automated Backend-Aware Post-Training Quantization.
Ziheng Jiang, Animesh Jain, Andrew Liu, Josh Fromm, Chengqian Ma, Tianqi Chen, and Luis Ceze.
arXiv preprint.
|
2020
November
|
Srift: Swift and Thrift Cloud-Based Distributed Training.
Liang Luo, Peter West, Arvind Krishnamurthy, and Luis Ceze.
arXiv preprint.
|
2020
May
|
LastLayer: Toward Hardware and Software Continuous Integration.
Luis Vega, Jared Roesch, Joseph McMahan, and Luis Ceze.
IEEE Micro.
|
2020
March
|
PLink: Discovering and Exploiting Locality for Accelerated Distributed Training on the Public Cloud.
Liang Luo, Peter West, Jacob Nelson, Arvind Krishnamurthy, and Luis Ceze.
MLSys 2020.
|
2020
March
|
Riptide: Fast End-to-End Binarized Neural Networks.
Josh Fromm, Meghan Cowan, Matthai Philipose, Luis Ceze, and Shwetak Patel.
MLSys 2020.
|
2020
February
|
Automatic Generation of High-Performance Quantized Machine Learning Kernels.
Meghan Cowan, Thierry Moreau, Tianqi Chen, James Bornholt, and Luis Ceze.
CGO 2020.
|
2019
April
|
A Hardware-Software Blueprint for Deep Learning Specialization.
Thierry Moreau, Tianqi Chen, Luis Vega, Jared Roesch, Eddie Yan, Lianmin Zheng, Josh Fromm, Ziheng Jiang, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy.
arXiv preprint.
|
2019
April
|
Relay: A High-Level IR for Deep Learning.
Jared Roesch, Steven Lyubomirsky, Marisa Kirisame, Josh Pollock, Logan Weber, Ziheng Jiang, Tianqi Chen, Thierry Moreau, and Zachary Tatlock.
arXiv preprint.
|
2018
December
|
Learning to Optimize Tensor Programs.
Tianqi Chen, Lianmin Zheng, Eddie Yan, Ziheng Jiang, Thierry Moreau, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy.
NeurIPS 2018.
|
2018
November
|
Automating Generation of Low Precision Deep Learning Operators.
Meghan Cowan, Thierry Moreau, Tianqi Chen, and Luis Ceze.
arXiv preprint.
|
2018
October
|
TVM: An Automated End-to-End Optimizing Compiler for Deep Learning.
Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Meghan Cowan, Haichen Shen, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy.
OSDI 2018.
|
2018
October
|
Parameter Hub: a Rack-Scale Parameter Server for Distributed Deep Neural Network Training.
Liang Luo, Jacob Nelson, Luis Ceze, Amar Phanishayee, and Arvind Krishnamurthy.
SoCC 2018.
|
2018
June
|
Relay: A New IR for Machine Learning Frameworks.
Jared Roesch, Steven Lyubomirsky, Logan Weber, Josh Pollock, Marisa Kirisame, Tianqi Chen, and Zachary Tatlock.
MAPL 2018.
|
2018
February
|
MATIC: Learning Around Errors for Efficient Low-Voltage Neural Network Accelerators.
Sung Kim, Patrick Howe, Thierry Moreau, Armin Alaghi, Luis Ceze, and Visvesh Sathe.
DATE 2018.
|
2018
February
|
Parameter Box: High Performance Parameter Servers for Efficient Distributed Deep Neural Network Training.
Liang Luo, Jacob Nelson, Luis Ceze, Amar Phanishayee, and Arvind Krishnamurthy.
SysML 2018.
|
2017
July
|
Fast Video Classification via Adaptive Cascading of Deep Models.
Haichen Shen, Seungyeop Han, Matthai Philipose, and Arvind Krishnamurthy.
CVPR 2017 (Spotlight).
|
2016
August
|
XGBoost: A Scalable Tree Boosting System.
Tianqi Chen and Carlos Guestrin.
KDD 2016.
|
2016
June
|
MCDNN: An Approximation-Based Execution Framework for Deep Stream Processing Under Resource Constraints.
Seungyeop Han, Haichen Shen, Matthai Philipose, Sharad Agarwal, Alec Wolman, and Arvind Krishnamurthy.
MobiSys 2016.
|
2015
December
|
MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems.
Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang.
LearningSys Workshop at Neural Information Processing Systems 2015.
|
2015
February
|
SNNAP: Approximate Computing on Programmable SoCs via Neural Acceleration.
Thierry Moreau, Mark Wyse, Jacob Nelson, Adrian Sampson, Hadi Esmaeilzadeh, Luis Ceze, and Mark Oskin.
HPCA 2015.
|