The performance of neural networks improves as more parameters are used. However, model sizes are constrained by the on-device memory available during training and inference. Although techniques such as quantization can alleviate this constraint, they suffer from performance degradation. In this work, we introduce NeuZip, a new weight compression scheme based on the entropy of floating-point numbers in neural networks. With NeuZip, we achieve memory-efficient training and inference without sacrificing performance. Notably, we significantly reduce the memory footprint of training a Llama-3 8B model from 31 GB to less than 16 GB while keeping the training dynamics fully unchanged. In inference, our method reduces memory usage by more than half while maintaining near-lossless performance. Our code is publicly available.
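The premise behind an entropy-based scheme like this is that the exponent bits of trained weights are far from uniformly distributed, so a lossless entropy coder can store them in fewer bits than the raw format uses. A minimal sketch of that observation, using NumPy with synthetic Gaussian weights standing in for a trained model (the distribution parameters here are illustrative assumptions, not values from the paper):

```python
import math
from collections import Counter

import numpy as np

def shannon_entropy(data: bytes) -> float:
    """Empirical Shannon entropy of a byte stream, in bits per symbol."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Synthetic "weights": roughly Gaussian with small magnitude, as is
# typical for trained neural networks (std of 0.02 is an assumption).
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.02, size=100_000).astype(np.float16)

# float16 layout: 1 sign bit, 5 exponent bits, 10 mantissa bits.
bits = weights.view(np.uint16)
exponents = ((bits >> 10) & 0x1F).astype(np.uint8)

h_exp = shannon_entropy(exponents.tobytes())
print(f"exponent entropy: {h_exp:.2f} bits (5 bits stored)")
```

Because the weights concentrate around zero, only a handful of exponent values ever occur, so the measured entropy comes out well below the 5 bits the format allocates; that gap is exactly what a lossless compressor can reclaim.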
BibTeX
@misc{hao2024neuzipmemoryefficienttraininginference,
  title={NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks},
  author={Yongchang Hao and Yanshuai Cao and Lili Mou},
  year={2024},
  eprint={2410.20650},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2410.20650},
}
Related Research
- Unsupervised Event Outlier Detection in Continuous Time. S. Nath, K. Y. C. Lui, and S. Liu. Workshop at Conference on Neural Information Processing Systems (NeurIPS).
- LLM-TS Integrator: Integrating LLM for Enhanced Time Series Modeling. C. Chen, G. Oliveira, H. Sharifi, and T. Sylvain. Workshop at Conference on Neural Information Processing Systems (NeurIPS).
- Inference, Fast and Slow: Reinterpreting VAEs for OOD Detection. S. Huang, J. He, and K. Y. C. Lui. Workshop at Conference on Neural Information Processing Systems (NeurIPS).