The performance of neural networks improves as more parameters are used. However, model sizes are constrained by the on-device memory available during training and inference. Although techniques such as quantization can alleviate this constraint, they suffer from performance degradation. In this work, we introduce NeuZip, a new weight compression scheme based on the entropy of floating-point numbers in neural networks. With NeuZip, we achieve memory-efficient training and inference without sacrificing performance. Notably, we significantly reduce the memory footprint of training a Llama-3 8B model from 31GB to less than 16GB, while keeping the training dynamics fully unchanged. In inference, our method reduces memory usage by more than half while maintaining near-lossless performance. Our code is publicly available.
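The abstract's key premise, that floating-point weights are compressible because some of their bits carry little entropy, can be illustrated concretely. The following is a minimal sketch, not the authors' implementation: it splits float32 weights into sign, exponent, and mantissa fields, measures the empirical entropy of the exponent bytes, and compresses them with zlib as a stand-in for a proper entropy coder. The Gaussian matrix is a proxy for a trained weight tensor, and the function names are illustrative.

import zlib
import numpy as np

def split_float32_fields(weights: np.ndarray):
    """Split float32 values into sign, exponent, and mantissa streams."""
    bits = weights.astype(np.float32).view(np.uint32)
    sign = (bits >> 31).astype(np.uint8)               # 1 sign bit
    exponent = ((bits >> 23) & 0xFF).astype(np.uint8)  # 8 exponent bits
    mantissa = bits & 0x007FFFFF                       # 23 mantissa bits
    return sign, exponent, mantissa

def entropy_bits(byte_stream: np.ndarray) -> float:
    """Empirical Shannon entropy (bits per symbol) of a byte stream."""
    flat = byte_stream.ravel()
    counts = np.bincount(flat, minlength=256)
    probs = counts[counts > 0] / flat.size
    return float(-(probs * np.log2(probs)).sum())

# Gaussian weights concentrate around zero, so their exponents cluster
# in a narrow range and carry far fewer than 8 bits of entropy each.
w = (np.random.randn(1024, 1024) * 0.02).astype(np.float32)
_, exponent, _ = split_float32_fields(w)

print(f"exponent entropy: {entropy_bits(exponent):.2f} / 8 bits")
compressed = zlib.compress(exponent.tobytes(), level=9)
print(f"exponent bytes: {exponent.size} -> {len(compressed)} compressed")

Because the transform here is lossless, decompressing recovers the exact original bits, which is consistent with the abstract's claim that training dynamics are fully unchanged when memory savings come from lossless compression of the stored weights.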
BibTeX
@misc{hao2024neuzipmemoryefficienttraininginference,
  title={NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks},
  author={Yongchang Hao and Yanshuai Cao and Lili Mou},
  year={2024},
  eprint={2410.20650},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2410.20650},
}