Paper 2024/1953
Truncation Untangled: Scaling Fixed-Point Arithmetic for Privacy-Preserving Machine Learning to Large Models and Datasets
Abstract
Fixed-Point Arithmetic (FPA) is widely used in Privacy-Preserving Machine Learning (PPML) to efficiently handle decimal values. However, repeated multiplications in FPA can lead to overflow, as the fractional part doubles in size with each multiplication. To address this, truncation is applied post-multiplication to maintain precision. Various truncation schemes based on Secure Multiparty Computation (MPC) exist, but trade-offs between accuracy and efficiency in PPML models and datasets remain underexplored. In this work, we analyze and consolidate different truncation approaches from the MPC literature, focusing on their slack sizes---extra bits required per secret share to ensure correctness. We present improved constructions for these truncation methods in state-of-the-art three-party semi-honest (3PC) and four-party malicious (4PC) settings, achieving up to a threefold reduction in communication and round complexity over existing schemes. Additionally, we introduce optimizations tailored for PPML, such as strategically fusing different neural network layers. This leads to a mixed-truncation scheme that balances truncation costs with accuracy, eliminating communication overhead in the online phase while maintaining the accuracy of plaintext floating-point PyTorch inference for VGG-16 on the ImageNet dataset. We conduct the first large-scale systematic evaluation of PPML inference accuracy across truncation schemes, ring sizes, neural network architectures, and datasets. Our study provides clear guidelines for selecting the optimal truncation scheme for PPML inference. All evaluations are implemented in the open-source HPMPC MPC framework, facilitating future research and adoption.
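For readers unfamiliar with the fixed-point mechanics the abstract refers to, the following minimal plaintext sketch (ordinary C++, not one of the paper's MPC protocols) illustrates why the product of two values with F fractional bits carries 2F fractional bits and why a truncation by F bits follows each multiplication. The constant F, the helper names, and the 64-bit representation are illustrative assumptions, not taken from the paper.

```cpp
// Illustrative sketch (not from the paper): plaintext fixed-point arithmetic
// with F fractional bits on 64-bit integers, showing why each multiplication
// doubles the fractional part and why a truncation (right shift by F) follows.
#include <cstdint>
#include <cstdio>

constexpr unsigned F = 16;  // number of fractional bits (assumed for illustration)

// Encode a real value as a fixed-point integer with F fractional bits.
int64_t encode(double x) { return static_cast<int64_t>(x * (1LL << F)); }

// Decode a fixed-point integer back to a real value.
double decode(int64_t v) { return static_cast<double>(v) / (1LL << F); }

// Multiply two fixed-point values: the raw product carries 2*F fractional
// bits, so we truncate (right shift) by F to restore the original scale.
// __int128 is a GCC/Clang extension used here to hold the full product.
int64_t fp_mul(int64_t a, int64_t b) {
    __int128 prod = static_cast<__int128>(a) * b;  // 2*F fractional bits
    return static_cast<int64_t>(prod >> F);        // back to F fractional bits
}

int main() {
    int64_t a = encode(3.25), b = encode(-1.5);
    printf("3.25 * -1.5 ~= %f\n", decode(fp_mul(a, b)));  // prints ~ -4.875
}
```

In MPC, this right shift must be carried out on secret shares rather than on plaintext values, which is where the slack bits per share and the different truncation schemes analyzed in the paper come into play.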
Metadata
- Available format(s): PDF
- Category: Cryptographic protocols
- Publication info: Preprint.
- Keywords: Fixed-point arithmetic, MPC, PPML, Truncation, Secure Inference
- Contact author(s)
  - christopher harth-kitzerow @ tum de
  - ajith suresh @ tii ae
  - carle @ net in tum de
- History
  - 2025-03-18: last of 2 revisions
  - 2024-12-02: received
- Short URL: https://ia.cr/2024/1953
- License: CC BY
BibTeX
@misc{cryptoeprint:2024/1953,
      author = {Christopher Harth-Kitzerow and Ajith Suresh and Georg Carle},
      title = {Truncation Untangled: Scaling Fixed-Point Arithmetic for Privacy-Preserving Machine Learning to Large Models and Datasets},
      howpublished = {Cryptology {ePrint} Archive, Paper 2024/1953},
      year = {2024},
      url = {https://eprint.iacr.org/2024/1953}
}