Hugging Face Hit by Novel Malware Distribution Technique

Malicious models slipped through Hugging Face's security net. The platform has since updated its detection tools and implemented stricter measures.

Researchers have uncovered a novel malware distribution technique targeting Hugging Face, a popular platform for machine learning models. Two malicious models were discovered that abused Pickle file serialization, a Python format that can execute arbitrary code when a file is deserialized.
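The danger of Pickle deserialization comes from the `__reduce__` hook, which lets a pickled object name any callable to invoke at load time. The minimal sketch below (illustrative only, using a harmless `print` in place of a real payload) shows that simply loading a pickle runs attacker-chosen code, and that the call is visible in the opcode stream as a REDUCE instruction:

```python
import pickle
import pickletools

class Demo:
    # __reduce__ tells pickle: "to rebuild this object, call print(...)".
    # A real attack would name os.system or similar instead.
    def __reduce__(self):
        return (print, ("code ran during unpickling",))

blob = pickle.dumps(Demo())
pickle.loads(blob)  # loading the file is enough to execute the call

# The embedded call shows up as a REDUCE opcode, which scanners look for
ops = {op.name for op, _arg, _pos in pickletools.genops(blob)}
print("REDUCE" in ops)
```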

The malicious models were cleverly disguised: stored in PyTorch format but compressed with 7z instead of the expected ZIP container, which let them bypass both PyTorch's default loading function and Picklescan detection. Closer inspection revealed that the models used deliberately broken Pickle files, suggesting they were proof-of-concept uploads for testing a new attack method.
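The broken-Pickle trick works because the Pickle virtual machine executes opcodes sequentially: a payload placed before the point of corruption still runs, while tools that try to parse the whole stream fail. A minimal hand-built sketch of the idea (with a harmless `print` standing in for a real payload):

```python
import pickle
import pickletools

# A hand-built pickle that calls print(...) and then breaks before the
# STOP opcode, mimicking the "broken Pickle file" technique.
evil = (
    b"cbuiltins\nprint\n"      # GLOBAL: push builtins.print
    b"(S'payload executed'\n"  # MARK, then push the argument string
    b"tR"                      # TUPLE + REDUCE: call print("payload executed")
    b"\xff"                    # invalid opcode: the stream is now "broken"
)

load_error = scan_error = None
try:
    pickle.loads(evil)         # the print call runs BEFORE the error
except Exception as e:
    load_error = type(e).__name__

try:
    list(pickletools.genops(evil))  # a whole-stream scan fails here
except Exception as e:
    scan_error = type(e).__name__

print(load_error, scan_error)  # both loading and scanning raise an error
```

Both attempts raise, but only after the embedded call has already executed, which is exactly the gap a scanner that gives up on malformed streams leaves open.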

Hugging Face's Picklescan tool, designed to detect malicious models, has shortcomings in identifying threats in broken Pickle files. It relies on a blacklist of known-dangerous functions, an approach that neither scales nor adapts to new threats. After the discovery, Hugging Face swiftly removed the malicious models and updated Picklescan to address the vulnerability.
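A blacklist-based scanner of this kind can be sketched as an opcode walk that flags deny-listed imports. This is a simplified illustration, not Picklescan's actual implementation; the `DENYLIST` entries and `flag_globals` helper are hypothetical, and real scanners track far more module/function pairs and opcode variants:

```python
import pickletools

# Hypothetical deny-list in the spirit of a blacklist-based scanner
DENYLIST = {"os system", "posix system", "builtins eval", "builtins exec"}

def flag_globals(data: bytes) -> list:
    """Return deny-listed imports referenced by a pickle stream."""
    hits = []
    for op, arg, _pos in pickletools.genops(data):
        # GLOBAL args come back as "module name" joined by a space
        if op.name == "GLOBAL" and arg in DENYLIST:
            hits.append(arg)
    return hits

# Classic protocol-0 payload that imports os.system
payload = b"cos\nsystem\n(S'id'\ntR."
print(flag_globals(payload))
```

The weakness is inherent to the design: any dangerous callable missing from the list, or any stream the parser cannot finish walking, goes undetected.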

The malicious models were ultimately identified through a combination of Hugging Face's own review and third-party scanners. To mitigate future risks, Hugging Face has implemented stricter security measures, including scanning third-party models for malware, unsafe Pickle files, Keras Lambda layers, and exposed secrets. Even so, these models slipped through the initial security net, highlighting the ongoing challenge of protecting against novel cyber threats.
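One simple check such scanning could include is a container-format test: `torch.save` writes a ZIP archive by default, so a model file whose header carries a different signature (such as 7z, as in this attack) is immediately suspicious. A minimal sketch, where `container_kind` is a hypothetical helper and the magic-byte constants are the standard ZIP and 7z signatures:

```python
# Standard file signatures for the two containers involved
ZIP_MAGIC = b"PK\x03\x04"              # what torch.save produces by default
SEVENZ_MAGIC = b"7z\xbc\xaf\x27\x1c"   # the format the attackers used

def container_kind(header: bytes) -> str:
    """Classify a file by its leading magic bytes."""
    if header.startswith(ZIP_MAGIC):
        return "zip"
    if header.startswith(SEVENZ_MAGIC):
        return "7z"
    return "unknown"

print(container_kind(b"PK\x03\x04" + b"\x00" * 4))      # a normal model
print(container_kind(b"7z\xbc\xaf\x27\x1c" + b"\x00"))  # a disguised one
```

Flagging any PyTorch-labeled upload that is not a ZIP archive would have caught this particular disguise, though determined attackers can of course move on to other containers.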
