Protect AI’s Guardian has been added to Hugging Face’s model scanners, providing comprehensive security alerts and deep insights into the security of over 1 million foundational ML models
SEATTLE & BROOKLYN, NY, October 22, 2024–(BUSINESS WIRE)–Protect AI, the leading artificial intelligence (AI) and machine learning (ML) security company, and Hugging Face, the world’s fastest-growing community and most widely used machine learning platform, today announced a partnership to provide security for the Hugging Face Hub, the world’s largest repository of ML models. Protect AI’s Guardian has been added as a scanner to the Hugging Face platform, providing comprehensive security alerts and deep insights into the security of foundational models before use.
The growing democratization of artificial intelligence and machine learning is largely driven by the accessibility of open-source ‘Foundational Models’ on platforms like Hugging Face. Today, the Hugging Face Hub is home to more than a million freely accessible models, used by more than 5 million users. More than 100,000 organizations collaborate privately on hundreds of thousands of private models. These models are critical for powering a wide range of AI applications.
However, this trend also poses security risks, as the open sharing of files on these repositories can lead to the unintentional spread of malicious software among users. Once embedded in a model, invisible malicious code can be executed to steal data and credentials, poison data, and much more. Through its huntr bug bounty community and proprietary research, Protect AI has identified thousands of unique threats in models commonly used in production today.
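To illustrate the class of risk described above (this is a generic, hypothetical sketch, not Protect AI's tooling or a real exploit): many ML model files are distributed as Python pickles, and the pickle format allows an object to specify code that runs the moment the file is deserialized. Here the "payload" is a harmless `eval` of an arithmetic expression, but in a real attack it could be any shell command or credential-stealing call.

```python
import pickle

class PickledPayload:
    """Hypothetical stand-in for a tampered model object.

    __reduce__ tells pickle how to reconstruct the object on load;
    an attacker can abuse it to have pickle call an arbitrary
    function with arbitrary arguments during deserialization.
    """
    def __reduce__(self):
        # On unpickling, pickle invokes eval("6 * 7") instead of
        # rebuilding the object. A malicious file would invoke
        # something like os.system here.
        return (eval, ("6 * 7",))

blob = pickle.dumps(PickledPayload())      # the "model file" on disk
result = pickle.loads(blob)                # payload executes on load
print(result)                              # → 42
```

The key point is that the code runs before the user ever calls the model, which is why scanners that inspect model files prior to loading, rather than sandboxing them at runtime, are valuable.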
The Protect AI – Hugging Face partnership is a proactive response to these increasing AI security risks and is designed to help organizations strike a balance between protecting their AI and enabling speed of innovation. By scanning foundational models with Protect AI’s Guardian, Hugging Face enables the secure and trusted delivery of ML models to the global AI community, fostering a transparent environment where innovation thrives without compromising trust or security.
“Protect AI is committed to helping build a more secure, AI-powered world, and has taken significant steps to secure the AI supply chain by actively contributing to and maintaining open-source security tools, and through our 15,000-member threat hunting research community that identifies and provides remediation guidance for AI vulnerabilities,” said Ian Swanson, CEO and co-founder of Protect AI. “This collaboration helps us further deliver on our promise, and we couldn’t be more excited to work with Hugging Face to help accelerate the safe and trusted delivery of AI models to the global community.”
Protect AI’s Guardian is the industry’s leading model security solution, scanning both internally built and externally acquired models for threats. As part of the Protect AI Security Platform, Guardian offers the most comprehensive model scanning capabilities, supporting an extensive list of model files and formats including TensorFlow, Keras, XGBoost and more. Guardian has been added to the Hugging Face platform, where it continuously scans all models in the Hugging Face repository, allowing users to understand the security posture of a model they are exploring for use. Users who interact with a model are shown its security status and gain deep insights into potentially compromised models, adding a critical layer of security and trust to experimenting with and developing ML models.
“At Hugging Face, we take security seriously. As AI rapidly evolves, new threat vectors seem to emerge every day,” said Julien Chaumond, co-founder of Hugging Face. “We are very impressed with the work Protect AI has done in the community. Combined with Guardian’s scanning capabilities, they were an obvious choice to help our users responsibly experiment with and operationalize AI/ML systems and technologies.”
In addition to seeing the security status of each model within Hugging Face, users can also access a corresponding security report on Protect AI’s Insights DB, an essential educational tool that helps companies not only understand a model’s security and safety, but also gain crucial knowledge about the specific risks associated with detected threats. Protect AI’s Insights DB is continuously updated with exclusive findings from Protect AI’s Threat Research team and its huntr AI/ML bug bounty community.
About Protect AI
Protect AI enables organizations to secure their AI applications with comprehensive AI Security Posture Management (AI-SPM) capabilities, allowing them to effectively see, know and manage their ML environments. The Protect AI Platform provides end-to-end visibility, remediation, control and governance, protecting AI/ML systems from security threats and risks. Protect AI was founded by AI leaders from Amazon and Oracle and is backed by top investors including Acrew Capital, boldstart ventures, Evolution Equity Partners, Knollwood Capital, Pelion Ventures, 01 Advisors, Samsung, StepStone Group and Salesforce Ventures. The company is headquartered in Seattle with offices in Berlin and Bangalore. For more information, visit our website and follow us on LinkedIn and Twitter.
About Hugging Face
Hugging Face is the collaboration platform for the machine learning community. The Hugging Face Hub works as a central place where everyone can share, explore, discover, and experiment with open-source ML. HF enables the next generation of machine learning engineers, scientists and end users to learn, collaborate and share their work to build an open and ethical AI future together. With its rapidly growing community, some of the most widely used open-source ML libraries and tools, and a talented scientific team exploring the boundaries of technology, Hugging Face is at the heart of the AI revolution.
View the source version on businesswire.com: https://www.businesswire.com/news/home/20241022929828/en/
Contacts
Media:
Marc Gendron
Marc Gendron PR for Protect AI
marc@mgpr.net
+1 617-877-7480