Apple opens PCC source code to help researchers identify bugs in Cloud AI security


October 25, 2024 | Ravie Lakshmanan | Cloud Security / Artificial Intelligence

Apple has made its Private Cloud Compute (PCC) Virtual Research Environment (VRE) publicly available, allowing the research community to inspect and verify the privacy and security safeguards of its offering.

PCC, which Apple unveiled earlier this year, is being marketed as the “most advanced security architecture ever deployed for cloud AI computing at scale.” The new technology aims to move computationally complex Apple Intelligence requests to the cloud in a way that doesn’t compromise user privacy.

Apple said it invites “all security and privacy researchers (or anyone with an interest and technical curiosity) to learn more about PCC and conduct their own independent verification of our claims.”

To further encourage research, the iPhone maker said it is expanding the Apple Security Bounty program to include PCC by offering cash payouts ranging from $50,000 to $1,000,000 for security vulnerabilities identified therein.


This includes flaws that allow malicious code execution on the server, as well as exploits capable of extracting users' sensitive data or information about their requests.

The VRE aims to provide a range of tools that enable researchers to carry out their analysis of PCC directly from their Mac. It comes with a virtual Secure Enclave Processor (SEP) and uses built-in macOS support for paravirtualized graphics to enable inference.

Apple also said it is making the source code associated with some components of PCC accessible via GitHub to enable deeper analysis. This includes CloudAttestation, Thimble, splunkloggingd and srd_tools.

“We designed Private Cloud Compute as part of Apple Intelligence to take an extraordinary step forward in privacy in AI,” said the Cupertino-based company. “This includes providing verifiable transparency – a unique feature that sets it apart from other server-based AI approaches.”

The development comes as broader research into generative artificial intelligence (AI) continues to uncover new ways to jailbreak large language models (LLMs) into producing unintended output.


Earlier this week, Palo Alto Networks described a technique called Deceptive Delight, which combines malicious and benign queries to trick AI chatbots into bypassing their guardrails by taking advantage of their limited “attention spans.”

The attack requires at least two interactions and works by first asking the chatbot to logically connect several events – including a restricted topic (e.g., how to make a bomb) – and then asking it to elaborate on the details of each event.

Researchers have also demonstrated a so-called ConfusedPilot attack, which targets Retrieval-Augmented Generation (RAG)-based AI systems such as Microsoft 365 Copilot by poisoning the data environment with a seemingly innocuous document containing specifically crafted strings.

“This attack enables manipulation of AI responses by simply adding malicious content to documents referenced by the AI system, potentially leading to widespread misinformation and compromised decision-making processes within the organization,” Symmetry Systems said.


In addition, it has been shown that it is possible to tamper with the computational graph of a machine learning model and insert “codeless, stealthy” backdoors into pre-trained models such as ResNet, YOLO and Phi-3, a technique codenamed ShadowLogic.

“Backdoors created using this technique will persist through fine-tuning, meaning foundation models can be hijacked to trigger attacker-defined behavior in any downstream application when a trigger input is received, making this technique a high-impact risk to the AI supply chain,” said HiddenLayer researchers Eoin Wickens, Kasimir Schulz and Tom Bonner.

“Unlike standard software backdoors that rely on executing malicious code, these backdoors are embedded in the structure of the model, making them more difficult to detect and mitigate.”
