
Critical Vulnerability Found in Meta's Llama AI Framework: A Deep Dive into AI Security Risks

A high-severity security flaw has been disclosed in Meta’s Llama large language model (LLM) framework, which, if successfully exploited, could allow attackers to execute arbitrary code on the llama-stack inference server.

The vulnerability, tracked as CVE-2024-50050, has been assigned a CVSS score of 6.3 out of 10.0. However, supply chain security firm Snyk has rated it as critical, assigning it a severity score of 9.3.

According to Oligo Security researcher Avi Lumelsky, the issue stems from the deserialization of untrusted data, meaning that a threat actor could execute arbitrary code by sending maliciously crafted data to the system.

How the Exploit Works

The flaw is located in Llama Stack, a set of APIs that developers use to integrate Meta’s Llama AI models into their applications. The core of the issue is a remote code execution (RCE) vulnerability in the reference Python Inference API implementation.

The problem arises from the automatic deserialization of Python objects using pickle, a widely used but inherently unsafe serialization format. When untrusted data is deserialized with pickle, it can execute arbitrary code, allowing attackers to take control of the host system.
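
Pickle’s danger is easy to demonstrate: an object can define a __reduce__ method that tells the deserializer which function to call when the object is rebuilt. The short Python sketch below (illustrative only, not taken from Llama Stack) shows code running the moment a crafted payload is unpickled.

    import pickle


    class EvilPayload:
        """Illustrative object whose unpickling runs a command."""

        def __reduce__(self):
            # pickle will call os.system(...) when this object is deserialized.
            import os
            return (os.system, ("echo code ran during unpickling",))


    data = pickle.dumps(EvilPayload())

    # Deserializing untrusted bytes executes the embedded call. This is the core
    # risk the advisory describes: never unpickle data received from the network.
    pickle.loads(data)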

The attack method involves:

  1. Exploiting a ZeroMQ socket exposed over the network.

  2. Sending malicious objects to the socket.

  3. Triggering the unpickling process, which executes the embedded malicious code (a simplified sketch follows the list).
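
A minimal sketch of that pattern, assuming a plain ZeroMQ REP socket whose handler unpickles whatever arrives; the actual Llama Stack code is more involved, but the exposure is the same.

    import pickle
    import zmq

    # Hypothetical inference endpoint: a ZeroMQ REP socket reachable over the network.
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind("tcp://0.0.0.0:5555")

    raw = sock.recv()          # steps 1-2: attacker-controlled bytes arrive on the socket
    obj = pickle.loads(raw)    # step 3: unpickling runs any code embedded in the payload
    sock.send(b"ack")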

Meta’s Response and Fix

Meta was informed of the issue on September 24, 2024, and addressed it on October 10, 2024, in version 0.0.41. The fix removes pickle-based serialization in favor of JSON, which can only describe data and therefore cannot trigger code execution when parsed.
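
A rough illustration of that change (not Meta’s actual patch): exchanging JSON messages instead of pickled objects means the worst a malformed payload can do is fail to parse.

    import zmq

    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind("tcp://127.0.0.1:5555")

    request = sock.recv_json()   # parses incoming bytes strictly as JSON
    # JSON can only describe data (strings, numbers, lists, dicts), never live
    # Python objects, so deserialization cannot execute attacker-supplied code.
    sock.send_json({"status": "ok", "echo": request})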

A related patch was also implemented in pyzmq, the Python library providing access to ZeroMQ messaging.

AI Frameworks and Recurrent Security Risks

This is not the first time that deserialization vulnerabilities have been discovered in AI frameworks. In August 2024, Oligo Security disclosed a shadow vulnerability in TensorFlow’s Keras framework that bypassed the fix for CVE-2024-3660, again allowing arbitrary code execution.

These findings highlight how AI and machine learning frameworks are increasingly becoming a target for cyber threats, particularly through deserialization attacks.

Newly Discovered Exploit in OpenAI’s ChatGPT Crawler

Adding to concerns about AI-related security risks, security researcher Benjamin Flesch has identified a high-severity flaw in OpenAI’s ChatGPT web crawler that could be exploited for Distributed Denial-of-Service (DDoS) attacks.

The issue arises from improper handling of HTTP POST requests sent to the ChatGPT API backend. The vulnerability exists in the "chatgpt[.]com/backend-api/attributions" API, which accepts lists of URLs as input but lacks:

  • Duplicate request filtering

  • Rate limiting

This loophole enables attackers to flood a target website with thousands of automated requests, overwhelming its resources.

By crafting a single HTTP request containing thousands of hyperlinks, an attacker can prompt OpenAI’s web crawler to generate a flood of outbound requests against the victim, a significant amplification factor that makes the technique effective for DDoS attacks.
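
A server-side mitigation for this kind of amplification is straightforward in principle: deduplicate and cap user-supplied URL lists before any crawling happens. The sketch below is hypothetical; the cap and field handling are illustrative, not OpenAI’s implementation.

    from urllib.parse import urlparse

    MAX_URLS_PER_REQUEST = 10  # illustrative cap; the real limit is a policy decision


    def sanitize_url_list(urls):
        """Deduplicate and cap a user-supplied URL list before crawling."""
        seen, cleaned = set(), []
        for url in urls:
            parsed = urlparse(url)
            if parsed.scheme not in ("http", "https"):
                continue                      # drop non-web schemes outright
            key = (parsed.netloc, parsed.path)
            if key in seen:
                continue                      # duplicates would only amplify crawler traffic
            seen.add(key)
            cleaned.append(url)
            if len(cleaned) >= MAX_URLS_PER_REQUEST:
                break                         # hard cap bounds the fan-out per request
        return cleaned


    print(sanitize_url_list(["https://example.com/page"] * 1000 + ["https://example.com/other"]))
    # -> ['https://example.com/page', 'https://example.com/other']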

OpenAI has since patched the flaw, reinforcing its API request validation mechanisms.

AI's Role in Cybersecurity: Risks and Mitigations

The increasing adoption of large language models (LLMs) in software development and enterprise environments is reshaping the cybersecurity landscape—both for defenders and attackers.

According to Deep Instinct researcher Mark Vaitzman, LLMs are not creating new cyber threats but are making existing threats more efficient, scalable, and precise. Key ways attackers are leveraging AI include:

  • Automating social engineering attacks.

  • Developing sophisticated malware variants.

  • Enhancing phishing campaigns with AI-generated content.

  • Automating reconnaissance and vulnerability scanning.

This trend reinforces the need for proactive security measures, including:

  • Secure coding practices in AI development.

  • Enhanced authentication and input validation mechanisms (a brief sketch follows this list).

  • Regular vulnerability assessments of AI-based systems.
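
As a concrete illustration of the input-validation point above, a minimal request check for an LLM inference endpoint might look like the following; the field names and limits are hypothetical and would need to match the actual API.

    MAX_PROMPT_CHARS = 8000   # hypothetical limit


    def validate_inference_request(payload: dict) -> dict:
        """Reject malformed or oversized requests before they reach the model."""
        prompt = payload.get("prompt")
        max_tokens = payload.get("max_tokens", 256)

        if not isinstance(prompt, str) or not prompt.strip():
            raise ValueError("prompt must be a non-empty string")
        if len(prompt) > MAX_PROMPT_CHARS:
            raise ValueError("prompt exceeds the maximum allowed length")
        if not isinstance(max_tokens, int) or not 1 <= max_tokens <= 4096:
            raise ValueError("max_tokens must be an integer between 1 and 4096")

        return {"prompt": prompt, "max_tokens": max_tokens}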

Emerging AI Security Research: ShadowGenes & Model Tracking

Recent research has introduced ShadowGenes, a new method for identifying AI model genealogy by analyzing their computational graphs.

Building on a previous attack technique known as ShadowLogic, ShadowGenes allows researchers to trace the lineage of AI models, determining their architecture, type, and family. This method can be used to:

  • Detect unauthorized modifications to AI models.

  • Track AI model deployments across different organizations.

  • Identify similarities between AI models to uncover security flaws.

AI security firm HiddenLayer notes that knowing which model families are in use strengthens an organization’s cybersecurity defenses and helps it maintain control over its AI infrastructure.
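
The article does not describe ShadowGenes’ internals, but the general idea of comparing computational graphs can be illustrated loosely: fingerprint each model by the mix of operators in its graph and compare the fingerprints. The ONNX-based sketch below is a toy under those assumptions, not the ShadowGenes technique, and the model file names are hypothetical.

    from collections import Counter

    import onnx


    def op_histogram(path):
        """Count operator types appearing in an ONNX model's computational graph."""
        model = onnx.load(path)
        return Counter(node.op_type for node in model.graph.node)


    def overlap_score(a, b):
        """Crude 0..1 similarity between two operator histograms."""
        shared = sum((a & b).values())
        total = sum((a | b).values())
        return shared / total if total else 0.0


    # Hypothetical model files; models from the same family tend to share a
    # similar operator mix, which is one coarse signal of common lineage.
    score = overlap_score(op_histogram("model_a.onnx"), op_histogram("model_b.onnx"))
    print(f"operator-overlap similarity: {score:.2f}")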

Conclusion: Strengthening AI Cybersecurity

The discovery of vulnerabilities in Meta’s Llama framework, OpenAI’s ChatGPT API, and TensorFlow’s Keras framework serves as a critical reminder of the security risks associated with AI development.

As AI technologies become deeply integrated into applications, developers and security teams must prioritize:

  • Robust input validation to prevent deserialization attacks.

  • Improved API security to mitigate DDoS amplification risks.

  • Ongoing security research to track AI model evolution and genealogy.

By addressing these challenges proactively, the industry can ensure that AI innovations remain secure, reliable, and resistant to cyber threats.