Malicious LiteLLM Packages on PyPI Inject Infostealer Malware Into Developer Systems
Security researchers have discovered that a threat actor group known as TeamPCP published malicious packages impersonating the popular LiteLLM library on PyPI, the Python Package Index, injecting infostealer malware that activated when developers installed or updated them. LiteLLM is widely used in the AI development community as a unified interface for calling multiple large language model APIs, so the supply chain attack potentially affected thousands of development environments at companies building AI applications. The incident is one of the most significant supply chain attacks to date against the AI development ecosystem.
Attack Methodology
TeamPCP’s attack was sophisticated in its execution. The group registered packages with names extremely similar to the legitimate LiteLLM library, exploiting common typos and naming variations that developers might accidentally use when installing packages via pip. Several of the malicious packages were designed to function identically to the legitimate library while quietly installing a background process that harvested API keys, environment variables, SSH keys, and browser-stored credentials from the developer’s machine. The malware transmitted stolen data to command-and-control servers through encrypted channels disguised as normal HTTPS traffic.
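The typo-and-variation pattern described above can be screened for defensively. The following is a minimal sketch, using only Python's standard `difflib`, that flags a requested package name suspiciously close to, but not exactly matching, a known-good allowlist; the allowlist and function name are illustrative, not part of any real tool.

```python
import difflib

# Illustrative allowlist of packages an organization has vetted.
KNOWN_GOOD = {"litellm", "requests", "numpy"}

def typosquat_candidates(name, known=KNOWN_GOOD, cutoff=0.85):
    """Return known-good names that a requested package name nearly
    matches. An exact match is fine; a near-miss (e.g. a dropped or
    doubled letter) is the typosquatting pattern and warrants review."""
    if name in known:
        return []
    return difflib.get_close_matches(name, known, n=3, cutoff=cutoff)
```

A check like this can run in a pre-install hook or CI step, rejecting installs whose names land in the near-miss zone until a human confirms them.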
Scope of Impact
PyPI download statistics indicate that the malicious packages were downloaded approximately 15,000 times before they were identified and removed. However, security researchers note that the actual number of compromised systems may be lower, as many downloads come from automated build systems and CI/CD pipelines that may not have executed the malware’s activation triggers. Nonetheless, the potential exposure is significant, as developers working with LLM APIs typically have access to valuable credentials including OpenAI, Anthropic, and Google Cloud API keys, as well as database connection strings and cloud infrastructure credentials.
The Growing Threat to AI Development Tools
The LiteLLM attack is part of a broader pattern of supply chain attacks targeting AI and machine learning development tools. The AI development ecosystem relies heavily on open-source libraries and package repositories, creating a large and often poorly monitored attack surface. Many AI developers, eager to experiment with new models and frameworks, install packages without thoroughly vetting their authenticity — a behavior that threat actors have learned to exploit. Security researchers have identified over 50 malicious packages targeting AI developers on PyPI in 2026 alone, a dramatic increase from previous years.
Recommendations for AI Developers
In response to the incident, security experts recommend several protective measures for AI developers. These include verifying package names and publishers before installation, using hash verification for all package downloads, implementing dependency scanning in CI/CD pipelines, and using virtual environments to isolate development dependencies from production systems. Organizations should also consider deploying private PyPI mirrors that only allow pre-vetted packages, and rotating all API keys and credentials that may have been exposed in environments where the malicious packages were installed.
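The hash-verification recommendation above can be sketched in a few lines of standard-library Python: compute an artifact's SHA-256 and compare it against a pinned digest, for example one recorded in a lock file. Function names here are illustrative, and in practice pip's own hash-checking mode (`pip install --require-hashes`) does this automatically.

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 65536) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks
    so large wheels and sdists don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded artifact's digest against a pinned hash;
    a mismatch means the file is not the one that was vetted."""
    return file_sha256(path) == expected_sha256.lower()
```

Wiring a check like this into a CI/CD pipeline ensures that a package swapped or tampered with after vetting fails the build rather than reaching developer machines.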