In the world of AI security, we have long focused on software-level threats: data poisoning, prompt injection, and API abuse. A breakthrough academic paper published in October 2025 has just rendered that focus dangerously incomplete.
The paper, titled “DeepSteal 2.0: AI Model Weight Extraction via Covert Hardware Channels” (arXiv:2510.00151), details a novel, hardware-level attack that can steal the complete “weights”—the core intellectual property of an AI model—from any device with an AI accelerator.
The attack is not merely theoretical: the paper demonstrates a working proof-of-concept on NVIDIA GPUs, Google TPUs, and custom NPUs. It is silent, invisible to current software-based security tools, and agnostic to the AI model’s architecture. If your organization builds or deploys proprietary AI models, this represents a new, universal threat to your most valuable IP.

The Attack Explained: Stealing an AI’s “Brain” at the Hardware Level
The DeepSteal 2.0 attack is a sophisticated, two-phase operation that combines supply chain compromise with a side-channel attack.
Phase 1: The Hardware Trojan Injection
The attack begins long before the AI model is ever run. A hardware Trojan, a tiny malicious modification to the chip’s circuitry, is inserted into the AI accelerator during manufacturing. This requires either a compromised semiconductor fabrication plant or a malicious actor elsewhere in the hardware supply chain.
Phase 2: Covert Channel Exfiltration
Once the compromised hardware is deployed in a data center or device, the Trojan lies dormant. It does not interfere with the AI’s normal computations, so it generates no performance degradation or error logs.
When the AI model’s weights are loaded into the accelerator’s on-chip memory for inference, the Trojan activates. It accesses this memory and begins to leak the model weights, bit by bit, through a covert channel. This is not a traditional network connection; the Trojan manipulates the power consumption or electromagnetic emissions of the chip to broadcast the data wirelessly.
An attacker with a nearby software-defined radio (within a range of 100-500 meters) can intercept these faint signals. Over time, they can reconstruct the entire set of model weights, effectively stealing the AI’s “brain.”
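To make the covert-channel mechanism concrete, the toy simulation below (purely illustrative, and not the paper’s actual implementation) encodes the bits of a few “weights” as high/low levels in a noisy trace, then recovers them by averaging and thresholding. That is the essence of how a passive receiver reconstructs data from a faint physical signal.

```python
# Toy illustration of a covert channel (not the paper's technique): the "Trojan"
# modulates each weight bit as a high/low level, and the "receiver" averages a
# noisy version of that trace per bit period and thresholds it.
import numpy as np

def transmit(bits: np.ndarray, high=1.0, low=0.0, samples_per_bit=8) -> np.ndarray:
    """Encode bits as a crude on-off-keyed 'power trace'."""
    levels = np.where(bits == 1, high, low)
    return np.repeat(levels, samples_per_bit)

def receive(trace: np.ndarray, samples_per_bit=8, threshold=0.5) -> np.ndarray:
    """Average each bit period and threshold to recover the bits."""
    periods = trace.reshape(-1, samples_per_bit)
    return (periods.mean(axis=1) > threshold).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.random(16).astype(np.float32)              # pretend model weights
    bits = np.unpackbits(np.frombuffer(weights.tobytes(), dtype=np.uint8))
    noisy_trace = transmit(bits) + rng.normal(0.0, 0.1, bits.size * 8)
    recovered = receive(noisy_trace)
    leaked = np.frombuffer(np.packbits(recovered).tobytes(), dtype=np.float32)
    print("exact recovery:", np.array_equal(leaked, weights))
```

Real-world recovery from power or electromagnetic emissions is far harder (synchronization, error correction, much lower signal-to-noise), but the principle is the same: the data leaves the chip without a single network connection ever being made.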
| Aspect | Traditional Model Theft | DeepSteal 2.0 (Hardware Trojan) |
|---|---|---|
| Method | Repeated API queries to infer model behavior. | Passive listening to a covert hardware channel. |
| Detection | Detectable via API logs (high query volume, strange inputs). | Undetectable by software-based security tools. |
| Impact on Performance | Can degrade service performance. | None. The AI operates normally during the theft. |
| Prerequisites | API access, significant compute budget. | Supply chain compromise, physical proximity. |
This is a revolutionary attack because it bypasses all existing software-level defenses. Your firewalls, intrusion detection systems, and API gateways are completely blind to it.
Expert Quote: “We’ve been guarding the front door with API security while attackers have been building a secret tunnel directly into the vault. The DeepSteal paper proves that without a secure hardware foundation, all of our AI software security is just a comforting illusion.”
The Multi-Billion Dollar Impact on the AI Industry
The economic and strategic implications of this attack are staggering. The most valuable asset of any AI company is the weights of its proprietary models, which can be worth hundreds of millions or even billions of dollars.
Who is at risk?
- AI Model Developers (OpenAI, Google, Anthropic): Their foundational models are the primary targets.
- Cloud Providers (AWS, Azure, GCP): Their massive data centers are filled with AI accelerators. A widespread supply chain compromise could mean that thousands of their chips are Trojaned, allowing attackers to steal customer models running on their cloud.
- Enterprise AI Users: Any company fine-tuning a model on its proprietary data is at risk. This includes:
  - Financial Services: Stealing a high-frequency trading model.
  - Healthcare: Stealing a diagnostic AI trained on sensitive patient data.
  - Autonomous Vehicles: Stealing the core perception and driving models.
This is not just a threat to a single company; it’s a systemic risk to the entire AI ecosystem. State-sponsored actors with the ability to influence the semiconductor supply chain are in the strongest position to execute this attack at scale. For more on the landscape of state-sponsored threats, see our guide on AI Cybersecurity Defense Strategies.
A Framework for Defense: From Faraday Cages to Formal Verification
Defending against a hardware-level threat requires a new, multi-layered approach.
Immediate Mitigation (Short-Term)
- Physical & Environmental Security: The most immediate defense is to block the covert channel. This means placing critical AI servers inside Faraday cages to block electromagnetic emissions. Data centers must also implement strict physical access controls and sweep for unauthorized wireless receivers.
- Trusted Hardware Sourcing: Immediately audit your hardware supply chain. Do you know where your GPUs and TPUs were manufactured? Prioritize hardware from trusted, verifiable sources. This is a critical component of third-party cyber risk management.
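As a minimal illustration of what one automated piece of such an audit could look like, the sketch below flags accelerators whose vendor or fabrication site is not on an approved list. The inventory fields, vendor names, and policy here are hypothetical placeholders, not a real attestation scheme.

```python
# Hypothetical sketch: flag accelerators whose manufacturer or fab site is not
# on an approved list. Inventory format and policy are illustrative only.
from dataclasses import dataclass

APPROVED_VENDORS = {"VendorA", "VendorB"}          # assumption: your vetted suppliers
APPROVED_FABS = {"Fab-US-01", "Fab-TW-03"}         # assumption: audited fab sites

@dataclass
class Accelerator:
    serial: str
    vendor: str
    fab_site: str

def audit(inventory: list[Accelerator]) -> list[Accelerator]:
    """Return devices that fail the provenance policy and need manual review."""
    return [d for d in inventory
            if d.vendor not in APPROVED_VENDORS or d.fab_site not in APPROVED_FABS]

if __name__ == "__main__":
    fleet = [
        Accelerator("GPU-0001", "VendorA", "Fab-US-01"),
        Accelerator("GPU-0002", "VendorC", "Fab-UNKNOWN"),
    ]
    for device in audit(fleet):
        print(f"REVIEW: {device.serial} ({device.vendor}, {device.fab_site})")
```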
Architectural Defenses (Medium-Term)
- Model Encryption: While the Trojan can access on-chip memory, encrypting the model weights at rest and only decrypting them within a secure enclave can raise the bar for the attacker (see the sketch after this list).
- Defensive Mutation: Introduce a small amount of random “noise” into the model weights. This has a negligible impact on performance but can corrupt the data exfiltrated by the Trojan, making it useless to the attacker. This is a form of adversarial ML defense.
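A minimal sketch of both ideas follows, assuming the weights are held as a single NumPy array and using the third-party `cryptography` package for AES-GCM; key management, enclave integration, and the noise level are deliberately simplified.

```python
# Sketch: store model weights encrypted at rest, then decrypt and apply a small
# defensive "mutation" (Gaussian noise) at load time. Hypothetical example only.
import os
import numpy as np
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_weights(weights: np.ndarray, key: bytes) -> bytes:
    """Serialize and encrypt the weights; returns nonce || ciphertext."""
    nonce = os.urandom(12)  # 96-bit nonce, unique per encryption
    ciphertext = AESGCM(key).encrypt(nonce, weights.astype(np.float32).tobytes(), None)
    return nonce + ciphertext

def load_weights(blob: bytes, key: bytes, shape, noise_std: float = 1e-4) -> np.ndarray:
    """Decrypt the weights and add small Gaussian noise (defensive mutation)."""
    nonce, ciphertext = blob[:12], blob[12:]
    plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
    weights = np.frombuffer(plaintext, dtype=np.float32).reshape(shape)
    # Small perturbation: negligible effect on accuracy, but a copy exfiltrated
    # bit by bit no longer matches the pristine weights stored on disk.
    rng = np.random.default_rng()
    return weights + rng.normal(0.0, noise_std, size=weights.shape).astype(np.float32)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)   # in practice, held in a KMS or enclave
    original = np.random.rand(4, 4).astype(np.float32)
    blob = encrypt_weights(original, key)
    restored = load_weights(blob, key, original.shape)
    print("max deviation:", np.abs(restored - original).max())
```

In practice the decryption key would live in a key-management service or secure enclave rather than in application memory, and the noise level would be tuned against a validation set so that the accuracy impact stays negligible.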
Industry-Wide Solutions (Long-Term)
- Hardware Security Verification: The semiconductor industry must adopt rigorous security standards, including formal verification of chip designs to detect malicious logic before manufacturing.
- Supply Chain Provenance: We need a “birth certificate” for every chip, tracking its journey from the design phase to deployment.
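A toy sketch of the provenance idea appears below: each custody event is appended as a hash-chained record, so any later tampering with the history is detectable. The field names and lifecycle steps are illustrative assumptions, and a production system would add hardware-backed digital signatures at each step rather than relying on hashes alone.

```python
# Hypothetical "birth certificate" for a chip: a hash-chained record of custody
# events. Field names and steps are illustrative only.
import hashlib
import json
import time

def append_step(chain: list[dict], actor: str, event: str) -> list[dict]:
    """Append a custody record whose hash covers the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"actor": actor, "event": event, "time": time.time(), "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return chain + [record]

def verify(chain: list[dict]) -> bool:
    """Recompute every hash and check each record points at its predecessor."""
    prev = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if record["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != record["hash"]:
            return False
        prev = record["hash"]
    return True

if __name__ == "__main__":
    chain: list[dict] = []
    chain = append_step(chain, "FabCo", "wafer fabricated, die id 42")
    chain = append_step(chain, "OSAT Inc", "packaged and tested")
    chain = append_step(chain, "CloudCo", "installed in rack A-17")
    print("provenance intact:", verify(chain))
```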
Conclusion: The Hardware-Software Security Contract is Broken
The DeepSteal 2.0 paper proves that the implicit trust we place in our hardware is no longer viable in the age of AI. The assumption that our chips will execute our code faithfully has been shattered.
This means that AI model protection is no longer just a software problem. It is a full-stack problem, from the silicon in the fab to the API call in the cloud. Every CTO and CISO must now operate under the assumption that their AI models could be stolen at the hardware level.
Organizations must begin a radical re-evaluation of their AI security posture, investing in physical security, hardware provenance, and new defensive techniques. The failure to do so will put their most valuable intellectual property at risk of being silently stolen by an invisible adversary. For guidance on building a comprehensive security program, refer to our AI Governance Policy Framework Guide.
The BC Threat Intelligence Group