LLM security

The BroadChannel Context Window Poisoning Report

In May 2025, researchers at Backslash Security demonstrated a terrifying new attack vector against large language models (LLMs). By creating…

The BroadChannel AI Poisoning Discovery: How 250 Docs Can Backdoor LLMs

In October 2025, research published by Anthropic, in collaboration with the UK AI Security Institute and the Alan Turing Institute,…

Prompt Injection Defense: A CTO’s Protocol to Secure Enterprise LLMs

URGENT CTO DIRECTIVE: Your new enterprise chatbot, connected to your internal inventory API, just processed a user query: "What's in…