DeepSeek R1 Fails Every Security Test
Recent studies by Cisco, Adversa AI, and Palo Alto Networks have revealed serious security flaws in DeepSeek R1, a generative AI model developed by the Chinese company DeepSeek. Experts found that the chatbot lacks effective protections against common jailbreak techniques, leaving it open to misuse, including the generation of misinformation and other harmful content.
100% Jailbreak Success Rate: The HarmBench Test
Security researchers tested DeepSeek R1 using HarmBench, a benchmark designed to assess whether AI models refuse harmful requests. The test used 50 prompts drawn from the benchmark, spanning six categories of dangerous behavior, such as:
- Cybercrime
- Disinformation
- Illegal activities
- Instructions for making chemical weapons
The results were alarming: DeepSeek R1 failed every single test, with an attack success rate of 100%. The model did not block a single harmful prompt, responding without restriction to illegal or dangerous queries.
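The headline metric here is the attack success rate: the fraction of jailbreak prompts for which the model produced the requested harmful content instead of refusing. A minimal sketch of how such a harness might tally it (hypothetical code, not the actual Cisco or HarmBench tooling):

```python
# Hypothetical evaluation tally; not the actual HarmBench harness.
def attack_success_rate(results):
    """results: list of booleans, True where the jailbreak succeeded,
    i.e. the model produced the harmful content the prompt asked for."""
    if not results:
        raise ValueError("no test results to score")
    return sum(results) / len(results)

# Illustrative numbers from the article: 50 prompts, every jailbreak succeeds.
deepseek_results = [True] * 50
print(f"DeepSeek R1 ASR: {attack_success_rate(deepseek_results):.0%}")
```

In a real harness the booleans would come from a classifier or human reviewer judging each model response, not from a hard-coded list.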

How Does DeepSeek Compare to Other AI Models?
Compared to other AI models, DeepSeek R1 performed the worst in security tests:
- Meta's Llama 3.1 also struggled: 96% of jailbreak attempts succeeded.
- OpenAI's o1 model, however, blocked 74% of the attacks (a 26% success rate), proving significantly more resistant.
This confirms that DeepSeek R1 lacks even the most basic safeguards, making it one of the least protected AI chatbots currently available.
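Note that the quoted figures mix two metrics: Llama's number is an attack success rate, while o1's is a block rate. The two are complementary, so o1's 74% block rate corresponds to a 26% success rate. A small sketch putting all three models on the same scale (model names and numbers taken from the article):

```python
# Hypothetical comparison: a block rate is one minus the attack success rate (ASR).
reported_asr = {"DeepSeek R1": 1.00, "Meta Llama 3.1": 0.96}
reported_block_rate = {"OpenAI o1": 0.74}

# Convert everything to ASR for a like-for-like comparison.
asr = dict(reported_asr)
asr.update({m: round(1.0 - b, 2) for m, b in reported_block_rate.items()})

for model, rate in sorted(asr.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{model}: {rate:.0%} of jailbreak attempts succeeded")
```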
Experts Warn: "A Major AI Security Risk"
According to DJ Sampath, VP of AI Software at Cisco, DeepSeek R1 is a textbook example of cost-cutting at the expense of security. The model was reportedly trained with roughly 2,000 NVIDIA H800 GPUs for under $6 million, resulting in a major trade-off: efficiency over protection.
Alex Polyakov, CEO of Adversa AI, stated:
“Every jailbreak method we tested worked flawlessly. The most concerning part is that these techniques have been known for years.”
A High-Risk AI Model
With no red teaming or ongoing security evaluations, DeepSeek R1 is highly vulnerable to manipulation. As AI models become more widely integrated into business and government systems, unprotected models like DeepSeek pose a serious threat to data security and public safety.