Breaking News

DeepSeek is unsafe for enterprise use, tests reveal

https://ift.tt/mni7fjI

The birth of China’s DeepSeek AI technology clearly sent shockwaves throughout the industry,
with many lauding it as a faster, smarter and cheaper alternative to well-established LLMs.
However, similar to the hype train we saw (and continue to see) for the likes of OpenAI and
ChatGPT’s current and future capabilities, the reality of its prowess lies somewhere between the
dazzling controlled demonstrations and significant dysfunction, especially from a security
perspective.

Recent research by AppSOC revealed critical failures in multiple areas, including susceptibility
to jailbreaking, prompt injection, and other security toxicity, with researchers particularly
disturbed by the ease with which malware and viruses can be created using the tool. This
renders it too risky for business and enterprise use, but that is not going to stop it from being
rolled out, often without the knowledge or approval of enterprise security leadership.

With approximately 76% of developers using or planning to use AI tooling in the software
development process, the well-documented security risks of many AI models should be a high
priority to actively mitigate against, and DeepSeek’s high accessibility and rapid adoption
positions it a challenging potential threat vector. However, the right safeguards and guidelines
can take the security sting out of its tail, long-term.

DeepSeek: The Ideal Pair Programming Partner?

One of the first impressive use cases for DeepSeek was its ability to produce quality, functional
code to a standard deemed better than other open-source LLMs via its proprietary DeepSeek
Coder tool. Data from DeepSeek Coder’s GitHub page states:

“We evaluate DeepSeek Coder on various coding-related benchmarks. The result shows that
DeepSeek-Coder-Base-33B significantly outperforms existing open-source code LLMs.”

The extensive test results on the page offer tangible evidence that DeepSeek Coder is a solid
option against competitor LLMs, but how does it perform in a real development environment?
ZDNet’s David Gewirtz ran several coding tests with DeepSeek V3 and R1, with decidedly
mixed results, including outright failures and verbose code output. While there is a promising
trajectory, it would appear to be quite far from the seamless experience offered in many curated
demonstrations.

And we have barely touched on secure coding, as yet. Cybersecurity firms have already
uncovered that the technology has backdoors that send user information directly to servers
owned by the Chinese government, indicating that it is a significant risk to national security. In
addition to a penchant for creating malware and weakness in the face of jailbreaking attempts,
DeepSeek is said to contain outmoded cryptography, leaving it vulnerable to sensitive data
exposure and SQL injection.

Perhaps we can assume these elements will improve in subsequent updates, but independent
benchmarking from Baxbench, plus a recent research collaboration between academics in
China, Australia and New Zealand reveal that, in general, AI coding assistants produce insecure
code, with Baxbench in particular indicating that no current LLM is ready for code automation
from a security perspective. In any case, it will take security-adept developers to detect the
issues in the first place, not to mention mitigate them.

The issue is, developers will choose whatever AI model will do the job fastest and cheapest.
DeepSeek is functional, and above all, free, for quite powerful features and capabilities. I know
many developers are already using it, and in the absence of regulation or individual security
policies banning the installation of the tool, many more will adopt it, the end result being that
potential backdoors or vulnerabilities will make their way into enterprise codebases.

It cannot be overstated that security-skilled developers leveraging AI will benefit from
supercharged productivity, producing good code at a greater pace and volume. Low-skilled
developers, however, will achieve the same high levels of productivity and volume, but will be
filling repositories with poor, likely exploitable code. Enterprises that do not effectively manage
developer risk will be among the first to suffer.

Shadow AI remains a significant expander of the enterprise attack surface

CISOs are burdened with sprawling, overbearing tech stacks that create even more complexity
in an already complicated enterprise environment. Adding to that burden is the potential for
risky, out-of-policy tools being introduced by individuals who don’t understand the security
impact of their actions.
Wide, uncontrolled adoption – or worse, covert “shadow” use in development teams despite
restrictions – is a recipe for disaster. CISOs need to implement business-appropriate AI
guardrails and approved tools despite weakening or unclear legislation, or face the
consequences of rapid-fire poison into their repositories.
In addition, modern security programs must make developer-driven security a key driving force
of risk and vulnerability reduction, and that means investing in their ongoing security upskilling
as it relates to their role.
Conclusion

The AI space is evolving, seemingly at the speed of light, and while these advancements are
undoubtedly exciting, we as security professionals cannot lose sight of the risk involved in their
implementation at the enterprise level. DeepSeek is taking off across the world, but for most use
cases, it carries unacceptable cyber risk.
Security leaders should consider the following:
● Stringent internal AI policies: Banning AI tools altogether is not the solution, as many
developers will find a way around any restrictions and continue to compromise the
company. Investigate, test, and approve a small suite of AI tooling that can be safely
deployed according to established AI policies. Allow developers with proven security
skills to use AI on specific code repositories, and disallow those who have not been
verified.
● Custom security learning pathways for developers: Software development is
changing, and developers need to know how to navigate vulnerabilities in the languages
and frameworks they actively use, as well as apply working security knowledge to third-
party code, whether it is an external library or generated by an AI coding assistant. If
multi-faceted developer risk management, including continuous learning, is not part of
the enterprise security program, it falls behind.
● Get serious about threat modeling: Most enterprises are still not implementing threat
modeling in a seamless, functional way, and they especially do not involve developers.
This is a great opportunity to pair security-skilled developers (after all, they know their
code best) with their AppSec counterparts for enhanced threat modeling exercises, and
analyzing new AI threat vectors.

The post DeepSeek is unsafe for enterprise use, tests reveal appeared first on SD Times.



Tech Developers

No comments