Breaking News

Contrast Security adds new feature to help protect against prompt injection in LLMs

https://ift.tt/AHjXdsN

Prompt injection — attacks that involve inserting something malicious into an LLM prompt to get an application to execute unauthorized code — topped the recently released OWASP Top 10 for LLMs. 

According to Contrast, this could result in an LLM outputting incorrect or malicious responses, producing malicious code, circumventing content filters, or leaking sensitive data. Prompt injections can be introduced through any data sources an LLM relies on, such as websites, emails, and documents. 

To help companies protect against this, the company now supports testing LLMs from OpenAI in its application security testing (AST) platform.

It uses runtime security to monitor the behavior of an application, rather than just scanning source code. Any user input that is sent through OpenAI’s API to an LLM triggers the prompt injection test. 

According to the company, this method is fast, easy, and accurate, and can notify developers quickly of any issues. 

“As project lead for the new OWASP Top 10 for LLMs, I can say our group looked deeply at many attack vectors against LLMs. Prompt Injection repeatedly rose to the top of the list in our expert group voting for the most important vulnerability,” said Steve Wilson, chief product officer at Contrast. “Contrast is the first security solution to respond to this new industry standard list by delivering this capability. Organizations can now identify susceptible data flows to their LLMs, providing security with the visibility needed to identify risks and prevent unintended exposure.”

 

The post Contrast Security adds new feature to help protect against prompt injection in LLMs appeared first on SD Times.



Tech Developers

No comments