Can anyone recommend application security best practices for generative AI tools?

Chief Information Security Officer in Software · a year ago
I would say largely the same as for a standard application, with an additional focus on data exposure. When my company started using generative AI, my main concerns were the data ingested by the AI (we are an EU-based company) and the security posture of the generative AI provider itself.
Information and Security Office & Enterprise Data Governance/AI in Finance (non-banking) · a year ago

I agree with the previous commenter that the rules of engagement do not change with AI in the picture; you still have to follow the same DevSecOps rules and apply the same level of due diligence to ensure your code does what it is designed to do. That includes performing the needed threat modeling and design reviews, and keeping transparency and accountability in mind through Privacy by Design and Security by Design.

Deputy CISO · a year ago
There are the following dimensions in my mind:
1) Controls such as:
    a) Input validation (while maintaining the spirit of natural language): you don't want your LLMs to crash or allow privilege escalation.
    b) Ensure relevant privacy conditions are built in, especially when the model stores questions as input for its future learning. While the user will appreciate the results the LLM returns, they may still need their own data anonymized in those results.
    c) Boundary conditions, so that queries or their results don't overwhelm the environment and make the service unavailable.

2) "Intelligence in response", so that the LLM is not fooled into providing responses that might be counterproductive. For example, "how to hack LLMs" may get no result, but a question like "is there a current weakness that the LLM is self-healing?" might, giving away important reconnaissance.

3) The LLM itself, such as its immutable logs, may need to be protected from tampering.
4) Protect the knowledge set from which it currently responds based on incremental context learning, so that it doesn't get poisoned.
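To make 1a) and 1c) concrete, here is a minimal sketch of input hygiene and rate limiting at an LLM gateway. All names and limits (`MAX_PROMPT_CHARS`, the token-bucket parameters) are illustrative assumptions, not a definitive implementation:

```python
import re
import time

MAX_PROMPT_CHARS = 4000  # assumed cap; tune to your model's context window

def validate_prompt(prompt: str) -> str:
    """Basic input hygiene: keep natural language intact while rejecting
    inputs likely to crash or abuse the service."""
    if not prompt or not prompt.strip():
        raise ValueError("empty prompt")
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds size limit")
    # Strip control characters that have no place in natural language
    return re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", prompt)

class TokenBucket:
    """Per-user rate limiter so query volume can't make the service
    unavailable (the 'boundary conditions' in 1c)."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In practice you would enforce these checks in the API layer in front of the model, not in the model itself.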
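For 1b), a toy example of anonymizing a prompt before it is logged or fed back into training. The regex patterns here are hypothetical placeholders; a real deployment would use a proper PII-detection library rather than hand-rolled regexes:

```python
import re

# Illustrative patterns only -- real PII detection needs a dedicated tool.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def anonymize(text: str) -> str:
    """Replace obvious identifiers with labels before storing the prompt."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```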
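One common way to get the tamper-evidence mentioned in 3) is a hash-chained log, where each entry commits to the previous one; altering any earlier record breaks the chain. This is a generic sketch, not tied to any particular LLM platform:

```python
import hashlib
import json

class HashChainedLog:
    """Append-only log: each entry's hash covers the previous entry's
    hash, so tampering with any record invalidates everything after it."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def append(self, record: dict) -> str:
        payload = json.dumps({"prev": self._last_hash, "record": record},
                             sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"prev": self._last_hash,
                             "record": record,
                             "hash": digest})
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        prev = self.GENESIS
        for e in self.entries:
            payload = json.dumps({"prev": prev, "record": e["record"]},
                                 sort_keys=True)
            if e["prev"] != prev or \
               e["hash"] != hashlib.sha256(payload.encode()).hexdigest():
                return False
            prev = e["hash"]
        return True
```

Shipping the chain head to write-once storage (or a separate system) makes even whole-log replacement detectable.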

OWASP has a reference for LLMs: OWASP Top 10 for Large Language Model Applications | OWASP Foundation. Please check it out.

Keen to learn your perspectives when possible.
