If your org’s using any virtual assistants with AI capabilities, are you concerned about indirect prompt injection attacks?

Extremely concerned — it’s a major risk: 12%

Somewhat concerned — it’s a potential risk: 74%

Mildly concerned — it’s on my radar: 9%

Not particularly concerned — I doubt we’ll be impacted: 3%


362 PARTICIPANTS
Board Member, Advisor, Executive Coach in Software, a year ago:
What many don’t realize is that AI models (or, more accurately, ML models) themselves are not protected, whether it’s a model used as a virtual assistant, an ML model used in a trading platform in the financial industry, or an ML model embedded in an application like a CRM or even your security tools. So we should be asking a much broader question about the risks any ML model poses to our organizations and our customers.
Chief Data Scientist in IT Services, 2 months ago:
This has been a risk for as long as IT systems have been around. I feel like we’re using the word "prompt" to talk about Generative AI solutions, but there are also a lot of solutions based on Conversational AI. Any solution that has access to your back end and integrations is at risk of attack.
