If your org’s using any virtual assistants with AI capabilities, are you concerned about indirect prompt injection attacks?
Extremely concerned — it’s a major risk12%
Somewhat concerned — it's a potential risk74%
Mildly concerned — it’s on my radar9%
Not particularly concerned — I doubt we’ll be impacted3%
362 PARTICIPANTS
Sort By:
Oldest
Board Member, Advisor, Executive Coach in Softwarea year ago
What many dont realize is that AI or more accurately ML models themselves are not protected - whether it be a model used as a virtual assistant or an ML model used in a trading platform in the financial industry or an ML model embedded in an application like a CRM or even your security tools. So we should be asking a a much broader question on the risks any ML model poses to our organizations and our customersChief Data Scientist in IT Services2 months ago
This has been a risk for as long as IT systems have been around. I feel like were using the word prompt to talk about Generative AI solutions, but there is alot of solutions that based on Conversational AI. But any solutions that has access to your back end and integration is a risk of attack.