What has been your experience with third-party solutions adding GenAI capabilities to their offerings? Have you had to re-negotiate your agreements in order to safeguard data that you do not feel is appropriate for use with GenAI technology?
Data Science & AI Expert in Miscellaneous, 4 months ago
The short answer is yes, although it depends on the case. Adding such features to an existing third-party app can make it non-compliant with existing governance policies and require its deactivation until the matter is addressed.

Founder, CEO in Services (non-Government), 4 months ago
As an organization, our current position is to restrict explicit GenAI capabilities or add-ons due to security risks. Our enterprise security team continually assesses and evolves our policies to balance risk mitigation against the productivity these tools can deliver, and I anticipate that our current stance of limiting them may change in the future.

Implicit AI tools, such as those integrated into pre-approved enterprise applications (for example, Microsoft Teams' AI note-taking feature), are generally acceptable. However, introducing entirely new applications warrants careful consideration.
It's a good time to ask questions like:
What data does the third party have access to?
What data is kept by the third party, how long is it kept, and who has access to it?
What terms govern the third party's use of that data, and what consequences are there for misuse or data breaches?
In most cases, we should get clear written assurances that data is stored only to deliver the functionality, is deleted once that purpose is complete, is not transferred to another company in any form, and is not used for model training, even if anonymized. If a vendor can't commit to that basic framework, it's time to plan a migration to an alternative.