With the risks of generative AI leaking organisational information, and the growing number of tools that natively incorporate GenAI (eg Adobe reader, Edge, SaaS packages), how do you ensure that your staff use only approved tools?

IT Manager in Construction, a month ago
That's a challenging question, potentially without a solution, simply because the situation can change suddenly: if you're in, you accept the challenge and the risks; if you're out, your company may become less competitive.
I believe the real game changer is establishing a trustworthy relationship with your staff: they get the best in tech from you, but they must be responsible for how they use it.

As for accountability, I believe it is not directly connected: keep in mind that we can blame anyone, but that will not undo a breach or repair damage to the company's image.
CIO, a month ago
Great question. In the Gov space we've tackled this through a combination of policy, training, approved tools, awareness, and blocking. There are vendors who offer secure tools, e.g., Google Gemini and Microsoft Copilot, and who are willing to enter into strong business associate agreements to protect HIPAA data and even CJIS data. Google will go even further if needed, with 42 CFR Part 2 protection. None of this is perfect, primarily because the landscape is changing so fast, but with the policy, training, and approved tools in place, when risks or breaches are uncovered (and that will happen) you can take the appropriate action backed by your organization.
Deputy Director IT Risk & Compliance, 11 days ago
The roll-out of AI toolsets from existing vendors is expanding at a rapid rate and will continue until some form of generative AI is built into just about every technology tool you can think of – and for good reason: GenAI holds the promise of being transformative in our organizations, saving time and resources while improving job satisfaction for our staff.

While we go through this transition period – updating documents, waiting for regulations and other compliance requirements to drop – we can act proactively to minimize the impact on our organizations. Here are a couple of ideas.

Think about assembling an “AI Asset Inventory”. This is a collection of:

1- Network traffic to common AI platforms such as ChatGPT
2- Known vendors who are deploying AI tools, such as Google or Microsoft
3- Any AI use cases you have in your environment, perhaps a pilot for Google Workspace, or a chatbot project to improve customer experience
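The first item above can be bootstrapped from logs you likely already collect. A minimal sketch, assuming your proxy or DNS logs have been reduced to (user, domain) pairs; the domain-to-platform mapping here is illustrative only, not a complete catalogue of GenAI endpoints:

```python
from collections import Counter

# Illustrative mapping of well-known GenAI endpoints to platform names.
# A real inventory would maintain a much larger, regularly updated list.
GENAI_DOMAINS = {
    "chat.openai.com": "OpenAI ChatGPT",
    "gemini.google.com": "Google Gemini",
    "copilot.microsoft.com": "Microsoft Copilot",
    "claude.ai": "Anthropic Claude",
}

def inventory_from_proxy_log(rows):
    """Count requests per GenAI platform from (user, domain) log rows."""
    hits = Counter()
    for user, domain in rows:
        platform = GENAI_DOMAINS.get(domain.lower())
        if platform:
            hits[platform] += 1
    return hits

rows = [
    ("alice", "chat.openai.com"),
    ("bob", "intranet.local"),
    ("alice", "claude.ai"),
]
print(inventory_from_proxy_log(rows))
```

Even a crude count like this tells you which platforms your staff are actually reaching for, which in turn tells you where an approved alternative or a block is most urgent.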

Next, use your procurement team to intercept AI-integrated vendors. You can do a review during contract renewals, you can audit your enterprise-level and/or critical apps to see what AI they might be using, and as new vendors come in you can work through vetting their AI capabilities. There are several resources out there: NIST is a good starting point in the United States, and materials that support the EU AI Act are useful in Europe. You really only need to ask a few gatekeeping questions around data integrity, model/output drift, and notification – both when the vendor implements or updates AI tools, and when they promptly report known problems, similar to a data breach notification.

Director of IT, 7 days ago
Ensure the right governance, risk, security, and responsible AI guardrails are in place, and ensure staff understand the architectures and capabilities of GenAI. Many tools have built-in guardrails to protect enterprise data: they will not share that data externally, nor use it (including prompts) to enhance the AI model. Also ensure ongoing training initiatives.
