What questions are you having to answer from the organization’s leadership team around risk and AI? Can you also share your advice on how to answer these questions?

Enterprise Architect, 6 months ago
I'm getting questions from outside a lot more than from inside my company. Because we're a publicly traded company, any move to bring in generative AI directly involves the legal team. They need to be trained, and that's taking a while; the fine lines of what is and isn't acceptable are still not clear in their minds. The more innovative legal people, however, are picking it up and trying to move as fast as possible. They're asking us to go to the data and security teams first: they want the data and privacy officers to sign off before legal steps in. It's a hand-holding process that needs to be done.
Executive Director of Technology in Healthcare and Biotech, 6 months ago
Most of our concerns are centered around ethics and the use of AI in the healthcare space. Healthcare data can be quite unreliable. For instance, some fields are recorded incorrectly in our system about 20% of the time. If you're using models or generative AI against those fields, you're going to create a biased system. This is a major concern in our space. You have to have some way to clean that data up. Another prime concern in the organization is where the data is going. We run our operations in a private instance of Azure, but there are still questions about that because it's not on-prem.
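The data-quality concern above can be made concrete with a pre-use gate: audit each field's error rate and exclude unreliable fields before they feed a model. This is a minimal illustrative sketch, not this organization's actual process; the field names, error rates, and 5% tolerance are all invented for the example.

```python
# Hypothetical pre-use data-quality gate: before a field is fed to a model,
# compare its audited error rate against a tolerance. All values below are
# illustrative assumptions.

ERROR_RATE_TOLERANCE = 0.05  # exclude fields wrong more than 5% of the time


def usable_fields(audited_error_rates, tolerance=ERROR_RATE_TOLERANCE):
    """Return the fields whose audited error rate is within tolerance."""
    return sorted(
        field
        for field, rate in audited_error_rates.items()
        if rate <= tolerance
    )


# Example audit results (invented): the ~20% field gets filtered out.
audit = {
    "diagnosis_code": 0.02,
    "discharge_disposition": 0.20,  # the kind of field recorded wrong ~20% of the time
    "patient_age": 0.01,
}
print(usable_fields(audit))  # -> ['diagnosis_code', 'patient_age']
```

The point of the sketch is that "clean that data up" starts with measurement: without per-field error rates from an audit sample, there is no principled way to decide which fields are safe to model against.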
Director, Experience Design in Education, 6 months ago
We're generally asked about three things: privacy risk, academic integrity (we're a post-secondary institution), and ethical use. All three are covered in the draft artificial intelligence policy we're currently developing. We've leveraged our records classification to guide what is and isn't appropriate to submit to public large language models, which covers the privacy piece. We maintain academic integrity by requiring instructors to indicate whether generative AI is acceptable for certain assignments and to provide guidance on citing it appropriately. We cover ethical use by referencing our code of conduct policy and directing staff with specific copyright concerns to our Institute for Teaching, Learning, and Technology. We also set the expectation that all AI outputs are reviewed and verified for accuracy before they're used for other purposes.
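The records-classification approach above can be sketched as a simple gate: look up a record's classification level and decide whether it may be submitted to a public LLM. This is a hypothetical illustration; the classification levels and the set of levels treated as acceptable are assumptions, not the institution's actual scheme.

```python
# Illustrative gate mapping a records classification to a public-LLM decision.
# The levels and the allowed set are invented for this example.

PUBLIC_LLM_ALLOWED = {"public", "internal"}  # assumed acceptable levels


def may_submit_to_public_llm(record_classification: str) -> bool:
    """True if a record with this classification may go to a public LLM."""
    return record_classification.strip().lower() in PUBLIC_LLM_ALLOWED


for level in ("Public", "Confidential", "Restricted"):
    print(level, may_submit_to_public_llm(level))
```

In practice the useful part is the lookup itself: because the classification scheme already exists for records management, the policy can reuse it rather than inventing a new sensitivity taxonomy just for AI.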
