What are you doing to prevent shadow AI practices?
CIO · 2 months ago
We are trying to prevent shadow AI practices through policy, but detecting all uses of AI is challenging. Many people in IT are interested in using AI to lighten their workload, become more efficient, or gain an advantage. A lot of people on my team are eager to use AI, and some have even signed up for tools on their own. However, they're not always forthcoming about their usage. We have a strict policy against such experimentation, but enforcement is the issue. When we do find an instance of shadow AI use, we try to take appropriate action. Still, it's fair to say that we don't have a good handle on the shadow use of AI within the organization. It's a real concern, but I don't know that we have a way to detect, stop, or control it.

CIO in Education · 2 months ago
Conceptually, we understand the issue of shadow AI. We know students have been using these generative AI tools, whether we provide them or not, and they don't really want us to know that they're using them. If we want to call that shadow AI, that's fair. I'm not sure there's anything we can do about it technologically at this point. We want to encourage people to use AI where it makes sense and then put the policy or guidelines in place. If you're going to use it academically, then these are the rules: cite your work, tell me where you used the AI, and show your independent thought. For now, this is our approach. We have a group that meets to discuss the use of AI, the direction we want to go, how to encourage use, and how to incorporate it into what we do from an academic perspective. We've made Copilot for 365 available and asked people to give us their use case for it. If it makes sense, we give it to them. But the bigger issue for us is where AI is going to be used, how it's going to be used, and how we, as an organization, will set the ground rules for using it effectively.