We've seen AI/GenAI arrive as a capability enhancer to existing products/platforms/apps/etc. To what extent should an organization be proactive and aware of existing or pending "AI" capabilities that otherwise "just arrive" to users? Are organizations now profiling each deployed commercial product/platform/app? And once such capabilities are known, are these commercial products/platforms/apps being configured to disable their AI/GenAI features? Could there be a better way, such as an extended "AI" profile on the FedRAMP marketplace (or equivalent)? Thoughts?
AVP of Information Security · 2 months ago
My thoughts are that organizations are accountable for knowing where their customer data is being transmitted, processed, or stored at all times, and this should include AI and LLM platforms. In certain environments, clients may require approval before data is submitted to such platforms. As to whether there is a better way, there are already frameworks out there for AI, but it seems like most of them are still in draft and not finalized. One example is the NIST AI Risk Management Framework, referenced below: AI Risk Management Framework | NIST. There are some great resources, but it's still very early.
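To make the "approval before submitting data" point concrete, here is a minimal sketch of a deny-by-default gate an organization could place in front of outbound AI/LLM calls. Everything in it is a hypothetical assumption for illustration: the platform names, the classification levels, and the `can_submit` helper are invented, not part of any real product or framework.

```python
# Hypothetical sketch: gate outbound customer data behind a registry of
# approved AI/LLM platforms, denying anything not explicitly approved.
# Platform names and classification levels are illustrative assumptions.

APPROVED_AI_PLATFORMS = {
    # platform -> highest data classification it is approved to receive
    "vendor-llm-a": "internal",
    "vendor-llm-b": "public",
}

CLASSIFICATION_RANK = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}


def can_submit(platform: str, data_classification: str) -> bool:
    """Allow submission only if the platform is approved for this classification."""
    approved_up_to = APPROVED_AI_PLATFORMS.get(platform)
    if approved_up_to is None:
        return False  # unknown platform: deny by default
    return CLASSIFICATION_RANK[data_classification] <= CLASSIFICATION_RANK[approved_up_to]


if __name__ == "__main__":
    print(can_submit("vendor-llm-a", "internal"))      # True
    print(can_submit("vendor-llm-a", "confidential"))  # False
    print(can_submit("unknown-llm", "public"))         # False
```

The design choice here is simply that unknown platforms fail closed; the real control in most organizations would sit in a proxy, DLP tool, or API gateway rather than application code, but the logic is the same.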
Proactive Monitoring: Ensures risks are managed, compliance is maintained, and benefits are optimized.
Profiling Deployed Products: Identifies AI/GenAI integration, unintended functionalities, and impacts on security and user experience.
Disabling AI/GenAI Features: Aligns technology use with policies and regulatory requirements.
Better Approaches (Extended “AI” Profile): Provides a standardized way to evaluate and compare AI/GenAI capabilities, aiding in decision-making (see the sketch below).
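Building on the profiling and extended-profile points above, here is a minimal sketch of what such an "AI capability profile" record might look like, whether kept internally per deployed product or published by a marketplace such as FedRAMP alongside existing authorization data. The `AICapabilityProfile` class and every field name are assumptions made up for illustration; no such schema exists today.

```python
# Hypothetical sketch of an "AI capability profile" an organization might keep
# for each deployed commercial product/platform/app. Field names and example
# values are illustrative assumptions, not an existing schema or real vendor data.

from dataclasses import dataclass, field


@dataclass
class AICapabilityProfile:
    product: str                     # deployed product/platform/app
    vendor: str
    ai_features: list[str]           # shipped AI/GenAI features
    data_sent_to_model: list[str]    # categories of data the features transmit
    model_hosting: str               # "vendor-hosted", "customer-tenant", "on-prem"
    used_for_training: bool          # does the vendor train on customer data?
    admin_can_disable: bool          # can the features be turned off tenant-wide?
    pending_features: list[str] = field(default_factory=list)  # announced, not yet shipped


# Example entry an org might record while reviewing a SaaS suite (values invented):
example = AICapabilityProfile(
    product="Example Collaboration Suite",
    vendor="ExampleVendor",
    ai_features=["meeting summaries", "chat drafting"],
    data_sent_to_model=["meeting transcripts", "chat text"],
    model_hosting="vendor-hosted",
    used_for_training=False,
    admin_can_disable=True,
    pending_features=["email auto-triage"],
)
```

A structured record like this would let an organization answer the original questions quickly: which deployed products have AI features today, which have features pending, where the data goes, and whether the feature can actually be disabled before it "just arrives" to users.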