Has your org taken any steps to build a culture of ethical AI? Do you have dedicated resources or teams responsible for AI ethics?
Director of IT, a month ago
Apologies for the odd formatting. Something changed from the post draft to what was published.
Director of Corporate Development in Energy and Utilities, a month ago
Our approach has been deployment under compliance: users can upload data sets, data privacy is enforced, and, importantly, we capture a feedback loop. All internal information remains internal, and we flag non-relevant information to ensure compliance with organizational values. A RAG system internal to the organization lets us adhere to compliance, with data privacy and security embedded via relevant terms captured during analysis. Every use case is driven by a business case and ROI over a 3-to-5-year span, covering time to market, cost avoidance, business value, and risk avoidance. Real value comes from risk and cost avoidance, then time to market, and lastly competitive advantage.
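The "internal RAG with compliance embedded" idea described above can be sketched as a scrub-before-index step: documents are redacted of flagged terms before they ever enter the retrieval store, so retrieval cannot surface them. This is a minimal stdlib-only illustration; `SENSITIVE_TERMS`, `InternalIndex`, and the keyword-match retrieval are all hypothetical stand-ins for a real vector store and policy list, not the poster's actual system.

```python
import re

# Illustrative policy list; a real deployment would load this
# from a governance-managed configuration.
SENSITIVE_TERMS = {"salary", "ssn", "home address"}

def scrub(text: str) -> str:
    """Redact flagged terms so the index stays compliant."""
    for term in SENSITIVE_TERMS:
        text = re.sub(re.escape(term), "[REDACTED]", text, flags=re.IGNORECASE)
    return text

class InternalIndex:
    """Toy in-memory keyword index standing in for a vector store."""
    def __init__(self):
        self.docs = []

    def add(self, doc: str) -> None:
        # Scrub at ingestion time: the raw text never reaches the index.
        self.docs.append(scrub(doc))

    def retrieve(self, query: str):
        # Naive keyword match in place of embedding similarity search.
        q = query.lower()
        return [d for d in self.docs if any(w in d.lower() for w in q.split())]

index = InternalIndex()
index.add("Vendor contract renewal terms and SSN 123-45-6789")
print(index.retrieve("contract"))
```

The design point is that compliance lives in the ingestion path rather than in the prompt: even a perfectly crafted query can only retrieve already-redacted text.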
Director of IT, 7 days ago
Yes. Full program and dedicated resources. However, it is the responsibility of all employees.
IT Manager in Construction, 5 days ago
We have a governance team for AI but not an appointed person. I am a certified corporate ethics manager, but I don't formally cover this role. It would be better to have a dedicated profile, fully agreed.
VP of IT in Retail, 3 days ago
Yes, we've taken significant steps to establish a culture of ethical AI. We've implemented a "Responsible AI Pledge" that outlines our commitment to transparency, security, privacy, fairness, accountability, and customer-centricity in our AI practices. We have dedicated resources and teams specifically focused on AI ethics to ensure that our AI systems are developed and used responsibly, mitigating potential biases and ensuring that they align with our values.
1. Data Privacy:
   1. User Consent: Obtain explicit consent for the collection and use of personal data
   2. Data Security: Implement robust encryption and anonymization techniques to protect user data
   3. Control and Transparency: Provide users with control over their data and transparency about how it's used
2. Ethical Boundaries:
   1. Legal Compliance: Ensure AI solutions comply with all relevant laws and regulations
   2. Moral Considerations: Develop AI with a focus on fairness, accountability, and societal impact
   3. Proactive Ethics: Go beyond legal requirements to address broader ethical implications of AI
3. Bias Mitigation:
   1. Diverse Data Sets: Use diverse and representative data to train AI systems to prevent biases
   2. Continuous Monitoring: Regularly review AI decisions for potential biases and take corrective actions
   3. Inclusive Design: Involve diverse teams in the design and development process to identify and mitigate biases
4. Transparency:
   1. Understandable AI: Make the AI's decision-making processes clear and understandable to users
   2. Open Communication: Inform stakeholders about the AI's purpose, development, and operational procedures
   3. Oversight and Updates: Establish mechanisms for internal and external oversight and regular updates of AI systems
5. Security:
   1. Risk Management: Assess and manage security risks throughout the AI system's lifecycle
   2. Secure Architecture: Design AI systems with security in mind from the ground up
   3. Incident Response: Prepare for and respond to security incidents involving AI systems
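The "Continuous Monitoring" item under Bias Mitigation can be made concrete with a simple demographic-parity check: compare positive-decision rates across groups and flag when the gap exceeds a chosen threshold. This is a minimal stdlib sketch; the group labels, sample data, and `FLAG_THRESHOLD` value are illustrative assumptions, not any organization's actual policy.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Illustrative batch of AI decisions tagged with a (hypothetical) group label.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
gap = parity_gap(decisions)
FLAG_THRESHOLD = 0.2  # assumed review threshold, set by governance
print(f"parity gap = {gap:.2f}, flagged = {gap > FLAG_THRESHOLD}")
```

Run on a schedule over recent decisions, a flagged gap becomes the trigger for the "corrective actions" the list calls for, rather than relying on ad hoc review.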