Has anyone developed a ChatGPT / use-of-AI policy, or do you reference existing P&Ps?
CISO in Software, a year ago
I highly recommend checking out this paper that I and some other community members wrote to help businesses think about how to craft these policies: https://team8.vc/rethink/cyber/a-cisos-guide-generative-ai-and-chatgpt-enterprise-risks/
Chief Information Security Officer in Healthcare and Biotech, a year ago
We have not yet developed the policy, but we have started taking input from various sources.

Co-Founder in Services (non-Government), a year ago
Yes, as part of the "acceptable use policy":

Generative AI Policy for Company-Related Blogging and Social Media

Employees may use generative AI platforms (ChatGPT, Google Bard, and other AI tools) to create content for blogging and social media only after obtaining prior approval from their supervisor or department head. All content created using generative AI platforms must comply with the guidelines and restrictions outlined in this policy, including the prohibitions on revealing confidential or proprietary information, making discriminatory or harassing comments, and attributing personal statements to [the company]. Additionally, employees must ensure that any content created using a generative AI platform is factually accurate and does not misrepresent or harm the image, reputation, or goodwill of [the company] or its employees.
CISO in Software, a year ago
On top of the ethical and acceptable-use considerations, we also added specific guidelines on what information employees can provide to various LLMs (covering ChatGPT and derivatives, and also how to handle GitHub Copilot output). In short: when it comes to ChatGPT, our employees are instructed not to include any company-confidential information in prompts, and whatever output the engine generates should be treated as an input / starting point and never used externally without detailed review and, ideally, edits.
1. Ethical Use: We are committed to upholding ethical standards in the use of AI technologies, ensuring that our AI systems operate within legal frameworks and respect the rights and dignity of individuals.
2. Transparency: We strive to provide clear and understandable explanations regarding the capabilities and limitations of our AI systems, ensuring that users are aware when they are interacting with an AI-powered chatbot.
3. Privacy and Data Protection: We prioritize the protection of user data and privacy, adhering to applicable data protection laws and regulations. We take measures to ensure that user information is handled securely and responsibly, and we are transparent about our data collection, storage, and usage practices.
4. Bias Mitigation: We are committed to addressing and minimizing biases in our AI systems to provide fair and unbiased interactions. We continuously monitor and evaluate our algorithms to mitigate any unintended biases and discriminatory outcomes.
5. User Safety and Well-being: The safety and well-being of our users are paramount. We implement safeguards to prevent the misuse of our AI systems, protect against harmful content, and ensure that users are not subjected to inappropriate or abusive interactions.
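The "no confidential information in prompts" guideline above can also be backed by a lightweight technical control. As a hedged sketch (not any vendor's actual DLP product), the following checks a candidate prompt against a few illustrative patterns before it would be sent to an external LLM; the pattern list and the `check_prompt` helper are assumptions for demonstration, and a real deployment would use a proper DLP tool with patterns tuned to the organization:

```python
import re

# Illustrative patterns a security team might flag before text leaves
# the company for an external LLM such as ChatGPT. These are examples
# only; real policies would use organization-specific patterns.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),  # document markings
    re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"),  # emails
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # key material
]

def check_prompt(text: str) -> list[str]:
    """Return all pattern matches found in a candidate prompt.

    An empty list means nothing obviously sensitive was detected;
    a non-empty list means the prompt should be blocked or reviewed
    by a human before submission.
    """
    hits: list[str] = []
    for pattern in CONFIDENTIAL_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

# Example: this prompt would be flagged before leaving the company.
print(check_prompt("Summarise this CONFIDENTIAL roadmap for me."))
```

A filter like this only catches obvious markings, so it complements rather than replaces the policy's human-approval and review requirements.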