When using generative AI, has anyone faced bias in the answers provided to your prompts? If yes, please share your experiences.
No: 66%
Yes (please share your experience): 34%
188 participants
Chief Information Officer in IT Services, a year ago
If you have not found bias, then you may need to look again. All models are designed by people, who are unconsciously biased. There are 188 documented biases, and few people read about them to understand what they are. This is good reading, as it raises your awareness and improves your models.

Data Manager in Government, a year ago
Just recently I asked Bing Chat to summarise the trending news of the day, among which was unfortunate news of a family of four being killed in a fire. Apparently because of the ethnic group of the family, the AI hallucinated conclusions of a connection between the cause of the fire and immigration, even though it was clearly stated in the news that there was no such relation. Even worse, the output was generated in a sarcastic style, such as you might see on the darker side of the internet. The same results also included completely hallucinated, imaginary news events that did not happen at all.

IT Analyst in IT Services, a year ago
For example, when you ask which programming language is the best, it gives a biased answer.

Lead AI Architect in IT Services, a year ago
It is impossible to avoid this because (a) the training data comes from so many sources, (b) not everyone will agree what the word 'bias' means or includes, as it is an endlessly moving target, (c) the output is non-deterministic and may contain inadvertent errors or omissions, (d) cues in the prompt may lead to biased results, even if unintentionally, and (e) there is no way to test every scenario or outcome in a lab. That doesn't mean industry should not try, but the naive belief that an executive order can somehow eradicate bias from AI is insane. Good luck enforcing this in the Russian troll farms that flood social media with biased misinformation, for example. It is an intractable problem.

Global Intelligent Automation & GenAI Leader in Healthcare and Biotech, a year ago
AI/GenAI pulls from data points. These data points are normally written by a human, and humans are by nature biased. Of those 188 labeled biases, each human is said to carry about 50 at a time. However, I would say that using AI/GenAI makes humans less biased: when we look to 'ground' the LLM data, we look to write it better and with intent, and AI/GenAI also uses word choice to pick less biased or less nebulous words.
In the end, with a bit of effort and some foresight, we could have a pretty decent outcome.