When using generative AI, has anyone faced bias in the answers provided to your prompts? If yes, please share your experiences.

No — 66%

Yes (please share your experience) — 34%

188 PARTICIPANTS
13.8k views · 2 Upvotes · 6 Comments
Chief Information Officer in IT Services, a year ago
If you have not found bias, you may need to look again. All models are designed by people, and people are unconsciously biased. There are 188 catalogued biases, yet few people read through them to understand what they are. They make good reading: it raises your awareness and improves your models.
Data Manager in Government, a year ago
Just recently I asked Bing Chat to summarise the trending news of the day, which included the unfortunate story of a family of four killed in a fire. Apparently because of the family's ethnic group, the AI hallucinated a connection between the cause of the fire and immigration, even though the article clearly stated there was no such relation. Even worse, the output was written in a sarcastic style, such as you might see on the darker side of the internet. The same results also included completely hallucinated news events that did not happen at all.
IT Analyst in IT Services, a year ago
For example, when you ask which programming language is the best, it gives a biased answer.
Lead AI Architect in IT Services, a year ago
It is impossible to avoid this because (a) the training data comes from many sources; (b) not everyone agrees on what the word 'bias' means or includes, as it is an endlessly moving target; (c) the output is non-deterministic and may contain inadvertent errors or omissions; (d) cues in the prompt may lead to biased results, even unintentionally; and (e) there is no way to test every scenario or outcome in a lab. That doesn't mean industry should not try, but the naive belief that an executive order can somehow eradicate bias from AI is insane. Good luck enforcing this in the Russian troll farms that flood social media with biased misinformation, for example. It is an intractable problem.
Global Intelligent Automation & GenAI Leader in Healthcare and Biotech, a year ago
AI/GenAI pulls from data points, and those data points are normally written by a human. Humans are by nature biased: of the 188 labelled biases, each person is said to carry around 50 at a time.

However, I would say that using AI/GenAI can make humans less biased. When we look to ground the LLM's data, we look to write it better and with intent, and AI/GenAI itself uses word choice to pick less biased or less nebulous words.

In the end, with a bit of effort and some foresight, we could have a pretty decent outcome.
