Did you see the Center for AI Safety's latest statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Should we really be as concerned about AI-induced human extinction as we are about pandemics and nuclear war?
Director of IT in Healthcare and Biotech, a year ago
TLDR: Not yet, but we need to keep an eye on AI and on how much authority we transfer to it. Relevant data:
1) AI in the Ukraine war being used to identify individual Russian soldiers.
https://www.nationaldefensemagazine.org/articles/2023/3/24/ukraine-a-living-lab-for-ai-warfare
2) Swarm technologies for drones and sixth-generation fighters. Even older gen-4 platforms can be retrofitted.
https://en.wikipedia.org/wiki/Sixth-generation_fighter
https://www.forbes.com/sites/davidhambling/2020/12/11/new-project-will-give-us-mq-9-reaper-drones-artificial-intelligence/
3) AI in radiology, in some studies performing on par with or better than human readers.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6268174/
4) We don't understand how these AI models work internally, even though we built them.
https://www.bbc.com/future/article/20230405-why-ai-is-becoming-impossible-for-humans-to-understand
https://www.standard.co.uk/tech/google-ceo-sundar-pichai-understand-ai-chatbot-bard-b1074589.html
5) Would a computer ever violate orders and disregard high-priority data, the way Stanislav Petrov did in 1983? (See the sketch after this list.)
https://en.wikipedia.org/wiki/Stanislav_Petrov
6) Skynet. The scenario that keeps philosophers awake.
https://en.wikipedia.org/wiki/Skynet_(Terminator)
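To make point 5 concrete, here is a minimal, entirely hypothetical Python sketch. The Alert class and both handler functions are illustrative inventions, not any real system: they just contrast a hard-coded decision rule with the judgment call Petrov actually made.

```python
# Hypothetical sketch: a rule-based early-warning handler does exactly what
# its decision rule says. Petrov's judgment in 1983 -- "five missiles is not
# what a real first strike looks like" -- is context the rule never encoded.

from dataclasses import dataclass

@dataclass
class Alert:
    missiles_detected: int
    sensor_confidence: float  # 0.0 to 1.0

def automated_response(alert: Alert) -> str:
    # The machine follows its rule unconditionally: high-confidence data in,
    # escalation out. There is no branch for "this doesn't feel right".
    if alert.sensor_confidence > 0.9 and alert.missiles_detected > 0:
        return "ESCALATE: report inbound strike"
    return "STAND DOWN"

def petrov_response(alert: Alert) -> str:
    # The human weighs context outside the rule: a genuine first strike
    # would involve hundreds of missiles, not a handful.
    if alert.missiles_detected < 10:
        return "STAND DOWN: judged a false alarm despite the data"
    return automated_response(alert)

alert_1983 = Alert(missiles_detected=5, sensor_confidence=0.95)
print(automated_response(alert_1983))  # ESCALATE: report inbound strike
print(petrov_response(alert_1983))     # STAND DOWN: judged a false alarm despite the data
```

The point of the sketch: the automated rule escalates because its inputs say to, while the human stands down because the data doesn't match the bigger picture. Whatever authority we transfer to AI, that kind of out-of-rule judgment goes with it.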
CIO in Telecommunication, a year ago
It's the humans we need to worry about. Nuclear war is a human invention. "Gain of function" research on "enhancing" viruses - we did that.
I believe the more practical and actionable questions anyone working on AI should ask (and answer) are: What are the implications of our work? How far are we willing to take it? …and then do a case-by-case risk assessment.