What do you think are the risks associated with a singularity in AI advancement, i.e., AI becoming more intelligent than its creators, accompanied by a point of no return?

2.2k views · 4 Comments
Director of IT in IT Services · 5 months ago
The risks of AI singularity include loss of control and unforeseen consequences.
Chief Data Officer in Software · 5 months ago
The risks are profound, potentially existential, and largely ignored.  

I recommend the book 'Our Final Invention' by James Barrat. The perspectives shared in this book made me completely change my views on whether we should seriously be considering AGI as a necessary step in the evolution of AI.
Mission Diplomatic Technology Officer in Government · 5 months ago
Without asking a Gen AI, the risks might include: access to more advanced technologies than we can safely wield; workforce disruptions at a velocity politics can’t govern; a further technological divide, with a mass of digitally illiterate people left behind economically; or an entity seeking energy and organizing information at such ravenous capacity that we become nothing more than materials of a design. Maybe a utopia. Or some blended alchemy of several of the above.

Lots of great dystopian science fiction has accumulated on this topic. D. F. Jones’s 1966 novel “Colossus” deals with a developed AI that, once sentient, recognizes more advanced threats in the solar system. Gulp. And yes, it does end up ruling humanity while trying to defend against an AI-to-AI battle, but the one humans created was less cruel than the other AI already in the system. A great non-fiction counterpart is “Our Final Invention,” which covers in depth the many accumulated risks. Also gulp.

Sophisticated civilizations appear to harmonize around a necessary fear of something. This spurs change. In no chronological order: God, acid rain, witches, monsters under the bed, nuclear apocalypse, environmental catastrophe, the ozone hole, death, aliens, collider black holes, CRISPR… The list is not exhaustive.

Some of these resulted in human collaboration towards observable mitigations. Some were illusory. However, our ability to see trends, collaborate, and change behavior has also assisted in better outcomes and better-connected groups.

We will, I expect, get to see this one unfold at a velocity with no twin in past arms races or past technologies. Everyone wants to build this one first. And it only takes one.
Chief Data Officer in Software · 5 months ago

I am glad to see we both called out 'Our Final Invention' in our responses. When it comes to anything post-singularity, there's one thing I think we can all agree on: we don't know exactly what will happen. And since we're talking about an intelligence far superior to our own, there's no way we *can* know with any certainty what will happen, since the past will in no way help to predict the future. It's all speculation. What that means is that the science fiction may be just as valid as the non-fiction. Either way, the idea that we are in an unstoppable arms race towards a future state whose impacts on humanity, for better or worse, we cannot reasonably model or forecast seems entirely reprehensible to me.

