Over the past few years, A.I. (artificial intelligence) has become a part of our everyday lives. From customer service chatbots to Roomba vacuums to our phones’ facial recognition feature … you probably come into contact with some form of A.I. every single day.

Which raises the question: As A.I. gets smarter and smarter, what happens when it gets, like, really REALLY smart? Will the robots take over?

And the answer to that question is: probably.

[GIF: Neo from The Matrix saying “Whoa”]
Oh, sorry, did we blow your mind? (Source: Tenor)

It’s all part of what’s called the “technological singularity,” or more commonly, just “the singularity.” And futurists are pretty certain we’re headed towards it.

Ok, but what exactly is the singularity?

The singularity is the point at which A.I. either becomes self-aware or can improve itself faster than we can improve it. Technological growth at that point will be “uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.”

In other words, at some point, A.I. will become much more intelligent than humans and will realize it can build and improve itself far better than we can. Once that happens, we could potentially lose all control of it. NBD.
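Want to see why that feels so different from normal progress? Here’s a tiny toy model in Python (every number in it is made up purely for illustration, not a prediction) of the feedback loop futurists worry about: capability creeps up in a straight line while humans do the upgrading, then compounds once the system can upgrade itself.

```python
# Toy model of "recursive self-improvement" (all numbers invented for illustration).
# While the A.I. is below human level, humans improve it by a fixed amount per year.
# Once it passes human level, it improves itself, so each year's gain scales with
# how capable it already is, and the curve flips from a line to an explosion.

human_level = 100.0           # arbitrary baseline for "human-level" capability
ai = 1.0                      # the A.I. starts far below that
human_driven_gain = 5.0       # capability added per year while humans do the work
self_improvement_rate = 0.5   # fractional gain per year once it upgrades itself

for year in range(1, 31):
    if ai < human_level:
        ai += human_driven_gain            # linear progress: we improve it
    else:
        ai *= 1 + self_improvement_rate    # exponential progress: it improves itself
    print(f"year {year:2d}: capability = {ai:,.0f}")
```

In this made-up scenario, the A.I. crawls along for about 20 “years,” then multiplies itself nearly 60-fold over the following 10. That runaway compounding is basically what the word “singularity” is gesturing at.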

The first recorded mention of the technological singularity comes from mathematician and next-level genius John von Neumann. In a 1958 tribute, his colleague Stanisław Ulam recalled a conversation in which von Neumann described technology as "approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

In 1993, sci-fi writer Vernor Vinge wrote that within 30 years, we'd have "the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Futurist/inventor Ray Kurzweil is also a leading thinker on the concept and takes it a step further: he predicts that by the year 2045, "we will multiply our effective intelligence a billion-fold by merging with the intelligence we have created."

[Image: still from the movie Ex Machina]
Can you guess which one is A.I.? The answer might surprise– wait, no it won't. (Source: Den of Geek)

You mean we will become robots?

Kind of – more like cyborgs.

Kurzweil thinks humans will become so integrated with A.I. that we’ll be part-human, part-machine. Imagine brain-computer interfaces (like Elon Musk’s Neuralink) that could enhance memory, cognitive function and processing power. A.I. could also be used to enhance our physical bodies; for example, with the power to predict the structure of every protein our bodies make, it could be used to create medications that help our bodies heal themselves much more quickly.

Of course, many singularity predictions are less about what humans become and more about what A.I. becomes. Many futurists believe that, as A.I. gets more advanced, it will get so intelligent that humans will hold it back – so it will find ways to improve and replicate itself outside of human control.

And at that point, integrating A.I. into our bodies might be the only way we can still stay dominant as a species. Or survive, or whatever. 🥴

[Cartoon: “They Just Want To Dance”]
The best-case scenario? (Image by Navied Mahdavian via The New Yorker Cartoons)

But what about the “gray goo”?

A lot of the chatter around the singularity has to do with nanotechnology (the manipulation of matter “on a near-atomic scale” to build things). For example, futurist K. Eric Drexler believes we will develop “assemblers”: tiny machines that will be able to produce materials molecule by molecule. This could be a hugely positive advance for fields like medicine, agriculture and food technology.

But it could also lead to a nightmare scenario known as “gray goo.” This is where those assemblers go rogue and focus solely on making more and more copies of themselves. They go about this by devouring all the natural resources on the planet, and we end up with an Earth covered in tiny nanobots (hence: “gray goo”).
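The scary part of that story is just exponential math. Here’s a quick back-of-the-envelope sketch in Python (every number is an assumption picked for illustration, not a real engineering estimate) showing why unchecked doubling gets out of hand so fast:

```python
# "Gray goo" back-of-the-envelope (all figures are rough, assumed values):
# how many doublings would runaway replicators need to outweigh the biosphere?
import math

bot_mass_kg = 1e-15          # assume a femtogram-scale nanobot
biosphere_mass_kg = 1e15     # rough order of magnitude for Earth's living biomass
doubling_time_hours = 1.0    # assume each bot builds one copy of itself per hour

doublings = math.ceil(math.log2(biosphere_mass_kg / bot_mass_kg))
hours = doublings * doubling_time_hours
print(f"doublings needed: {doublings}")              # ~100
print(f"time required: {hours:.0f} hours (~{hours / 24:.0f} days)")
```

The exact numbers don’t matter; the point is that with doubling, the gap between a single invisible speck and the entire biosphere is only about a hundred generations.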

[Image: handmade slime]
Our destiny? (Source: Instructables)

But maybe we’re not doomed?

Since we’re already thinking about (and hopefully preparing for) the singularity, some futurists say we have a decent chance of steering technological development in a way that protects humans. For example: we could program the nanobot assemblers with built-in limitations that prevent the nightmare “gray goo” scenario from happening.
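What would a “built-in limitation” even look like? Nobody knows for real assemblers, but one idea that gets floated a lot is a hard-wired replication counter, so any runaway lineage burns itself out after a few generations. Here’s a minimal toy sketch in Python; the class, the limit and the numbers are all invented purely for illustration.

```python
# Toy safeguard: each assembler carries a generation counter and refuses to
# replicate past a hard-coded limit, so exponential growth can't run forever.
# Everything here is made up for illustration; this is not real nanotech.

MAX_GENERATIONS = 3  # assumed built-in limit

class Assembler:
    def __init__(self, generation=0):
        self.generation = generation

    def replicate(self):
        # Stop copying once the built-in limit is reached.
        if self.generation >= MAX_GENERATIONS:
            return []
        return [Assembler(self.generation + 1), Assembler(self.generation + 1)]

# Start with one assembler; each cycle, every bot is replaced by its copies.
population = [Assembler()]
for cycle in range(1, 7):
    population = [child for bot in population for child in bot.replicate()]
    print(f"cycle {cycle}: {len(population)} active assemblers")
```

With the counter in place, the population dies out after a few cycles instead of doubling forever. Whether anyone could guarantee that kind of limit in a machine smart enough to rewrite itself is, of course, exactly the problem.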

Futurists like Thomas Frey prefer to focus on the tremendous potential that A.I. could offer humanity, even if technology does indeed become smarter than us. With properly programmed limitations, they say, we don’t have to be afraid of technology: instead, we can focus on finding ways to collaborate with it. For example: A.I. could help us solve innumerable problems that humans lack the capacity (or time) to solve. And by integrating aspects of A.I. into our bodies, we will likely be able to extend the human lifespan and make evolutionary leaps that would be highly advantageous for our species.

So either the robots will kill us all, or they’ll make life super-awesome for us. Feeling comforted yet?

____________

Interact With Humans While You Can:

[Image: people playing the deep game at a table]

