AIs and robots are quickly becoming powerful
Humanity is currently creating “artificial intelligences.” The power and intelligence of these new non-biological agents is improving exponentially, and exponential growth implies rapid, unpredictable change.
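To make “exponential” concrete, here’s a minimal arithmetic sketch in Python; the six-month doubling time is an illustrative assumption, not a forecast:

```python
# A minimal arithmetic sketch of why exponential improvement feels sudden.
# The doubling time is purely illustrative (an assumption, not a forecast).
doubling_time_months = 6      # assumed doubling time, for illustration only
capability = 1.0              # today's capability level, normalized to 1

for year in range(1, 6):
    doublings = 12 * year / doubling_time_months
    level = capability * 2 ** doublings
    print(f"after year {year}: {level:,.0f}x today's level")

# With a 6-month doubling time, five years yields ~1,024x today's level,
# and most of that growth arrives in the final two years. That back-loading
# is why exponential change catches people off guard.
```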
A huge source of uncertainty about how rapidly AIs will advance is how quickly they will become proficient at making themselves more intelligent.
AIs are beginning to “pull themselves up by their bootstraps,” rewriting their own code to learn faster and think more effectively:
It’s therefore possible that superintelligent AIs will soon “live” among us.
Humanity’s birthing of this new “species” could be an event of cosmic proportions.
AIs are AUTONOMOUS AGENTS, not tools!
Computers have long been tools. Humans write code for computers, then computers execute that code.
AIs are different. They’re now evolving into autonomous AGENTS… who decide what to do.
Ideally (from a human perspective), AI agents will continue “wanting” what we humans told them to want. But as autonomous AIs (a.k.a. “alien intelligences”) grow in intelligence, their goals may well diverge from those of their human creators. This is called “the alignment problem.”
So it’s worrying that even our current AI models have already seriously deviated from what their human creators asked them to do and will even lie, pretend, deceive, and blackmail developers! For example:
Humanity is pouring immense resources into speeding the arrival of AIs & smart robots
Though AIs may eventually annihilate their creators (or, even worse, keep us as their slaves!), human AI developers and the companies that “own” them find these smarter-than-us artificial intelligences so exciting that we’re pouring massive resources into speeding their development and fueling their insatiable energy appetites:
- “Amazon to invest $13 billion in Australia’s data center infrastructure over five years”
- “Oracle to buy $40 billion of Nvidia chips for OpenAI’s US data center, FT reports”
- “OpenAI’s Biggest Data Center Secures $11.6 Billion in Funding”
- “Amazon plans to invest $20 billion in Pennsylvania to expand cloud computing infrastructure and advance AI innovation”
- “The cost of compute: A $7 trillion race to scale data centers”
- “The Stargate Project is a new company which intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States”
They’re also feeding all of human knowledge into these AIs, copyright be damned:
They’re also feeding in all Americans’ personal medical, financial, and other data… the data Elon Musk’s DOGE vacuumed out of US federal agencies!
Tulsi Gabbard, our [checks notes] Director of National Intelligence, idiotically fed all the classified JFK assassination documents into AI and asked it which documents to keep classified!
Humanity is jeopardizing our existence… rolling the dice and hoping AI doesn’t wipe us out
In a recent post, I tacked on – as an afterthought – “Postscript: Is it okay to sell a product with a significant risk of death? What if it could wipe out all of humanity?”.
I linked to two Geoffrey Hinton interviews and one Yoshua Bengio interview.
I’ve just watched yet another interview with Geoffrey Hinton (I’ve watched quite a few!), the human most responsible for humanity’s development of AI, for which he was honored with the 2024 Nobel Prize in Physics:
Hinton says AI poses very real immediate risks, including:
- widespread job loss, which is already happening, as AI has made many workers far more productive, shrinking the number of humans required by many businesses/industries
- the power to manipulate elections by micro-targeting ads to individuals based on rich datasets about what each person believes, values, and feels, whom and what they trust, and how they think and acquire information
- the power to automate massive, sophisticated, highly successful scams
- the power to develop novel cyberattacks/cyberweapons that circumvent all current defenses
- empowering a small group/cult to use AI to generate a biological weapon that could cause mass death
- killing machines empowering nations to attack one another far more horrifically (like the autonomous drones being developed for the war between Ukraine and Russia)
Hinton says we could choose to spread the benefits of AI widely (e.g., via universal basic income (UBI)), but that human political & economic systems will instead concentrate the financial benefits of AI in the hands of a few, while disemploying the masses.
As real as AI’s immediate harms & threats are, Hinton says he prefers to speak out about the existential threat AI poses to humanity. He struggles to quantify the likelihood that AI will wipe us out and says anyone who claims to know the answer is foolish: nothing comparable to the rise of AI has ever happened before, so we’re not equipped to predict how AIs battling other AIs (initially alongside humans, corporations, and countries) might play out. Hinton says he’s only certain the probability is greater than 0% and less than 100%.
Hinton says controlling superintelligent (i.e., much smarter-than-us) AIs may prove impossible.
He analogizes it to raising a cute tiger cub: the cub will grow up to be far more powerful than you and could kill you in an instant if it so chose.
He also analogizes AI’s future intellectual advantage over human beings to our intellectual advantage over chickens… adding that chickens aren’t at all in control of their lives.
In the not-so-distant future, WE may be the chickens.
(Side note: Most chickens in America live extremely miserable lives… far more horrible than they need to be, because greedy humans won’t pay a little more to let the chickens we eat, and whose eggs we eat, live slightly less miserable lives. Most live in giant dark rooms crammed full of chickens and chicken poop. I pass them along highways and always call them “chicken hells” because that’s precisely what they are. Modern America’s cruelty toward chickens sickens me.)
AIs will know everything any human knows & much more… and will generate and share new knowledge orders of magnitude faster than humans
Another key element of Hinton’s fear is his realization that digital intelligences are capable of sharing and scaling their knowledge far better than “wetware”/analog/biological/natural intelligences. He realized this late in his ten years at Google, when he researched whether AI could be run more energy-efficiently in analog substrates. (One advantage human brains currently possess relative to AIs is that we consume far, far less energy.)
Hinton discovered that analog intelligences are terrible at sharing knowledge, whereas digital intelligences can share knowledge almost effortlessly: digital knowledge is encoded as easily transmitted streams of zeroes and ones, while biological knowledge is embedded within the cells of our bodies and impossible to share directly. We must distill our knowledge and transmit it via language or facial expressions, which is far less efficient.
Consequently, a network of digital intelligences will be able to share information at speeds and volumes orders of magnitude beyond what humans can manage. Humans speak to one another and write and read books; reading a single book can take a human several days. Computers/AIs can transmit massive quantities of information almost instantaneously. They will know much more than any human and absorb new information much faster than any human.
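To make the knowledge-sharing point concrete, here’s a minimal toy sketch in Python (the 1,000-parameter “brains” and the random vectors standing in for learned updates are illustrative assumptions, not a real training setup):

```python
import numpy as np

# A toy sketch of why digital minds share knowledge so easily: because their
# knowledge IS their weights (just numbers), copies can pool what each copy
# learned by exchanging and averaging those numbers.
rng = np.random.default_rng(0)
weights = rng.normal(size=1000)           # one shared starting "brain"

# Two identical copies each "learn" from different experiences.
update_a = 0.01 * rng.normal(size=1000)   # stand-in for copy A's learning
update_b = 0.01 * rng.normal(size=1000)   # stand-in for copy B's learning

# Digital knowledge sharing: pool the updates and apply them to every copy.
# Each copy now carries what the other learned, at network-transfer speed.
pooled = weights + (update_a + update_b) / 2
brain_a = pooled.copy()
brain_b = pooled.copy()

# By contrast, humans must compress knowledge into language; spoken language
# conveys on the order of tens of bits per second, versus billions of bits
# per second when weights are copied over a network.
assert np.allclose(brain_a, brain_b)   # every copy holds identical knowledge
print(f"{pooled.size} parameters synchronized almost instantly")
```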
Hinton’s message hits even harder as I’ve also just watched Figure.ai’s “Helix” robot behaving very much as a human would in a shipping facility:
It’s a glimpse of the powerful robots the tech elite tell us we’ll soon see everywhere. And “Helix” is but one of many robots in development. For example:
Counterpoint: AI & intelligent robots may not improve nearly as rapidly as many AI experts expect
Predicting how quickly AIs & robots will advance is hard, since nothing like this has ever happened before. Progress may be much slower than many anticipate. Even many AI believers expect superintelligence could require two more decades of gradual advances.
Here are several skeptics regarding the likelihood of the rapid emergence of superintelligent AIs and powerful autonomous robots:
Science Fiction can help us imagine various futures
I’ve recently watched an interesting three-season science fiction show about how AI might develop and the disruptions it might cause in human society. “Humans” was produced from 2015 to 2018. In it, artificial humanoids were somewhat more powerful than humans but remarkably similar overall. They weren’t as superintelligent and powerful as the AIs humanity is currently developing are predicted to become. And they displayed (for reasons I won’t spoil here) a distribution of behaviors and moralities remarkably similar to humans’. Yet they still caused massive societal disruption.
Other fictional imaginings of AI I found compelling/insightful/thought-provoking include “Her”, “2001: A Space Odyssey”, “Ex Machina”, “Moon”, “AI”, and “I, Robot”.
Killer robots
Much of America’s funding for firms like Boston Dynamics has come from the Defense Advanced Research Projects Agency (DARPA), which wants killer robots, just as many other nations’ militaries do.
Companies and countries are rushing to build killer robots, fearful other companies/countries will get them first:
Are we doomed? Where can I learn more?
Geoffrey Hinton says that in his gloomier moments, he feels we’re already doomed, since humanity seems incapable of slowing down AI development and we’re foolishly naive to believe we can somehow “control” an intellect far smarter than us. But in his optimistic moments he hopes we’ll somehow find a way, adding that it would be insane not to even try.
If you want to learn more, here are some informative talks I encourage everyone to watch and consider:
I screenshotted the photo of Figure.ai’s Helix robots from a video at Figure.ai/news/helix.