If AI Thinks, Or Approximates Thinking, Who Are We?

Education & Catastrophe 91

Copilot

The title of this issue of Education & Catastrophe is a line from the book The Age of AI and Our Human Future, co-authored by Henry A. Kissinger, Eric Schmidt and Daniel Huttenlocher. I came across a New York Times article titled A.I. Is Learning What It Means to Be Alive just as I was finishing the book, and it struck me that the Stanford experiment it describes, in which an AI taught itself biology and made discoveries about how cells work, is a great example of both the rewards and the risks of AI proliferation, in particular for humanity's concept of reality.

First, a TLDR of the Stanford experiment. Researchers trained a ChatGPT-like AI on raw data about millions of real cells and their chemical and genetic makeup. It took the AI six weeks to discover the Norn cell, a rare kidney cell that makes the hormone which triggers the production of red blood cells. It took humans 134 years to make the same discovery.
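
As a very rough illustration (not the Stanford team's actual system, whose architecture and data the article does not spell out), the sketch below shows what "training a ChatGPT-like AI on cells" can mean in practice: treat each cell as a sentence whose tokens are its most highly expressed genes, train a small transformer to predict masked genes, and cluster the resulting cell embeddings to surface unusual cell types. All names, sizes and the synthetic data here are invented for illustration.

```python
# Toy "cells as sentences" sketch, NOT the actual Stanford model.
# Each cell is a sequence of gene IDs; a small transformer is trained with
# masked-gene prediction, the same self-supervised recipe behind ChatGPT-style
# language models.
import torch
import torch.nn as nn

NUM_GENES = 1000      # toy vocabulary of gene IDs
GENES_PER_CELL = 32   # "sentence length": top expressed genes per cell
MASK_ID = NUM_GENES   # extra token ID used to hide a gene during training
BATCH = 16


class CellEncoder(nn.Module):
    def __init__(self, dim=64, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(NUM_GENES + 1, dim)  # +1 for the mask token
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        self.head = nn.Linear(dim, NUM_GENES)  # predict the hidden gene ID

    def forward(self, gene_ids):
        h = self.encoder(self.embed(gene_ids))
        return self.head(h), h.mean(dim=1)  # per-gene logits, per-cell embedding


model = CellEncoder()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy training loop on random "cells"; real work would use millions of
# single-cell expression profiles instead of random integers.
for step in range(100):
    cells = torch.randint(0, NUM_GENES, (BATCH, GENES_PER_CELL))
    masked = cells.clone()
    positions = torch.randint(0, GENES_PER_CELL, (BATCH,))
    targets = cells[torch.arange(BATCH), positions]
    masked[torch.arange(BATCH), positions] = MASK_ID  # hide one gene per cell

    logits, cell_embedding = model(masked)
    loss = loss_fn(logits[torch.arange(BATCH), positions], targets)
    optim.zero_grad()
    loss.backward()
    optim.step()

# After training, the per-cell embeddings can be clustered; small, unusual
# clusters are candidates for rare cell types (the way a Norn-like cell
# might surface).
```

The point is not the code itself but the recipe: feed enough raw biological data into a self-supervised model and structure can emerge that nobody explicitly programmed in.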

The software is one of several new A.I.-powered programs, known as foundation models, that are setting their sights on the fundamentals of biology. The models are not simply tidying up the information that biologists are collecting. They are making discoveries about how genes work and how cells develop.

As the models scale up, with ever more laboratory data and computing power, scientists predict that they will start making more profound discoveries. They may reveal secrets about cancer and other diseases. They may figure out recipes for turning one kind of cell into another.

“A vital discovery about biology that otherwise would not have been made by the biologists — I think we’re going to see that at some point,” said Dr. Eric Topol, the director of the Scripps Research Translational Institute.

A.I. Is Learning What It Means to Be Alive, The New York Times

There are a couple of other examples in the book of AI doing things that are beyond human imagination (and possibly human understanding). Google-owned DeepMind's world-beating chess AI programme AlphaZero executed moves humans had not considered at all, to the extent that chess grandmaster Garry Kasparov declared: "chess has been shaken to its roots by AlphaZero." AlphaFold, a programme that uses deep learning to model proteins without requiring human expertise, has more than doubled the accuracy of protein-structure prediction from 40 percent to 85 percent, enabling biologists to revisit old questions they had been unable to answer and to pose new questions about battling pathogens.

In the examples above, we see the benefits of using AI to make discoveries in art (chess) and science (biology), giving rise to potential solutions that create new art forms or cure diseases. Key to AI making discoveries hitherto unknown to humans is its non-human understanding of the world and its ability to bring a concept of reality that diverges from our own to inquiry in art and science.

What role, then, do humans play when AI is opening new horizons and allowing us to navigate domains in ways that weren't previously possible? The temptation is to leave it to AI to discover new solutions, new domains and new frontiers, but as many people are well aware, AI is not perfect and regularly gets things wrong. More importantly, with AI it is easy to see the outcome (AlphaZero making a move that runs counter to everything human players have learnt about chess), but extremely difficult, and often impossible, to understand how and why the AI did what it did.

Confronted with technologies beyond the comprehension of the non-expert, some may be tempted to treat AI's pronouncements as quasi-divine judgements. Especially in the case of AGI (artificial general intelligence), individuals may perceive godlike intelligence - a superhuman way of knowing the world and intuiting its structure and possibilities. But deference would erode the scope and scale of human reason.

Henry A. Kissinger, Eric Schmidt and Daniel Huttenlocher in The Age of AI

For centuries humans have, through reason and faith, made sense of the self, of other people, and of the world. In the process, we have placed ourselves at the center of the universe, shared stories of how the world was created, and written history based on the human experience and our understanding of the world. In addition to reason and faith, there is now a third way by which to know the world.

In an era in which reality can be predicted, approximated, and simulated by an AI that can assess what is relevant to our lives, predict what will come next, and decide what to do, the role of human reason will change. With it, our sense of our individual and societal purposes will change too.

For humans accustomed to agency, centrality, and a monopoly on complex intelligence, AI will challenge self-perception.

Not only will we have to redefine our roles as something other than the sole knower of reality, we will also have to redefine the very reality we thought we were exploring. And even if reality does not mystify us, the emergence of AI may still alter our engagement with it and with one another.

Henry A. Kissinger, Eric Schmidt and Daniel Huttenlocher in The Age of AI

Our understanding of how AI perceives and reacts is inversely correlated with what AI is able to do: the more capable AI becomes, the less we understand about how it arrives at its results. Meanwhile, AI is getting better (at alarming speed) at completing a wide array of cognitive tasks. Experts disagree on how far away we are from artificial general intelligence (AGI), widely defined as the ability of AI to complete any intellectual task humans are capable of, but most experts view AGI as an eventuality. An AI/AGI containment strategy deserves an entire essay (probably even an entire book), but suffice it to say that doing nothing is not a strategy. We need an AI code of ethics to govern how individuals, organisations and nation-states engage with and apply AI. And even that is not enough.

The AI age needs its own Descartes, its own Kant, to explain what is being created and what it will mean for humanity.

Henry A. Kissinger, Eric Schmidt and Daniel Huttenlocher in The Age of AI