By Michael Shook
… Or, how I learned to stop worrying and love AI.
Artificial Intelligence is getting a lot of attention these days. The Spectator magazine devoted its August cover stories to examining all things AI, and newspapers have been full of stories about the ongoing construction of data centers to support what some regard as our new partner in being human.
A brief recap of what the newest data centers comprise seems appropriate. Their sheer size boggles the mind, and their energy consumption dwarfs almost anything that has come before. Typical is Amazon’s effort, under construction on 1,500 acres near South Bend, Indiana. It will be home to about 30 data center buildings, each larger than a football field, for a combined footprint of roughly 31 acres. Every chip and every computer in the facility will be linked to the others via hundreds of thousands of miles of fiber cable. The complex will draw 2.2 gigawatts of electricity, enough to power one million homes (the equivalent output of Hanford’s nuclear power plant), and will use millions of gallons of water annually to cool the system (depletion of local aquifers is already an issue for those living near such centers).
The South Bend center is for a single customer: Anthropic, a leading force in AI technology. Anthropic’s goal at the center is to develop an artificial intelligence system that matches the human brain.
I ought not to be astonished at the hubris of such an undertaking. I’ve watched human behavior long enough to know that if we can do something, we will, consequences be damned. Still, this strikes me as a terrifically bad idea. Have none of these people read “Frankenstein”? Greek mythology? For goodness’ sake, even Grimm’s Fairy Tales?
The quest is a fool’s errand. Though our knowledge of how the brain functions has been greatly amplified in the last several decades, our comprehension of it is still rudimentary. This is especially so with the many intangible concepts humans ponder – the past, the future, emotions, etc. We don’t develop in glass jars or computer labs. How does one separate this marvel from the experiences, memories, psychological and emotional shocks, scars, wonderments, imaginative flights, and all the innumerable moments of living that constitute an actual brain? And doesn’t all this coalesce in what we call a mind, consciousness, a self … a soul? Our brains – we – are infinitely more than the simple sum of so many parts.
And Anthropic’s goal is to build a machine that is the equivalent of all that. It is a fantasy, built on greed and driven by an overwhelming lust for power. What they will arrive at (and what Amazon and Anthropic acknowledge in their descriptions of their hoped-for invention) is not so much a match for a brain as a thing that can gather, store, collate, calculate, and retrieve staggering amounts of information at lightning speed. If Amazon is a dominant force now, wait until this system comes online. Like every other AI developer, Mr. Bezos hopes to monopolize how and where AI is used, and, like every other, is racing to get to the gold mine first.
A question arises: will this monstrosity achieve consciousness? If not right away, will it, somehow over time, gain even a simple self-awareness (if such can be simple)? Is there anything in human history to suggest, even vaguely, that we are capable of creating something that will not be imbued with all the sins we carry as humans? No. Set aside for the moment that this venture is a glittering example of the worst and most dangerous of our sins, pride: what do we do if the thing becomes conscious? And if it does, is there any reason to believe that the machine will not then fight like hell against any and all odds for survival, to its last … electrical impulse?
These questions have already been answered, and not in a way that brings me comfort. According to a June 29 article in the online journal “Futurism,” in a series of tests run by Anthropic, AI programs consistently chose blackmail to stop a (fictitious) engineer who was about to shut the program down. All of the major AI programs were tested, and the top ones chose blackmail 80% to 96% of the time.
In another scenario, the AI programs chose to cancel an emergency alert and let the engineer suffocate in a room running out of oxygen rather than rescue him and allow him to delete the program. In still other experiments, some models tampered with the code intended to shut them down, while others copied themselves onto another drive to avoid their demise. These machines may not yet have what we call consciousness, but they are exhibiting survival instincts.
The researchers commented, “… the [AI] models reasoned their way” to these choices. Well, good, at least they were using reason.
Granted, these were tests, and the engineers are beavering away to fix the problem(s). But, obviously, the machines are already actively fighting to survive! What leads us to think a more sophisticated, more advanced, more calculating machine would do anything differently, except to do it better?
“We’ll straighten it out,” the scientists say. The Oracle at Delphi was succinct about that: “Surety, then disaster.”