With help from Derek Robertson
The following is an excerpt of an essay published this weekend in POLITICO Magazine by Charles Jennings, the former CEO of an AI company partnered with the California Institute of Technology and NASA’s Jet Propulsion Lab. Listen to his conversation with POLITICO Tech’s Steven Overly below.
Nine years ago, in a commercial AI lab affiliated with Caltech, I witnessed something extraordinary.
My colleague Andrej Szenasy was wrapping up a long day’s work training NeuralEye, an AI initially developed for the Mars Rover program, and I was a few cubicles away, plowing through NeuralEye’s test data. “Hey, check this out!” he shouted.
Our lab’s mission was to train NeuralEye to see as humans do, with the ability to recognize things, not just record them as a camera does. NeuralEye was built originally to discern different soil types on Mars, but we were teaching it to identify Earth’s inhabitants: animals, plants and individual humans. We believed AI could greatly improve face recognition, so that it could be used in cybersecurity, replacing passwords.
The first step in teaching NeuralEye to identify people was to get it to match various photos of a single person’s face. Typically, one photo would reside in NeuralEye’s training dataset of 14,000 faces; another — a different photo of the same person — would serve as the “prompt.” When NeuralEye successfully matched these two photos out of the thousands in its dataset, it got the digital equivalent of a doggie treat. In AI, this method is known as reinforcement learning, and with NeuralEye, it was working.
That night in the lab, for fun, Szenasy had prompted NeuralEye with a photo of his son, Zachie. Szenasy’s face was in NeuralEye’s dataset; Zachie’s wasn’t. Zachie, who has Down syndrome, was a sweet 8-year-old. Round face, thick glasses, mop of black hair. Dad was tall and thin, no glasses, blond with a receding hairline. If there was a physical resemblance between them, I couldn’t see it.
Szenasy sat me in front of his computer and again prompted NeuralEye with a photo of Zachie’s face. NeuralEye spun through its cache of stored faces looking for Zachie — and up popped a photo of Szenasy. Without any specific instruction, NeuralEye had somehow picked up a faint family resemblance. Out of those 14,000 faces, it selected Szenasy’s face as the third closest match with Zachie’s.
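The retrieval step Jennings describes, where a probe photo is scored against every face in a gallery and the closest candidates surface, is how most modern face-matching systems rank results, even if NeuralEye’s internals were opaque. Here is a minimal sketch of that step, assuming precomputed face embeddings compared by cosine similarity; the function and the names in the toy gallery are illustrative, not NeuralEye’s actual code.

```python
import numpy as np

def rank_matches(probe_embedding: np.ndarray,
                 gallery: dict[str, np.ndarray],
                 top_k: int = 3) -> list[tuple[str, float]]:
    """Rank gallery faces by cosine similarity to a probe embedding.

    `gallery` maps a person's name to a precomputed face embedding.
    Higher similarity means a closer match. Names are hypothetical.
    """
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    scored = [(name, cosine(probe_embedding, emb)) for name, emb in gallery.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy usage: a probe photo loosely related to one gallery face.
rng = np.random.default_rng(0)
gallery = {name: rng.normal(size=128) for name in ["szenasy", "alice", "bob"]}
probe = gallery["szenasy"] + rng.normal(scale=0.5, size=128)
print(rank_matches(probe, gallery))
```

In a setup like this, a “surprising” match such as Zachie-to-Szenasy would simply mean the two embeddings landed close together for reasons the engineers never specified, which is the unpredictability Jennings goes on to describe.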
The next morning I phoned the AI engineer who’d written NeuralEye’s algorithm while at the Jet Propulsion Lab, home of the Mars Rover program. I asked him how NeuralEye could have seen a connection between Zachie and his father. He waxed philosophical for a few minutes, and then, when pressed, admitted he had no clue.
That’s the thing about AI: Not even the engineers who build this stuff know exactly how it works.
This Zachie episode took place in 2014, a time in AI that now seems prehistoric. Training datasets then had records in the thousands, not hundreds of millions, and large language models like GPT were just a gleam in Sam Altman’s eye. Today, AIs are writing novels, passing the bar exam, piloting warfighter drones. According to a recent University of Texas study widely reported on cable news, an AI in Austin is effectively reading minds: After 16 hours of one-on-one training with a subject inside an fMRI scanner, it can decode patterns of brain activity and suggest what the subject is thinking with surprising accuracy. But in those halcyon AI days nearly a decade ago, we in our small lab were amazed that NeuralEye could do something as basic as spot a link between Szenasy and his son.
While the best AI scientists obviously know a great deal about AI, certain aspects of today’s thinking machines are beyond anyone’s understanding. Scientists cleverly invented the term “black box” to describe the core of an AI’s brain, to avoid having to explain what’s going on inside it. There’s an element of uncertainty — even unknowability — in AI’s most powerful applications. This uncertainty grows as AIs get faster, smarter and more interconnected.
The AI threat is not Hollywood-style killer robots; it’s AIs so fast, smart and efficient that their behavior becomes dangerously unpredictable. As I used to tell potential tech investors, “The one thing we know for certain about AIs is that they will surprise us.”
When an AI pulls a rabbit out of its hat unexpectedly, as NeuralEye did on a small scale with Zachie, it raises the specter of runaway AI — the notion that AI will move beyond human control. Runaway AIs could cause sudden changes in power generation, food and water supply, world financial markets, public health and geopolitics. There is no end to the damage AIs could do if they were to leap ahead of us and start making their own arbitrary decisions — perhaps with nudges from bad actors trying to use AI against us.
Yet AI risk is only half the story. My years of work in AI have convinced me a huge AI dividend awaits if we can somehow muster the political will to align AI with humanity’s best interests.
With so much at stake, it’s time we in the United States got serious about AI policy. We need garden variety federal regulation, sure, but also new models of AI leadership and governance. And we need to consider an idea that would have been unthinkable a year ago.
We need to nationalize key parts of AI.
Read the rest of Charles’ essay here.
China is getting aggressive with its metaverse development, leaving some in the West worried the 3D digital world might follow the same draconian rules as China’s 2D one.
That’s according to a report from POLITICO’s Gian Volpicelli, who writes about a series of proposals from Chinese telecom China Mobile that would institute a social credit-like system in the virtual world. They recommend keeping track of the “natural” and “social” characteristics of the metaverse’s users, including their occupations and visual “identifiable signs,” all of which they say should be stored permanently and shared with law enforcement.
For proponents of a more open, less surveillance-friendly internet, that’s alarming. More alarming for them is that Gian spoke with an expert contributing to the International Telecommunication Union, the United Nations’ telecoms agency that sets global rules for how technology works, who says that Chinese organizations are making a bid for mastery of metaverse rulemaking by filing more proposals than the U.S. or Europe.
“They are trying to play the long game,” they told Gian. “When the metaverse comes around, they’ll say, ‘these are the standards.’” — Derek Robertson
What goes on behind the scenes to make ChatGPT and its ilk seem so… well, human?
John P. Nelson, a postdoctoral fellow in ethics and societal implications of artificial intelligence at the Georgia Institute of Technology, wrote in an op-ed for The Conversation about the role humans play in making large language models work. It’s not just the scraping of human-generated text for training data; it’s also the massive amount of feedback humans provide as those models are trained, a process that can be fraught and even psychologically torturous when reviewers are confronted with hateful or violent text.
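The feedback Nelson describes is typically folded into models through reinforcement learning from human feedback, in which reviewers compare pairs of model responses and a reward model is trained to prefer the response the human chose. A minimal sketch of that preference loss follows, assuming a Bradley-Terry style reward model; the scores below are toy values, not data from any real labeling pipeline.

```python
import numpy as np

def preference_loss(reward_chosen: np.ndarray, reward_rejected: np.ndarray) -> float:
    """Bradley-Terry style loss commonly used in reward-model training.

    Each pair comes from a human reviewer who marked one of two model
    responses as better. The loss falls as the reward model learns to
    score the chosen response above the rejected one.
    """
    # -log(sigmoid(r_chosen - r_rejected)), averaged over the batch
    margin = reward_chosen - reward_rejected
    return float(np.mean(np.log1p(np.exp(-margin))))

# Toy batch: scores a hypothetical reward model assigned to 3 labeled pairs.
chosen = np.array([2.1, 0.4, 1.7])     # responses the reviewers preferred
rejected = np.array([1.0, 0.9, -0.3])  # responses the reviewers passed over
print(preference_loss(chosen, rejected))
```

Every one of those labeled pairs represents a person reading model output, which is the human labor, and the human exposure to toxic text, that Nelson’s op-ed highlights.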
The other consequence he notes of the massive amount of human labor involved in making chatbots work is that there are certain things they simply can’t do without us.
“They can’t evaluate whether news reports are accurate or not. They can’t assess arguments or weigh trade-offs. They can’t even read an encyclopedia page and only make statements consistent with it, or accurately summarize the plot of a movie,” Nelson writes. “If the common wisdom on some topic changes – for example, whether salt is bad for your heart or whether early breast cancer screenings are useful – they will need to be extensively retrained to incorporate the new consensus.” — Derek Robertson
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); and Steve Heuser ([email protected]). Follow us @DigitalFuture on Twitter.
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.