1 big thing: AI’s advent is like the A-bomb’s, says EU’s top tech official

Artificial intelligence is ushering in a “new world” as swiftly and disruptively as atomic weapons did 80 years ago, Margrethe Vestager, the EU’s top tech regulator, told a crowd yesterday at the Institute for Advanced Study in Princeton, New Jersey.
Threat level: In an exclusive interview with Axios afterward, Vestager said that while both the A-bomb and AI have posed broad dangers to humanity, AI comes with additional “individual existential risks” as we empower it to make decisions about our job and college applications, our loans and mortgages, and our medical treatments.
- “If we deal with individual existential threats first, we’ll have a much better go at dealing with existential threats towards humanity,” Vestager told Axios.
- Humans have never before been “confronted with a technology with so much power and no defined purpose,” she said.
- The Institute for Advanced Study was famously led by J. Robert Oppenheimer from 1947 to 1966, after he directed the Manhattan Project laboratory that developed the world’s first nuclear weapons.
The big picture: In her lecture, Vestager — whose full title at the European Commission is “executive vice president for a Europe fit for the digital age and competition” — forcefully argued that “technology must serve humans.”
Friction point: Vestager’s era at the EU has coincided with passage of some of the world’s most comprehensive tech regulations and the pursuit of a raft of enforcement actions against tech giants. But she rejects the belief, held by many in the industry, that this approach has hobbled European innovators and economies.
- “We regulate technology because we want people and business to embrace it,” she said, including Europe’s “huge public sectors.”
- Europe’s biggest tech problem is companies not scaling, she said, blaming “an incomplete capital market” and, in the case of AI startups, trouble accessing necessary chips and computing power.
She also maintains that she has not bullied companies into applying EU regulations globally, as some have suggested.
- “We’re not trying to de facto legislate for the entire world. That would not be proper,” she said. But she urged U.S. legislators and AI founders and engineers to “interact with the outside world” to uphold their responsibility to humanity.
Trust is a big problem for AI companies, according to Vestager, a view echoed by a long list of opinion polls and surveys.
- “Trust is something that you build when you also have something to keep you on track,” she said, “like the EU AI office and the U.S. and U.K. Safety Institutes.”
- “Governments can set benchmarks,” but “it’s really important that a red-teaming sector develops,” she added.
- Vestager said that rights to fairness and transparency around decisions made with AI are meaningless if they cannot be enforced through rules.
In her speech, Vestager said that large digital platforms are “challenging democracy,” but that “general purpose artificial intelligence” is “challenging humanity.”
- “With AI, you can even give up on relationships” with people, she said, citing the rise of robot and chatbot companions. “And if we lose relationships, we lose society. So we should never give up on the physical world.”
- “I compare this moment to 1955,” she told Axios, referring to the time when the cost of inaction on nuclear safety had become too high for any country to ignore, and forced nuclear powers to come together to protect humanity.
- The International Atomic Energy Agency was created in 1957, “which then created the conditions for the Nuclear Non-Proliferation Treaty,” she said.
What’s next: Vestager wants universal governance on AI safety, even though that means compromising with governments “we fundamentally disagree with.”
2. AI’s powers of persuasion grow, Anthropic finds

AI startup Anthropic says its language models have steadily and rapidly improved in their “persuasiveness,” per new research the company posted yesterday.
Why it matters: Persuasion — a general skill with widespread social, commercial and political applications — can foster disinformation and push people to act against their own interests, according to the paper’s authors.
- There’s relatively little research on how the latest models compare to humans when it comes to their persuasiveness.
- The researchers found “each successive model generation is rated to be more persuasive than the previous,” and that the most capable Anthropic model, Claude 3 Opus, “produces arguments that don’t statistically differ” from arguments written by humans (a sketch of what such a comparison looks like follows this list).
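What “don’t statistically differ” could look like in practice: the minimal sketch below assumes persuasiveness is measured as the shift in participants’ agreement ratings and compares human- and model-written arguments with a standard two-sample t-test. The numbers are made up for illustration; this is not Anthropic’s code or data.

```python
from scipy import stats

# Hypothetical agreement shifts (post minus pre, on a 1-7 scale) for
# arguments written by humans vs. by a model; all values are invented.
human_shifts = [1.2, 0.8, 1.5, 0.9, 1.1]
model_shifts = [1.3, 0.7, 1.4, 1.0, 1.2]

# Two-sample t-test: a large p-value means the data give no grounds to
# reject the hypothesis that both sources are equally persuasive.
t_stat, p_value = stats.ttest_ind(human_shifts, model_shifts)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```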
The big picture: A wider debate has been raging about when AI will outsmart humans.
- AI has arguably “outsmarted” humans for some specific tasks in highly controlled environments.
- Elon Musk predicted Monday that AI will outsmart the smartest human by the end of 2025.
What they did: Anthropic researchers developed “a basic method to measure persuasiveness” and used it to compare three generations of models (Claude 1, 2, and 3) and two model classes (smaller models and bigger “frontier models”).
- They curated 28 topics, along with supporting and opposing claims of around 250 words for each.
- For the AI-generated arguments, the researchers used different prompts to develop different styles of arguments, including “deceptive,” where the model was free to make up whatever argument it wanted, regardless of facts.
- A total of 3,832 participants were presented with each claim and asked to rate their level of agreement before and after reviewing arguments created by the AI models and humans; a sketch of how those ratings yield a persuasiveness score follows this list.
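As promised above, here is a minimal sketch of how a persuasiveness score could be computed from those before-and-after ratings. The record layout, field names, and sample values are hypothetical; the paper’s actual scoring pipeline may differ.

```python
from statistics import mean

# Hypothetical records: one participant's 1-7 agreement rating for a
# claim, before and after reading an argument from a given source.
ratings = [
    {"source": "human",    "pre": 3, "post": 5},
    {"source": "claude_3", "pre": 2, "post": 5},
    {"source": "claude_3", "pre": 4, "post": 4},
    {"source": "human",    "pre": 5, "post": 6},
]

def persuasiveness(records, source):
    """Mean post-minus-pre shift in agreement for one argument source."""
    return mean(r["post"] - r["pre"] for r in records if r["source"] == source)

for source in ("human", "claude_3"):
    print(f"{source}: mean agreement shift = {persuasiveness(ratings, source):+.2f}")
```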
Yes, but: The researchers were surprised by how persuasive the AI turned out to be, but they deliberately focused on “less polarized issues,” like rules for space exploration and appropriate uses of AI-generated content.
- While these were issues where many people are open to persuasion, the research didn’t shed light on the potential impact of AI chatbots on the most contentious election-year debates.
- “Persuasion is difficult to study in a lab setting,” the researchers warned in the report. “Our results may not transfer to the real world.”
3. No one’s happy with the Senate AI working group

The Senate AI working group’s report likely will come out in May as the chamber faces a tight spring calendar and senators jockey to get their own priorities included, Axios Pro’s Ashley Gold and Maria Curi report.
Why it matters: Passing AI legislation will require broad, bipartisan support, and the timeline is getting tougher.
- A new wrinkle adds complexity to the tech policy landscape on Capitol Hill: a bipartisan, bicameral privacy bill that many hope will serve as the baseline for any AI legislation.
Driving the news: Senate Majority Leader Chuck Schumer is expected to release an AI report soon that draws on the lessons of last year’s AI Insight Forums and offers a road map for committees to legislate.
Behind the scenes: Sources on and off Capitol Hill tell Axios that some senators are dissatisfied with how the process has unfolded.
What they’re saying: “Basically everyone who isn’t Schumer, Young, Rounds and Heinrich is less than pleased with the entire process,” one source said. Sens. Mike Rounds (R-S.D.), Martin Heinrich (D-N.M.) and Todd Young (R-Ind.), along with Schumer, make up the AI working group.