Newcomer

Anthropic’s Throwdown on AI Safety Almost Got Lost in a Crazy News Cycle. It Demands Our Attention.

Plus, a creative tools startup raises $30 million & San Francisco keeps its AI lead

Jonathan Weber and Madeline Renbarger
Jan 30, 2026
The Week in Short

Dario Amodei goes deep on the AI future & its risks. OpenArt raises $30 million. The Bay Area extends its startup dominance. Robotaxi firm Waabi closes a 9-figure round, with help from Uber. Elon Musk pulls back on Tesla cars, looks to SpaceX-xAI merger ahead of the biggest IPO in history. Tech leaders speak out against ICE, but employees are louder. The new TikTok gets off to a rough start. OpenAI eyes new heights with $100 billion fundraise.


The Main Item

Dario Amodei Has Some Simple & Obvious Truths About AI Risks

By any measure of entrepreneurial achievement, Anthropic CEO Dario Amodei is on top of the world. With Claude Code, he has the hottest product in what’s destined to be the biggest and most powerful industry in history, and his 5-year-old company is closing in on a $350 billion valuation. Defying the odds, Anthropic has more than held its own against OpenAI and the daunting phalanx of Big Tech pretenders to the AI throne.

Given that, we can trust that Amodei has some insight into generative AI. And what’s most striking about the lessons he has learned, laid out in a seminal 19,000-word essay this week, is his modesty in the face of the awesome power of the technology.

He doesn’t pretend to have a clear view of what the future will bring. Instead, he speaks in the scientific language of probabilities as he outlines the many, many potential risks to humanity posed by the steady march of generative AI. His tone and approach stand in dramatic contrast to the simplistic certitudes we hear so often from those on both sides of the AI regulation debate.

Amodei believes it likely — not a sure thing, but likely — that within a few years we’ll have AIs capable of doing almost everything better than we can. That raises a number of risks for humanity, which he divides into five categories: AIs going rogue; bad people using AI for ill; bad governments or factions using AI for ill; societal disruptions like mass unemployment; and weird stuff we can’t quite anticipate.

On the first point, the classic “I’m sorry, Dave” scenario of an AI developing evil ideas and acting on them, he carefully dismantles the arguments that such a thing is impossible. At the core of Anthropic’s safety work is Claude’s “constitution,” designed to imbue the models with good values so they don’t undertake malevolent actions. Amodei thinks that will work, but he’s not sure.

“A lot of very weird and unpredictable things can go wrong,” he says. For one thing, an AI that knows it’s being tested also knows to be on its best behavior.

AI in the hands of individual or state-sanctioned evil-doers is perhaps a more straightforward risk — and some aspects of that problem, at least, are more straightforward to mitigate. Amodei hopes that Claude’s constitution would prevent it from assisting with a bioweapon. But in case it doesn’t, Anthropic has implemented a “classifier” that specifically blocks bioweapons-related output.

It’s not cheap: the classifier raises inference costs by about five percent, Amodei says, and not every company will do that sort of work if it’s not legally required. Many other risks may be similarly costly, and difficult, to address.

Amodei’s essay is more than a thought exercise, of course. He advocates regulations focused on model transparency, similar to what was recently passed in California, with frontier labs required to disclose their policies for model testing and evaluation and for assuring model safety, as he says Anthropic already does voluntarily.

He takes a not-very-veiled swipe at Elon Musk in arguing why strictly voluntary guidelines for safety practices won’t cut it.

“While it is incredibly valuable for individual AI companies to engage in good practices or become good at steering AI models, and to share their findings publicly, the reality is that not all AI companies do this, and the worst ones can still be a danger to everyone even if the best ones have excellent practices,” Amodei writes. “For example, some AI companies have shown a disturbing negligence towards the sexualization of children in today’s models, which makes me doubt that they’ll show either the inclination or the ability to address autonomy risks in future models.”

Amodei has become a target of White House AI advisor David Sacks, who’s accused him of seeking to advantage Anthropic through “regulatory capture,” though Amodei has been consistent in his thinking about AI safety risk since before Anthropic was founded. Sacks and others in the Trump Administration often seem to believe that any dangers AI might pose are exaggerated, that the technology will be an unmitigated good for America so long as we stay “ahead,” and that advocates of regulation or even a broader policy debate are either insincere, misinformed, or both.

Maybe they’re right. But what if they’re wrong?

Amodei makes a compelling case that it’s foolhardy to wave away the idea that bad things could happen, given how much we still don’t know about how AI works and how terrible those bad things could be. You don’t have to be a doomer to see the logic in that.

Whether or not you agree with his recommendations, Amodei deserves credit for trying to push the conversation forward and spark a much-needed national debate on the opportunities and challenges of AI. Judging by the company’s current trajectory, customers and investors may even be rewarding him for his well-reasoned approach. Let’s hope so.


Funding Exclusive

AI Creative Tools Startup OpenArt Raises $30 Million

From movies to the Billboard charts, AI-generated art is creeping across the cultural landscape. Former Google PM Coco Mao and senior engineer John Qiao, co-founders of OpenArt, are betting that creators will want access to the top AI visual models all in one place.

The company shared exclusively with Newcomer that it has raised a $30 million Series A round led by Canaan Partners, with Basis Set and DCM also participating. Basis Set’s Lan Xuezhao previously led OpenArt’s seed round, after Mao picked her up at the San Francisco airport and pitched her during the ride to her next meeting.

Founded in 2022, OpenArt started out as a Pinterest-like network for AI-generated images before pivoting to make access to generative AI tools its core offering. Through a desktop website, creators can produce images and videos with cutting-edge photo, video, and audio models from various providers (think Veo 3, Sora 2, and Stable Diffusion) under a single subscription.

OpenArt started out with a consumer focus, but its paying subscribers are often marketers, influencers, or filmmakers who fit into the “prosumer” bucket. ARR soared from $10 million to $70 million over the course of 2025 on strong subscription growth.

“I believe the future of content is AI made — but most content will be made with humans using AI tools,” said Mao.

OpenArt competes directly with other fast-growing AI audiovisual tool startups like Krea, and the PE-owned European giant Freepik. Krea has raised more funding, with around $83 million to date, but also has a larger team — 40 employees compared to OpenArt’s 20.


Newcomer Podcast

Tony Fadell Unfiltered on Apple, OpenAI & the Next Big Device

This week on the Newcomer Podcast, we were joined by Tony Fadell, creator of Apple’s iPod and an all-around Silicon Valley hardware legend.

We talk about where the next major tech device might come from, whether it’s a pin, a pen, headphones, or the device already in your pocket, and how Apple and other major tech companies are approaching the future of hardware.

We also discuss the rumors surrounding Fadell as a potential contender for the next CEO of Apple, what he would do if he were in that role, and how leadership decisions at that level actually get made. Fadell shares his view on why OpenAI is pursuing a strategy of becoming too big to fail, and what that signals about the next phase of the industry.

Listen Now


One Big Chart

Top Ten Metros for Startup Funding Include Some Surprises
