What OpenAI’s OpenClaw hire says about the future of AI agents
Hello and welcome to Eye on AI, with Sharon Goldman filling in for Jeremy Kahn. In this edition: What OpenAI’s OpenClaw hire really means…The Pentagon threatens to punish Anthropic…Why an AI video of Tom Cruise battling Brad Pitt spooked Hollywood…The anxiety driving AI’s brutal work culture.
It wouldn’t be a weekend without a big AI news drop. This time, OpenAI dominated the cycle after CEO Sam Altman revealed that the company had hired Peter Steinberger, the Austrian developer behind OpenClaw—open-source software for building autonomous AI agents that had gone wildly viral over the past three months. In a post on his personal site, Steinberger said joining OpenAI would allow him to pursue his goal of bringing AI agents to the masses, without the added burden of running a company.
OpenClaw was presented as a way to build the ultimate personal assistant, automating complex, multi-step tasks by connecting LLMs like ChatGPT and Claude to messaging platforms and everyday applications to manage email and calendars, book flights, make restaurant reservations, and the like. But Steinberger demonstrated that it could go further: In one example, when he accidentally sent OpenClaw a voice message it wasn’t designed to handle, the system didn’t fail. Instead, it inferred the file format, identified the tools it needed, and responded normally, without being explicitly instructed to do any of that.
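To make that fallback behavior concrete, here is a minimal, hypothetical Python sketch of the general pattern—not OpenClaw’s actual code. It shows an agent-style router that guesses the type of an unexpected file and hands it to whichever tool can process it, rather than erroring out. Every name here (the TOOLS registry and the handler stubs) is invented for illustration.

```python
# Hypothetical sketch of the agent pattern described above; the tool
# registry, file sniffing, and handlers are all invented for illustration.
import mimetypes

# A toy "tool registry": the agent picks a handler by inspecting the input,
# rather than requiring the sender to use a supported format up front.
TOOLS = {
    "audio": lambda path: f"[transcribe {path}, then answer from the transcript]",
    "image": lambda path: f"[describe {path}, then answer about it]",
    "text":  lambda path: f"[read {path} and answer directly]",
}

def infer_kind(path: str) -> str:
    # Guess the broad media type from the filename; a real agent might also
    # sniff the file's bytes or ask the model itself to classify the input.
    mime, _ = mimetypes.guess_type(path)
    if mime and mime.startswith("audio"):
        return "audio"
    if mime and mime.startswith("image"):
        return "image"
    return "text"

def handle_message(path: str) -> str:
    # Route unexpected input to whichever tool can handle it, instead of
    # failing because the sender used an unsupported format.
    kind = infer_kind(path)
    tool = TOOLS.get(kind, TOOLS["text"])  # graceful fallback
    return tool(path)

print(handle_message("voice-note.mp3"))  # routed to the audio/transcription stub
```

In a real system, the audio handler would call a transcription model before the assistant replies; the point of the pattern is simply that unrecognized input triggers tool selection rather than an error.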
That kind of autonomous behavior is precisely what made OpenClaw exciting to developers, getting them closer to their dream of a real J.A.R.V.I.S., the always-on helper from the Iron Man movies. But it quickly triggered alarms among security experts. Just last week, I described OpenClaw as the “bad boy” of AI agents, because an assistant that is persistent, autonomous, and deeply connected across systems is also far harder to secure.
Some say the OpenAI hire is the ‘best outcome’
That tension helps explain why some see OpenAI’s intervention as a necessary step. “I think it’s probably the best outcome for everyone,” said Gavriel Cohen, a software engineer who built NanoClaw, which he calls a “secure alternative” to OpenClaw. “Peter has great product sense, but the project got way too big, way too fast, without enough attention to architecture and security. OpenClaw is fundamentally insecure and flawed. They can’t just patch their way out of it.”
Others see the move as equally strategic for OpenAI. “It’s a great move on their part,” said William Falcon, CEO of developer-focused AI cloud company Lightning AI, who noted that Anthropic’s Claude products, including Claude Code, have dominated the developer segment. OpenAI, he explained, wants “to win all developers, that’s where the majority of spending in AI is.” OpenClaw, in many ways an open-source alternative to Claude Code that became an overnight favorite among developers, gives OpenAI a “get out of jail free card,” he said.
Altman, for his part, has framed the hire as a bet on what comes next. He said Steinberger brings “a lot of amazing ideas” about how AI agents could interact with one another, adding that “the future is going to be extremely multi-agent” and that such capabilities will “quickly become core to our product offerings.” OpenAI has said it plans to keep OpenClaw running as an independent, open-source project through a foundation rather than folding it into its own products—a pledge Steinberger has said was central to his decision to choose OpenAI over rivals like Anthropic and Meta. (In an interview with Lex Fridman, Steinberger said Mark Zuckerberg even reached out to him personally on WhatsApp.)
Next phase is winning developer trust for AI agents
Beyond the weekend buzz, OpenAI’s OpenClaw hire offers a window into how the AI agent race is evolving. As models become more interchangeable, the competition is shifting toward the less visible infrastructure that determines whether agents can run reliably, securely, and at scale. By bringing in the creator of a viral—but controversial—autonomous agent while pledging to keep the project open source, OpenAI is signaling that the next phase of AI won’t be defined solely by smarter models, but by winning the trust of developers tasked with turning experimental agents into dependable systems.
That could lead to a wave of new products, said Yohei Nakajima, a partner at Untapped Capital whose 2023 open-source experiment BabyAGI demonstrated how LLMs could autonomously generate and execute tasks, helping kick off the modern AI agent movement. Both BabyAGI and OpenClaw, he said, inspired developers to see what more they could build with the latest technologies. “Shortly after BabyAGI, we saw the first wave of agentic companies launch: gpt-engineer (became Lovable), Crew AI, Manus, Genspark,” he said. “I hope we’ll see similar new inspired products after this recent wave.”
With that, here’s more AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman