What Elon Musk is really building inside his ChatGPT competitor xAI

Five years on, Musk is back in the AI game. In July, he announced the formation of xAI—which, despite its use of Musk’s favorite letter, is a separate company from X Corp. “The goal of xAI is to understand the true nature of the universe,” the announcement began. The numbers in the date of the announcement (7/12/23), Musk tweeted, totaled 42: the figure which, in the comic science fiction classic The Hitchhiker’s Guide to the Galaxy, represents the frustrating answer to “the ultimate question of life, the universe, and everything.”

These lofty goals are deeply intertwined with Musk’s idiosyncratic vision of AI safety, in which an inquisitive superintelligence will hopefully decide people are so interesting that it must keep them around. Competitors like OpenAI and Google DeepMind are trying to achieve AI safety by promoting “alignment” with human goals and principles, but Musk believes that trying to instill certain values in an AI increases the odds of the AI adopting the opposite values, with the risk of disastrous results. “I think the safest way to build an AI is actually to make one that is maximally curious and truth-seeking,” he said two days after xAI’s announcement, at a Twitter Spaces event alongside the 11 all-star (and all-male) AI engineers he hired as the company’s starting team—each reportedly received 1% equity in the Musk-funded venture. Rather than let other companies define the future, Musk vowed to “create a competitive alternative that is hopefully better than Google DeepMind or OpenAI-Microsoft.”

The world got its first glimpse of this alternative in early November, with the unveiling of an AI chatbot called Grok. Early demonstrations showed a chatbot defined less by a connection to cosmic truths than by a willingness to engage in the snark and vulgarity that rival products try to avoid.
Musk also revealed that Grok would be a subscription driver for X (formerly Twitter), serving as a feature of the social network’s Premium+ tier while using X’s tweets as an information source. Missing was any indication of how the wisecracking AI bot fits into the broader Musk portfolio of Tesla autonomous driving technology, humanoid Optimus robots, and Neuralink human-machine brain interfaces, raising questions about the seriousness, and significance, of xAI.

Rivalry (or perhaps revenge) is a clear motivation for Musk. Poaching half a dozen Google DeepMind luminaries is just the latest chapter in a feud that goes back to his discussions with Larry Page in 2015, shortly after Google acquired DeepMind. Musk has claimed the Google cofounder accused him of being “speciesist” for thinking of potential silicon-based life forms as inferior to humans. Alarmed by Page’s comments and what Musk considered Google’s lax approach to AI safety, Musk cofounded OpenAI as a safety-driven counterweight to Google—only to see OpenAI become “frankly voracious for profit” after partnering with Microsoft. “My sense,” said Steve Omohundro, a veteran computer scientist who just coauthored a paper on using mathematical proofs to mitigate AI’s negative effects, “is the reason he [founded xAI] is he’s pissed at OpenAI.”

Ultimate truth-teller, protector of humankind, tool of revenge: These are the roles that Musk likely wants his new AI to play. But the team he’s assembled may have very different—though no less dramatic—aims.

At that inaugural Spaces event in July, all of Musk’s new hires spoke, but none echoed or even addressed their new boss’s theory about a maximally curious superintelligence preserving humanity. What they did talk about was math. Two common strands run through xAI’s initial staffing lineup: Most of the team comes from Google DeepMind, and most have a hard-core mathematics and/or physics background. Therein may lie the truth—a concept not always associated with today’s “hallucinating” generative AI models. “Mathematics is the language underlying all of our reality,” said Greg Yang, a team member who came over from Microsoft, during the session. “I’m also very excited about creating an AI that is as good as myself or even better at creating new mathematics and new science that helps all achieve and see further into our fundamental reality.”

Being given an opportunity to unlock new mathematical and physical truths—in a lean, bureaucracy-free company bankrolled by the world’s richest man—was a clear draw for xAI’s team. But the means of achieving that goal is an attractive end in itself, as team member Christian Szegedy (an ex-Googler) explained at the event: “Mathematics is basically the language of pure logic, and I think that mathematics and logical reasoning at the high level will demonstrate that the AI is really understanding things, not just emulating humans.” Solve problems and you get both the answers and confirmation that your AI can think for itself, unlike models such as OpenAI’s GPT-4 that essentially regurgitate their training material. Or so goes the theory—there are various opinions regarding the threshold an AI must cross to become a general-purpose, thinking AGI, or artificial general intelligence.
“I would say the holy grail of AI systems is reasoning, and probably the place where reasoning is most evident is in mathematical inquiry,” said University of Sydney mathematics professor Geordie Williamson, who recently collaborated with Google DeepMind on refining old mathematical conjectures. AI’s ability to make physics breakthroughs is “already happening to some extent,” Williamson added, with neural networks helping to figure out things like the precise boundaries between water’s rarer states. “We have this new hammer called a neural net, and we mathematicians and physicists are going around banging it,” he said. “It’s a new tool that we’re using, but we’re still in the driver’s seat.” But while Williamson says he has seen “glimpses” of reasoning in DeepMind’s biology-focused systems, he says we are “miles away” from definitively reaching that milestone. “Generative AI is amazing, but I haven’t seen any evidence that we’re [seeing] reasoning,” he said.

Musk himself has given mixed signals regarding the AGI inflection point. At the xAI launch, he said he agreed with futurist Ray Kurzweil that AGI would likely emerge around 2029, “give or take a year.” But in an October Tesla earnings call, he described his cars’ AI system as “basically baby AGI—it has to understand reality in order to drive.” While the pursuit of validating AGI via math is a concept that has wider momentum in the industry, Musk’s theory about the inherent safety of a truth-seeking AI draws skeptical responses from many experts.