“We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public. However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.”
To this day, it remains unclear whether Altman’s talk about benefiting humanity was ever anything more than a savvy sales pitch designed to attract top AI talent and allay the concerns of federal regulators. It is among the questions trial watchers will be keenest to see answered.
“It’s quite typical for scientific research organizations to do all the hard work of the research before their IP is sold to a for-profit company for practical purposes,” said Rose Chan Loui, founding executive director of the Lowell Milken Center for Philanthropy and Nonprofits at UCLA Law.
What makes OpenAI unusual, Chan Loui said, is how explicitly and repeatedly the AI developer bound itself to promising its AI would be developed safely and for the benefit of all of humanity. “When they opened up to investment and formed the subsidiary, they recommitted to that purpose. They tied themselves even more tightly.”
Anthropic, founded by former OpenAI employees who left over concerns about the company’s direction, has cultivated a reputation as the more safety-conscious, ethically serious player in the AI race, the light gray hat to OpenAI’s dark gray one. Anthropic chose from the beginning to incorporate as a public benefit corporation rather than a nonprofit, a structure that affords far more legal flexibility. “Anthropic may be behaving in a way that the public thinks is more charitable, but its legal duties to do so are a lot lower than OpenAI’s,” Horwitz said.
But is Musk the right party to bring this suit?
For legal eagles following this case, it’s curious that the plaintiff is Musk rather than California’s attorney general, the primary legal guardian of charitable assets in the state where most of OpenAI’s assets are located. But in 2025, Attorney General Rob Bonta negotiated a binding memorandum of understanding with OpenAI. The attorney general in Delaware, where OpenAI is incorporated, issued a parallel statement of non-objection.
A coalition of more than 30 California foundations and nonprofit organizations, including the San Francisco Foundation and TechEquity, urged Bonta to take immediate legal action to protect OpenAI’s charitable assets, arguing his office had both the authority and the responsibility to do so.

More than 50 organizations also petitioned Bonta to halt OpenAI’s for-profit conversion until he calculated the full market value of OpenAI’s nonprofit assets, estimated at the time at up to $300 billion, and directed OpenAI to transfer that value to independent nonprofit entities.
“It’s not too late for the Attorney General to revisit his agreement with OpenAI,” wrote Catherine Bracy, founder and CEO of TechEquity, an Oakland-based tech accountability organization. “The evidence this trial unearths, especially how OpenAI violated its original charitable mission in pursuit of profit, will likely leave him no choice.”
Chan Loui is among those scratching their heads over a basic question: why does Musk get to bring this case at all? “He’s a competitor,” she said.
A personal fraud claim, alleging that Altman lied to him to get his money, might have given Musk the clearest standing as an injured party. But Musk voluntarily dismissed those claims late last week. What remains rests almost entirely on a public interest argument, one that California’s attorney general, not a billionaire with a rival AI company of his own, would typically make.