After a year of basking in global fame, the San Francisco company OpenAI is now confronting a multitude of challenges that could threaten its position at the vanguard of artificial intelligence research.
Some of its conflicts stem from decisions made well before the debut of ChatGPT, particularly its unusual shift from an idealistic nonprofit to a big business backed by billions of dollars in investments.
It's too early to tell whether OpenAI and its attorneys will beat back a barrage of lawsuits from Elon Musk, The New York Times and bestselling novelists such as John Grisham, or whether escalating scrutiny from government regulators will amount to anything.
Feud with Elon Musk
OpenAI isn't waiting for the court process to unfold before publicly defending itself against legal claims made by billionaire Elon Musk, an early funder of OpenAI who now alleges it has betrayed its founding nonprofit mission to benefit humanity as it pursued profits instead.
In its first response since the Tesla CEO sued last week, OpenAI vowed to get the claim thrown out and released emails from Musk that purport to show he supported making OpenAI a for-profit company and even suggested merging it with the electric vehicle maker.
Legal experts have expressed doubt about whether Musk's arguments, centered on an alleged breach of contract, will hold up in court. But the suit has already forced into the open the company's internal conflicts over its unusual governance structure, how "open" it should be about its research and how to pursue what's known as artificial general intelligence, or AI systems that can perform as well as or better than humans across a wide variety of tasks.
Its own internal investigation
There's still a lot of mystery about what led OpenAI to abruptly fire its co-founder and CEO Sam Altman in November, only to have him return days later with a new board that replaced the one that ousted him. OpenAI tapped the law firm WilmerHale to investigate what happened, but it's unclear how broad the investigation's scope will be and to what extent OpenAI will publicly release the findings.
Among the big questions is what OpenAI — under its previous board of directors — meant in November when it said Altman was "not consistently candid in his communications" in a way that hindered the board's ability to exercise its responsibilities. While now primarily a for-profit business, OpenAI is still governed by a nonprofit board of directors whose duty is to advance its mission.
The investigators are probably looking more closely at that structure as well as the internal conflicts that led to communication breakdowns, said Diane Rulke, a professor of organizational behavior and theory at Carnegie Mellon University.
Rulke said it would be "useful and very good practice" for OpenAI to publicly release at least part of the findings, especially given the underlying concerns about how future AI technology will affect society.
"Not only because it was a major event, but because OpenAI works with a lot of businesses, a lot of companies and their impact is widespread," Rulke said. "Even though they're a privately held company, it's very much in the public interest to know what happened at OpenAI."
Government scrutiny
OpenAI's close business ties to Microsoft have invited scrutiny from antitrust regulators in the U.S. and Europe. Microsoft has invested billions of dollars in OpenAI and supplied the vast computing power needed to build the smaller company's AI models. The software giant has also secured exclusive rights to infuse much of the technology into Microsoft products.
Unlike a big business merger, such partnerships don't automatically trigger a government review. But the Federal Trade Commission wants to know if such arrangements "enable dominant firms to exert undue influence or gain privileged access in ways that could undermine fair competition," FTC Chair Lina Khan said in January.
The FTC is awaiting responses to "compulsory orders" it sent to both companies, as well as to OpenAI rival Anthropic and its cloud computing backers, Amazon and Google, requiring them to provide information about the partnerships and the decision-making around them. The companies' responses are due as soon as next week. Similar scrutiny is underway in the European Union and the United Kingdom.
Copyright lawsuits
Bestselling novelists, nonfiction authors, The New York Times and other media outlets have sued OpenAI over allegations that the company violated copyright laws in building the AI large language models that power ChatGPT. Several of the lawsuits also target Microsoft. (The Associated Press took a different approach, securing a deal last year that gives OpenAI access to the AP's text archive for an undisclosed fee.)
OpenAI has argued that its practice of training AI models on huge troves of writings found on the internet is protected by the "fair use" doctrine of copyright law. Federal judges in New York and San Francisco must now sort through evidence of harm brought by numerous plaintiffs, including Grisham, comedian Sarah Silverman and "Game of Thrones" author George R. R. Martin.
The stakes are high. The Times, for instance, is asking a judge to order the "destruction" of all of OpenAI's GPT large language models — the foundation of ChatGPT and most of OpenAI's business — if they were trained on its news articles.
Matt O'Brien is an AP technology writer. AP business writer Kelvin Chan contributed to this report.