The evolution of OpenAI’s mission statement.
OpenAI, the maker of the most popular AI chatbot, used to say it aimed to build artificial intelligence that “safely benefits humanity, unconstrained by a need to generate financial return,” according to its 2023 mission statement. But the ChatGPT maker no longer seems to place the same emphasis on doing so “safely.”
While reviewing its latest IRS disclosure form, which was released in November 2025 and covers 2024, I noticed OpenAI had removed “safely” from its mission statement, among other changes. That change in wording coincided with its transformation from a nonprofit organization into a business increasingly focused on profits.
OpenAI currently faces several lawsuits related to its products’ safety, making this change newsworthy. Many of the plaintiffs suing the AI company allege psychological manipulation, wrongful death and assisted suicide, while others have filed negligence claims.
As a scholar of nonprofit accountability and the governance of social enterprises, I see the deletion of the word “safely” from its mission statement as a significant shift, one that has largely gone unreported outside highly specialized outlets.
And I believe OpenAI’s makeover is a test case for how we, as a society, oversee the work of organizations that have the potential to both provide enormous benefits and do catastrophic harm.
ai serves society? you are boozing brake fluid
Don’t be evil.
a test for whether AI serves society or shareholders
Gee I wonder which one’s gonna win.
Who has more money? OpenAI needs buttloads of it right about now from all the promises they have made.
We’re so damn lucky that LLMs are a dead end (diminishing returns on scaling even after years of hunting) and that they just pivoted to the biggest Ponzi scheme ever. Bad as that is (and the economic depression it will cause), it pales into insignificance compared with the damage these fucks would do with AGI (or, goddess forbid, ASI with the alignment they would try to give it).
EVERYTHING IS FINE

benefit humanity as a whole
The Borg from Star Trek fills that requirement. My headcanon is that the people from its home planet made an AGI with the goal of “benefiting humanity as a whole”, and it maximised that goal by building the Borg: it made humanity “a whole” by connecting everyone to a hive mind, and it forcibly assimilated every other species to benefit that whole.
Gotta be honest, I thought that was the reason.
Oh what a cool take!
I always liked to think of the Borg as being almost more like an emergent property of a certain level and type of organic/inorganic interfacing.
So it’s not that one species was the Borg; all are, in potentia. And every time a species commits the same error, or reaches the correct level of “perfection”, it finds itself in a universe where the Borg already exist.
Like a small hive self-creates, opens its mental ears and is already subsumed into the greater Borg whose mind it finds.
I like that it adds almost a whole new level of arrogance to their statement, “Resistance is futile.” They believe it not only because they are about to physically assimilate you, but because every advance you make brings you potentially closer to being Borg through your own missteps.
OpenAI is the same as any other publicly traded corporation: it serves society, but that service primarily focuses on the shareholders. We’re looking at a vehicle designed to take money and give it to shareholders (private, in this case, or otherwise).
The focus on data-centre growth at public expense, the AI slop, the circular nature of some of the investments going into AI, and the productivity gains (or lack thereof) are all part of it. We are not looking at any exceptionalism: AI isn’t unique in its capability for catastrophic harm. What we eat and drink can easily be on that list.
These American AI companies just want the money train to continue unabated, and any regulation to go away.
Dodge v. Ford Motor Company, 1919.
This case established, and entrenched in US law, the principle that the primary purpose of a corporation is to operate in the interests of its shareholders.
Therefore OpenAI, based in California, would be under threat of a lawsuit if it didn’t do that.
This goose is already cooked.
“Safely” was already an empty promise to begin with, given how LLMs work.
So someone just thought, “our investors don’t value safety, let’s drop that from the blurb.” They are probably correct.
When they had their schism over Altman a couple of years ago, safety died.
… its new structure is a test for whether AI serves society or shareholders
Gee, I can’t wait to see the results of this test!
The government only cares about your safety when it needs to push laws through the system so its rich buddies can save a dime. “But it’s for your own good,” they say. What about the children?
lol, lmao even
Either they think they’re evil 1337 h4xx0r overlords who are gonna enslave the planet, or they genuinely think their statistical apparati do anything worthwhile beyond running statistics on what is by now 70% the output of other statistical machines.
Just wait until the AI bros find out about Zip compression being more efficient at classifying than “AIs”.
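For anyone who missed that reference: there is a real finding (Jiang et al., ACL 2023) that gzip compression distances plus a k-nearest-neighbour vote can match or beat neural classifiers on some low-resource text benchmarks. Here is a minimal sketch of that idea in Python; the toy corpus and labels below are made up purely for illustration:

```python
import gzip

def clen(s: str) -> int:
    """Length of the gzip-compressed string, in bytes."""
    return len(gzip.compress(s.encode("utf-8")))

def ncd(a: str, b: str) -> float:
    """Normalized Compression Distance: smaller means 'more similar'."""
    ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

def classify(query: str, labeled: list[tuple[str, str]], k: int = 3) -> str:
    """Label `query` by majority vote of its k nearest neighbours under NCD."""
    nearest = sorted(labeled, key=lambda pair: ncd(query, pair[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical toy corpus, just to show the mechanics.
train = [
    ("the striker scored a late goal in extra time", "sports"),
    ("the home team won the match on penalties", "sports"),
    ("shares fell sharply after the earnings report", "finance"),
    ("the central bank raised interest rates again", "finance"),
]
print(classify("the goalkeeper saved a penalty kick", train))
```

No training, no model weights: the compressor stands in for the similarity function, which is the whole joke.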
The marketing department removed some meaningless word from the marketing bla-bla-bla brochure nobody was even supposed to read.
WE ALL ARE GOING TO DIE!!!