Creators' AI

Can OpenAI Still Be Trusted? NDAs, Leaks, Lies & More

The OpenAI Files Explained

Creators' AI and Alamir Aqel
Jun 25, 2025
So… The OpenAI Files report was published on June 18, 2025, by The Midas Project and The Tech Oversight Project. And no, it’s not just some internet drama. This thing is packed with contracts, testimonies, leaked docs, and public receipts.

This report doesn’t accuse OpenAI of breaking laws. It accuses them of breaking trust.

While we touched on this briefly here, today we’ll discuss what’s inside and why it matters to you.

Let’s get into it.

Sam Altman’s Credibility Problem

The report places OpenAI CEO Sam Altman at the center of its concerns. For example, it shows he approved NDAs that punished employees for speaking out (more on this later). He later told the public he had no idea those NDAs existed.

Except… the docs had his signature. Whoops.

Even within OpenAI, some leaders didn’t trust him. CTO Mira Murati and Chief Scientist Ilya Sutskever reportedly accused him of manipulative behavior and described his leadership as “psychologically abusive.”

The report even reaches back to his first company, Loopt. Leadership there allegedly tried to remove him, accusing him of chasing personal projects and lying about trivial matters.

People change over time, but the report suggests some of these concerns never fully went away.

That brings us to a more serious issue.

Altman testified under oath before the Senate that he had no equity in OpenAI. Later, he admitted he did hold a stake indirectly, through a Sequoia fund, a stake he conveniently sold afterward.

Whether that was technically perjury is up to the lawyers. But to the rest of us, it’s clear misdirection.

These inconsistencies matter because they call into question whether the person steering the most powerful AI company on Earth can be trusted to put collective benefit ahead of personal or institutional gain.

“Non-Profit” With Billion-Dollar Aspirations

Speaking of institutional gain, OpenAI started as a non-profit, remember? A noble vision: “AGI for everyone.” Fast-forward to 2025, and they’re transitioning into a Public Benefit Corporation (PBC) with no profit cap.

For those who don’t know, the company was structured as a capped-profit entity: investors could make a return of up to 100x, and anything beyond that would revert to the nonprofit. The cap that was supposed to keep things ethical and in check? Gone.

OpenAI’s argument? The world’s changed. The AGI race got crowded. To compete, they needed to act like a normal company. But their original charter already anticipated multiple AGI builders.

And if you didn’t know: A PBC is not legally required to prioritize public benefit. It just has to consider it. You can imagine how that goes when billions are at stake.

And when the line between mission and money starts to blur, it’s worth asking who’s really in the room making the calls.


Conflicts of Interest in the Boardroom

A guest post by
Alamir Aqel
Remote ops specialist & project manager obsessed with efficiency. I automate workflows, scale systems, and simplify the complex. Always experimenting. Always delivering.