
Ronan Farrow and Andrew Marantz investigate Sam Altman’s leadership of OpenAI, drawing on internal documents and more than 100 interviews. They center on a core tension: Altman has positioned himself as a steward of humanity’s most powerful technology, yet many colleagues and insiders question whether he can be trusted with that responsibility. Internal memos compiled by senior figures—including chief scientist Ilya Sutskever—allege a pattern of misleading statements and evasiveness, particularly around AI safety and governance.
The piece traces OpenAI’s evolution from a nonprofit founded to prioritize safety over profit into a commercially driven company pursuing massive scale and valuation. Along the way, Altman emerges as highly ambitious, politically savvy, and willing to push boundaries—sometimes at the expense of transparency or institutional safeguards.
It also situates these concerns within the broader stakes of artificial general intelligence: if such systems emerge, the individuals controlling them could wield unprecedented global power. The article ultimately raises an unresolved question—whether the rapid centralization of technological authority in a single leader and company is compatible with the level of trust and accountability that such power demands.
