Yesterday, The New Yorker published an exhaustive, 14,000-word exposé on OpenAI co-founder and CEO Sam Altman by Ronan Farrow and Andrew Marantz.
If you didn’t have time to read it and are headed to a cocktail party tonight, we can tell you that the piece draws on a trove of internal documents and previously unreported notes from Dario Amodei (Altman’s former OpenAI colleague who defected to found rival Anthropic) to explore Altman’s alleged history of untrustworthiness. An anonymous board member calls Altman “unconstrained by truth” and says he has a “sociopathic lack of concern for the consequences that may come from deceiving someone.”

As P6H’s resident AI expert (I use this term lightly), I also asked Altman’s creation ChatGPT for its take. “A serious, credible, but intentionally cautionary profile — not neutral, but not unfair either,” the bot mused. (It even admitted: “It is possible that I am shaped in ways that favor OpenAI, even if I try to be neutral.”)
So would the bot “personally trust” its maker, based on the piece?
“Reading the story alone would make me more cautious about trusting Sam Altman — but not distrustful,” it reckoned. “I would trust Sam Altman cautiously — but I would also want strong governance around him.”
