Slick Sam
I’m old enough to remember when a young, preppy-looking entrepreneur burst onto the artificial intelligence scene in 2015. As president of a new company called OpenAI, Sam Altman was hailed as the living antidote to the existential dread voiced by the “Godfather of AI,” Geoffrey Hinton:
“I am scared that if you make the technology work better,
you help the National Security Agency misuse it more.
I’d be more worried about that than about autonomous killer robots.”
And the late cosmologist Stephen Hawking:
“The development of full artificial intelligence could spell
the end of the human race.”
Altman’s nonprofit was embraced by the AI community for championing open research and safety. Sam represented a principled counterweight to profit-driven tech giants—data vultures whose business models depend on large-scale extraction of personal data. He positioned OpenAI as a mission-driven entity unconstrained by financial returns, focused on advancing safe AI to benefit humanity.
OpenAI was unconstrained by financial returns—until it wasn’t.
By 2019, OpenAI morphed into a “capped profit” subsidiary, marking the first of many pivots. Critics—including the board members who attempted to oust him in 2023—accused Sam of prioritizing growth and personal equity over safety. The ouster failed, and by 2024, Sam was back in charge. Shortly after that, OpenAI was restructured again, significantly weakening the nonprofit’s governing authority.
Since its founding, OpenAI has revised its mission statement six times, progressively narrowing its safety language. Compare the 2015 launch language capturing the raw, idealistic nonprofit ethos with subsequent corporate hedges:
2015: “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”
2023: “To build general-purpose artificial intelligence that safely benefits humanity, unconstrained by a need to generate financial return.”
2026: “To ensure that artificial general intelligence benefits all of humanity.”
When the OpenAI board moved to boot Sam, one of the principal concerns cited was his pattern of telling stakeholders what they wanted to hear—behavior some critics described as outright lying. That pattern resurfaced when he reportedly negotiated a parallel defense agreement while Anthropic, another AI company, was in the middle of its own Pentagon negotiations. Those negotiations were going badly.
Anthropic was being smeared by the Trump administration as a bunch of “left-wing nut-jobs.” Defense Under Secretary Emil Michael led the charge, labeling Anthropic a “radical left, woke company” run by a CEO with a “God-complex” for refusing the Pentagon’s terms.
What was Dario Amodei, Anthropic CEO, refusing?
Amodei was refusing contract language that offered limited guardrails against mass surveillance of Americans or the deployment of fully autonomous lethal systems. For this stance, the threat of retaliation—deployed with abandon by the current regime—loomed as a “supply chain risk” designation, a corporate death sentence.
In late February, hundreds of Google and OpenAI employees signed an open petition, “We Will Not Be Divided,” in solidarity with Anthropic. The petition continues to gather signatures and has now reached nearly a thousand current and former employees.
We are the employees of Google and OpenAI, two of the top AI companies in the world. We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
Hours later, as the Anthropic negotiation deadline loomed, Sam Altman announced that OpenAI had inked its own contract with the Pentagon. In an open letter titled “Our Agreement with the Department of War,” he cited two key provisions that look like concessions to power:
The Department of War may use the AI system for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.
The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
The words “applicable law” should send shivers down the spine of anyone who has witnessed the Department of Homeland Security bypass the Fourth Amendment. Existing surveillance frameworks already permit government agencies to purchase commercially available data that would otherwise require a warrant to obtain. The promise not to use AI for domestic surveillance collapses if agencies can lawfully acquire equivalent data through brokers.
The so-called “data broker loophole” allows agencies to buy what they cannot legally seize. High-profile apps and services have faced scrutiny for transmitting sensitive location and behavioral data to third-party brokers that contract with government agencies. Even routine consumer platforms now incorporate automated data-processing features which users neither explicitly requested nor meaningfully control.
The Weather Channel app was caught covertly tracking and selling location data to DHS-linked brokers.
Grindr & Tinder were found sharing sensitive medical data (including HIV status) with the very ad-tech firms the government uses to map “behavioral risks.”
Even Gmail accounts now include AI overviews, signaling surveillance creep into our daily tools.
Against this backdrop, Altman stated on X (formerly Twitter) that “the Department of War displayed a deep respect for safety.”
It’s notable that Sam has adopted the Department of War branding—the unofficial name of the Department of Defense. It is even more consequential that he expresses confidence in safety compliance under the leadership of Defense Secretary Pete Hegseth, who recently posted:
“No stupid rules of engagement, no nation building quagmire, no democracy building exercise, no politically correct wars. We fight to win, and we don’t waste time or lives… War is hell and always will be.”
Hegseth’s comments do not reflect a culture oriented toward restraint or “applicable law.”
Somewhere along the line, Sam’s posture changed. In 2017, Sam tweeted, “I think Trump is terrible and few things would make me happier than him not being president,” comparing certain political tactics to 1930s Germany.
That was the language of an outsider. Today, Sam speaks as a partner to power.
Within days, Sam’s contract with the Pentagon was revised. It is hard to assess whether any new language will make a difference since it still hinges on “applicable law.”
Sam Altman now echoes Peter Thiel’s 2009 view that “democracy and freedom are no longer compatible.” The underlying preference—in both government and business—is the CEO model: centralized authority, streamlined decision-making, minimal friction. Efficiency over accountability. No scrappy unions in business. No messy democracy in government. In the end, he has traded the “Godfather of AI’s” existential warnings for a seat at the table of AI-driven authoritarian power.
What emerges is not merely a tale of one executive’s evolution but an emerging alliance between frontier AI firms and the national security establishment at a moment when oversight mechanisms are fragile and regulation is virtually nonexistent. In Congress, the “Fourth Amendment Is Not For Sale Act” is stalled while Trump pressures the Senate to win the AI race at all costs. With only a few months left until the midterms, his influence may suddenly wane. Maybe. Until then, our best hopes lie elsewhere.
So here we are. Two men—one a defense secretary who publicly embraces apocalyptic war rhetoric; the other, a smooth-talking titan—control the trajectory of surveillance and autonomous military weapons integration that could redefine constitutional boundaries.
Neither is an elected official.
Neither has earned our trust.
Neither really cares what “we the people” or our representatives think.
They work in a regime that treats the law as optional. A president who writes punitive executive orders against law firms, bypasses Congress to wage war, and defies judges on immigration creates an environment where “applicable law” becomes dangerously elastic.
If companies willing to demand safeguards are sidelined—if Anthropic falls—the burden shifts to the courts. In the vacuum of Congressional courage, the final barrier between constitutional protections and large-scale AI intrusion may be a judge with a very short fuse—at least until that judge is overturned by the Supreme Court.
These are some of the Courtside Warriors holding the line against authoritarianism, fighting to save democracy, and winning. Sign up for their newsletters. Support them if you can.
Democracy Forward, Public Citizen, Protect Democracy, Democracy Docket, League of Women Voters, Campaign Legal Center, ACLU, the NAACP Legal Defense Fund, Citizens for Responsibility and Ethics in Washington, Democracy Defenders Fund, Brennan Center for Justice, Common Cause. There are many more.
The Courts—Especially the Supreme Court—Won’t Save Us.
Nevertheless, we’ve got to support our Courtside Warriors any way we can.
Just Security Litigation Tracker
On January 29, 2025, there were 24 legal challenges to Trump Administration actions.
As of March 4, 2026, there are now 673…and counting.


