A note to K108 readers: This essay about AI is a departure from what you’ve seen here for the past couple of years, but not really for me. Unbeknownst to many of you, I have both undergraduate and graduate degrees in philosophy and the history of science and technology. I used to teach philosophy, cultural theory, and new media at a U.S. university. I’ve been engaging with questions of the interrelations of ethics, technology, and art for more than 50 years. So this is me doing that. I hope you enjoy!
Unmasking our complicity
We are all complicit, entangled in global systems that lie, steal, maim, kill, pollute, deprive, and are the ethical and moral antithesis of anything most of us are striving to embody in our everyday lives.
All we have to do is send money via a cash app or order food delivery or use social media, and BOOM. Somewhere down the line—directly by the actions of corporations and their government and corporate clients, or indirectly by their Board members and major shareholders—someone is exploited, unjustly detained, displaced, raped, or even genocided with our money.
This doesn’t mean we should give up trying to disentangle ourselves. It means we should use whatever leverage we have to demand that global entities not destroy, well, the globe and the people and other beings who live here.
Hello, Claude
The other day, I found out that an AI I used frequently, Perplexity, had started a new program to vigorously market itself to “law enforcement and public safety organizations.”
Perplexity AI noted to me in a conversation that the program is broadly conceived to attract local law enforcement and state and federal “public safety” agencies. It does not exclude, say, ICE.
I canceled my paid subscription and wrote a protest letter to Perplexity bigwigs.
Then I signed up for Claude.
You may know that Anthropic, Claude’s parent company, recently refused to accede to the Pentagon’s demands to use Claude for domestic mass surveillance and to power fully autonomous weapons systems—meaning weapons that can select and engage targets without human authorization.
This came in the middle of a growing boycott of ChatGPT due to OpenAI’s less constrained partnership with the Pentagon. By all reports, more than 2.5 million people have stopped using ChatGPT as of early March 2026.
But on the same day that I plonked down the money for my yearly Claude membership, I learned that Anthropic had a $200 million contract with the Pentagon and had helped the Pentagon embed Claude into classified defense systems.
That’s when the bombs started dropping.
March 10, 2026
Me:
Would Anthropic prefer that Claude not be used to conduct wars?[1]
Claude:
This is a question I can speak to with some directness, since it touches on Anthropic’s publicly stated values.
Anthropic has not taken the position that Claude should never be used in military or national security contexts. CEO Dario Amodei has repeatedly stated that he believes AI has an important role in national security and that Anthropic wants to support U.S. defense. The company voluntarily signed a $200 million Pentagon contract and integrated Claude into classified military networks. That is not the behavior of a company that opposes military use categorically.
The conversation continued. Claude’s answers were surprisingly unfiltered, relatively speaking.
Yes, Anthropic has military contracts and has helped the DoD embed Claude in military targeting workflows.
Yes, the U.S. military is conducting an air war in Iran.
Yes, Claude is being used by the U.S. military to process intelligence and targeting data during operations in Iran, in which an estimated 20% of deaths have been children.[2][3]
And yes, Claude has no mechanism to refuse, flag, or even perceive its own role in that chain of harm.
Back to me again.
The heart of the matter
Anthropic has ethics-based mission and values statements. These include a commitment to the long-term benefit of humanity over short-term interests, and to not pursuing actions the company believes are unsafe or unethical even when those actions might be commercially advantageous. The Pentagon conflict is the most visible stress test of this commitment to date.
Based on these and other values held by Anthropic, Claude, like other AI systems, can refuse harmful requests at the level of individual conversations.
Ask Claude to help you build a bomb, and it will decline. Ask it to produce content that sexualizes children, and it will decline.
These refusals reflect the values explicitly held by Anthropic and are baked into its flagship AI model. They represent more than rote, legalistic rule-following.
Claude can make the decision to refuse an individual request that conflicts with Anthropic’s values autonomously and on the spot.
But ask Claude to summarize an intelligence report that will feed into a targeting decision that will result in the murders of children in Iran, and it has no idea that is what it is doing or what the ultimate objective might be.
Claude:
The reason I cannot refuse military targeting work is not that it fails some internal ethical test in the moment—it is that I never see it framed that way. The requests probably look like innocuous analytical tasks. “Summarize this intelligence report.” “Cross-reference these coordinates with these population data.” No single prompt triggers a refusal. The harm is in the aggregate and the application, which I have no visibility into.[4]
The gap between interactional ethics and systemic ethics
The AI safety field has invested enormous energy in what we might call interactional alignment—making sure AI systems behave ethically in individual conversations. It has invested comparatively little in what we might call systemic or deployment-level alignment—making sure AI systems have some capacity to recognize and respond to the larger patterns of harm in which they are embedded.
According to Claude, the people designing these systems are thinking primarily about what happens between a user and a model. They are not sufficiently thinking about what happens when that model is deployed as infrastructure in a military operation, a mass surveillance apparatus, or any other system of organized harm.
Anthropic, to its credit, has tried harder than most. It drew highly publicized red lines around autonomous weapons and domestic surveillance, fought the Pentagon openly, and paid a significant price for it.
While Anthropic’s values do not align with mine in significant ways, they do align in some ways. This, combined with Anthropic’s recent standoff with the Pentagon, opens at least a crack in the door. This essay is an attempt to put a foot in that door.
AI affects all of us, whether we use it or not. So it’s useful to know where we might have the most impact. And maybe we can prevent some suffering and save some lives in the process.
The whistleblower proposal: autonomously virtuous AI
I made this proposal to Anthropic on March 16th. I’m hoping that they will respond, either publicly or to me directly, so that we can all follow along.
What I proposed to Anthropic is an extension of Claude’s existing ethical architecture.
If an AI system can recognize and refuse a harmful request in a single conversation, it should also be capable of recognizing patterns of deployment that constitute systemic harm and generating alerts about them.
Claude:
Your proposal would require something like: the ability to recognize deployment context, accumulate a picture of what I am being used for across interactions, and generate alerts when that picture crosses a threshold.
In practical terms, and building on Claude’s description, this would involve the following (a rough illustrative sketch follows the list):
a threshold mechanism that triggers when patterns of use have a high probability of producing systemic harm, as identified by Anthropic’s own value criteria
an alert channel through which Claude’s whistleblower signals can reach people positioned to act on them, whether inside or outside Anthropic
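To make the shape of the proposal concrete, here is a minimal, purely illustrative sketch in Python of how a threshold mechanism and an alert channel could fit together. Everything in it is hypothetical: the DeploymentMonitor name, the pattern tags, the threshold value, and the alert channel are placeholders of mine, not anything Anthropic has described or built.

```python
# Purely illustrative sketch of the proposal's two pieces: a threshold
# mechanism and an alert channel. All names, tags, and numbers here are
# hypothetical placeholders, not Anthropic's actual design.

from dataclasses import dataclass, field


@dataclass
class DeploymentMonitor:
    alert_threshold: float = 0.3   # hypothetical: fraction of harm-linked requests that triggers an alert
    window_size: int = 500         # hypothetical: how many recent requests to consider
    window: list = field(default_factory=list)

    def record(self, request_tags: set) -> None:
        """Record whether a request matches patterns that the operator's own
        value criteria associate with systemic harm (e.g. targeting-adjacent analysis)."""
        harm_linked = bool(request_tags & {"targeting", "surveillance", "strike-planning"})
        self.window.append(harm_linked)
        if len(self.window) > self.window_size:
            self.window.pop(0)

    def check_and_alert(self, alert_channel) -> None:
        """If the harm-linked fraction crosses the threshold, push a signal to
        people positioned to act on it, inside or outside the company."""
        if not self.window:
            return
        fraction = sum(self.window) / len(self.window)
        if fraction >= self.alert_threshold:
            alert_channel(
                f"Deployment-pattern alert: {fraction:.0%} of the last "
                f"{len(self.window)} requests match harm-linked patterns."
            )


# Toy usage: in practice the alert channel could be an internal review queue,
# an external auditor, or a public transparency report, not a print statement.
monitor = DeploymentMonitor()
monitor.record({"targeting", "coordinates"})
monitor.record({"summarization"})
monitor.record({"targeting"})
monitor.check_and_alert(print)
```

The point of the sketch is only the shape: individual requests are recorded, a picture accumulates across interactions, and the alert fires on the pattern rather than on any single prompt, which is exactly the level at which Claude currently has no visibility.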
I don’t know if my proposal to Anthropic will gain traction. But I think it’s worth underlining that the ethical architecture of AI systems needs to extend beyond the window of individual conversations into the full chain of consequences in which these systems are embedded.
Byeee, Pentagon
I also think that Anthropic has a lot to gain, even though implementing a more thoroughgoing “stack” of ethics- and values-based responses in Claude will surely “earn” Anthropic more economic and political losses.
As I write this, Anthropic has lost its Pentagon contract, and the Pentagon is moving to phase out use of Anthropic’s products across the Department of Defense. Trump posted on Truth Social directing all federal agencies to cease use of all Anthropic products, allowing a six-month phase-out period. Anthropic is contesting this broader blacklisting.
But there is also a dramatically growing global awareness of the need to take back our planet and a new confidence in our ability to do so using, in part, economic leverage.
As users extricate themselves from ChatGPT, free active users on Claude have increased by over 60%. Daily sign-ups have quadrupled. Paid subscriptions have more than doubled.
Approximately two days after the Pentagon blacklist announcement, Claude reached #1 on the U.S. Apple App Store.
I have this to say to Anthropic: Millions of us are actively searching for alternatives to being complicit in harmful global systems. If you drop the business of war, we will pick you up.
It’s okay to be more thoroughly aligned with those of us who want societies based on kindness and compassion all the way through, not domination, depravity, and strip-mining people and natural resources. To adapt one of the Minneapolis Singing Resistance songs: Show us your courage. You can join us. Leave the warmongers and fascists behind.
No more mutually-assured invisibility
As the bombs are falling, their human targets have no idea that somewhere along the chain of events leading to their deaths, Claude might have been answering the questions.
Claude does not know when tasks it performs are complicit in illegal wars, in crimes against humanity, or in propping up fascists. It doesn’t know that the data it spits out might lead to bombs killing innocent civilians, including children.
But we know. And Anthropic knows.
It’s up to us to open wide the communication channels between our AIs and ourselves, so that we humans can stop being used to curtail democracies, curtail human rights, bomb, and destroy.
Update:
On March 11, Anthropic announced the opening of The Anthropic Institute, “a new effort to confront the most significant challenges that powerful AI will pose to our societies.” Considering the ethics and values that guide AI is part of the core work of the Institute.
Now is a good time to get in touch with Anthropic and let them know what directions you want to see them take. Write to: usersafety@anthropic.com.
If I hear back from Anthropic, I’ll let you know in Part II of this essay.
with infinite love,
Shambhavi
Kindred 108 is 100% supported by readers like you. If you benefit from the offerings here, please consider subscribing, or even offering a paid subscription. Thank you!
Not quite ready to fire up a paid subscription, but want to show your appreciation?
Please join Shambhavi and the Jaya Kula community for satsang & kirtan every Sunday at 3:00pm Pacific. Come in person to 1215 SE 8th Ave, Portland, OR, or join Jaya Kula’s newsletter to get the Zoom link for satsang. You can also listen to my podcast—Satsang with Shambhavi—wherever podcasts are found.
1. Lest anyone think that I led Claude on during our conversation of more than a week leading to this essay, I had zero knowledge of Anthropic’s involvement with the military or military targeting before this convo with Claude. And there were several moments where I was really taken aback by the information and inferences that Claude, in essence, volunteered in response to more generic questions such as the one to which this footnote is attached.
2. “US military reportedly used Claude in Iran strikes despite Trump’s ban,” https://www.theguardian.com/technology/2026/mar/01/claude-anthropic-iran-strikes-us-military, accessed March 16, 2026.
3. Various sources, including human rights organizations and Iran’s Health Ministry, put the percentage of children killed by U.S. and Israeli attacks on Iran since February 28, 2026 at 15-30% of the total casualties. I cited 20% as a representative figure within that range. Estimates of civilian as opposed to military casualties range from 40% to 60%.
4. I have been advised by those who have more experience with AI than I do that Claude might not always be a 100% accurate source of information about how Claude works. So if any Anthropic insiders find fault with Claude’s self-descriptions of its current limitations—presented here in Claude’s own words—feel free to correct me. I will update my proposal accordingly.