Are Monsters Real?


Posted on May 5, 2026

In 1942, Isaac Asimov published a short story called Runaround, featuring a robot named ‘Speedy’, sent to collect minerals on Mercury. Speedy, unfortunately, gets stuck in a loop: caught between two of his own programmed laws, endlessly circling a pool of selenium, unable to break free. The story gave the world Asimov’s Three Laws of Robotics: (i) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (ii) a robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; and (iii) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. The genius of Runaround is that Asimov didn’t use these laws to show what happens when robots work. He used them to show what happens when they don’t.
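
Speedy’s deadlock is, at bottom, a tug-of-war between two drives of comparable strength, settling into an equilibrium instead of a decision. A purely illustrative Python sketch (the numbers and drive functions here are invented for this post, not drawn from Asimov) makes the loop visible:

    # Toy model of Speedy's deadlock: an order pulls the robot toward the
    # selenium pool (Second Law) while danger pushes it away (Third Law).
    # Where the two drives balance, the robot stalls instead of deciding.

    def step(distance: float) -> float:
        order_pull = 1.0                # a weakly phrased order: constant pull inward
        danger_push = 3.0 / distance    # self-preservation: grows as the pool nears
        return distance + 0.1 * (danger_push - order_pull)

    d = 10.0
    for _ in range(200):
        d = step(d)
    print(f"Speedy settles at distance {d:.2f} and circles")  # ~3.0, where the drives balance

Asimov’s resolution amounts to adding a third, higher-priority drive: put a human in danger, and the First Law overrides the balance entirely.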

Eighty-four years later, Speedy is no longer fictional. He has a sleek interface, a natural language model, and a multimillion-dollar government contract. And we are still, very much, watching him circle the selenium pool.

Laws for Machines (Written by Humans, For Humans)

Mary Shelley got there first, of course. Frankenstein, published in 1818, a full century before the word “robot” was even coined, posed the question that has haunted every iteration of the conversation since: when you build something powerful enough to act in the world, who is responsible for what it does? The monster doesn’t sue Frankenstein. But perhaps it should have.

Jack M. Balkin, Knight Professor of Constitutional Law at Yale, picked up this thread in a 2017 essay titled The Three Laws of Robotics in the Age of Big Data, published in the Ohio State Law Journal. His argument is elegant and worth sitting with. The robots, Balkin insists, are not the problem. The humans who build, program, and deploy them are. Asimov wrote laws to constrain the machines. Balkin writes laws to constrain the people holding the remote control.

His framework has three pillars. First, operators of algorithms and AI agents are information fiduciaries. They owe special duties of good faith and fair dealing to their end-users. Second, even those who are not formal fiduciaries still carry duties toward the general public. Third, and most bracingly, no company may leverage its computational advantages to offload the costs of its technology onto everyone else: what Balkin calls becoming an algorithmic nuisance.

The fiduciary concept is the interesting one. We already accept it in other domains without much fuss. Doctors owe you a duty of care. Lawyers cannot act against your interests. Financial advisers must put your money ahead of their commissions. When you speak to an AI assistant, Balkin argues, you are not simply talking to an appliance. You are speaking to a corporation that is privy to the most intimate details of your life, and that corporation should have a fiduciary duty to deal with you in a trustworthy fashion.

This was the theoretical case, written in 2017 before most people had heard of large language models. Then reality caught up with the theory and made it considerably more dramatic.

The Deadline Was 5:01 p.m.

On February 24, 2026, US Defense Secretary Pete Hegseth sat down with Dario Amodei, CEO of Anthropic, and gave him a deadline. Relent by 5:01 p.m. on Friday, February 27, and allow unrestricted use of Claude, Anthropic’s AI model, for “all legal purposes.” If not, the Pentagon would terminate Anthropic’s contract and deem the company a supply chain risk. That designation, as it happens, is typically stamped on foreign adversaries with connections to hostile states. Not on American startups operating out of San Francisco.

The 5:01 p.m. deadline is a nice touch. One imagines a Pentagon aide carefully choosing the extra minute for maximum menace and high drama.

Amodei’s objections were specific. Anthropic was unwilling to allow Claude to be used for AI-controlled weapons and for mass domestic surveillance of American citizens. These were not vague philosophical anxieties. Anthropic’s position was that today’s frontier AI models are simply not reliable enough for fully autonomous weapons, and that mass domestic surveillance of Americans constitutes a violation of fundamental rights. Amodei held his red lines. The Pentagon fired back, calling the company “sanctimonious” and accusing Amodei of harboring a “God-complex” and of wanting to “personally control the US Military.”

Friday came. Trump ordered federal agencies to immediately cease their use of Anthropic’s tools. Hegseth declared the company a supply chain risk. The all-caps Truth Social post followed, calling Anthropic a “radical left, woke company” that would never be allowed to dictate how the great American military fights its wars.

(Meanwhile, Claude shot to the top of Apple’s App Store. Consumers, it turns out, have their own form of democratic participation.)

The Other CEO

Within hours of Anthropic’s blacklisting, Sam Altman, OpenAI’s CEO and Amodei’s former colleague and longtime rival, announced that OpenAI had struck a deal with the Pentagon. The agreement allowed the Department of War to deploy OpenAI’s models in its classified systems, with Altman claiming he had secured restrictions on mass surveillance and autonomous weapons similar to those Anthropic had sought.

The timing was, at a minimum, unfortunate. Altman himself later admitted the deal “looked opportunistic and sloppy.” He said OpenAI had moved quickly to de-escalate, but critics, including some of his own employees, weren’t buying it. One scholar at the Wharton School accused OpenAI of undercutting Anthropic at a critical moment and said the rest of the AI industry had failed to close ranks in solidarity. Chalk graffiti appeared outside OpenAI’s San Francisco offices. (The sidewalk outside Anthropic’s offices was, by contrast, largely supportive.) Altman subsequently renegotiated his deal, adding explicit surveillance prohibitions. He acknowledged the original agreement had been “rushed.”

What we have here are two men who, by their own accounts, share the same principles. Both publicly opposed mass surveillance and autonomous weapons, but behaved in completely different ways when power applied pressure. One held the line and paid for it commercially. The other signed the contract and spent the next week explaining himself on social media.

The contrast is philosophically tidy. It is also a case study in what happens to stated values when the stakes become real.

On Fiduciary Duty, Public Duty, and the Shape of Power

Was Amodei’s refusal an exercise of fiduciary duty? Technically speaking, under Balkin’s framework, the fiduciary relationship runs between Anthropic and its users: the people who interact with Claude, who entrust it with their questions, their documents, their secrets. The duty there is one of loyalty and good faith. Allowing Claude to be repurposed as a surveillance instrument against those very users would be, in Balkin’s terms, a textbook breach. The company would be deploying its computational advantage to harm the people who gave it their trust in the first place: the definition of an algorithmic nuisance.

But Amodei invoked something larger. He specifically objected to the use of Claude for mass surveillance of American citizens and for fully autonomous weapons systems. Those are not his customers. Those are the public. That is Balkin’s second law: the duty of AI operators toward uninvolved third parties, people with whom they have no contract, but who bear the consequences of the technology regardless. You might call it a public duty. You might also call it a conscience. The two are not always as different as lawyers would prefer.

Philosophy has its own vocabulary for what happens when power goes unchecked. Montesquieu argued in The Spirit of the Laws (1748) that power must be distributed to prevent tyranny. Not because rulers are necessarily malicious, but because concentration makes malice easy and accountability hard. John Locke, writing in his Second Treatise of Civil Government (1690), held that even sovereign power carries a trust relationship toward the governed; breach that trust and legitimacy dissolves. The Defense Production Act, which the Pentagon threatened to invoke against Anthropic (effectively compelling a private company to produce technology on the government’s terms), is precisely the kind of unchecked executive authority those frameworks were designed to resist. One legal analyst described Title I of the Act as “a more straightforwardly Soviet power,” allowing the government to directly command the production of industrial goods.

And here the questions multiply. Can an AI company set limits on how the government uses its technology? A traditional defense contractor cannot tell the Pentagon where to point the missiles. But AI ethics, as it turns out, are set through contracts, procurement rules, and actual behavior, not just declarations of principle. Anthropic was not refusing to help the military. It was refusing to help the military do two specific things it believed were dangerous and unconstitutional. As Amodei put it in an interview with CBS News: “One of the things about a free market and free enterprise is different folks can provide different products under different principles.”

A federal judge, hearing Anthropic’s lawsuit on March 24, appeared to agree with at least part of this. Judge Rita Lin called the Pentagon’s ban “troubling” and questioned whether the supply chain risk designation was designed to punish Anthropic for expressing its principles in public rather than for any genuine national security concern. “I don’t know if it’s murder,” she said, “but it looks like an attempt to cripple Anthropic.”

Judges rarely deploy the word “murder” in passing.

Speedy, Still Circling

Asimov’s robot Speedy was eventually freed from his loop by a human who put himself in danger, forcing the First Law to override everything else. The solution required someone to take a risk in order to break the deadlock.

There is something in that worth holding onto. The deadlock we are watching now, between the power of AI, the demands of the state, and the duties owed to the public, will not resolve itself. It requires people in positions of power to be willing to take a hit. Balkin’s laws are elegant, but they are still theoretical. The fiduciary duties he describes do not yet exist in statute. The public duties he outlines are not yet enforced by any court. What we have, for now, are the choices individual people make when the deadline is 5:01 p.m. on a Friday, and the consequences are real.

One CEO held. One CEO blinked. And the rest of us are still watching Speedy circle.

By Vishmila Fernando (Junior Researcher, LIRNEasia)
Vishmila is a member of the Data, Algorithms and Policy (DAP) team at LIRNEasia, which participates in the policy dialogue around our algorithmically inclined society with critical research and technical expertise.


References

  • Isaac Asimov, Runaround (1942), collected in I, Robot (1950)
  • Jack M. Balkin, The Three Laws of Robotics in the Age of Big Data, 78 Ohio State Law Journal 1217 (2017) – SSRN
  • Reason, Anthropic CEO Refuses Pentagon Demands to Remove Safeguards on Military AI (Feb. 27, 2026) – reason.com
  • OPB/AP, Anthropic Refuses to Bend to Pentagon on AI Safeguards (Feb. 27, 2026) – opb.org
  • Tech Policy Press, A Timeline of the Anthropic-Pentagon Dispute (updated March 2026) – techpolicy.press
  • MS NOW / NBC, Federal Judge Calls Pentagon’s Ban of Anthropic ‘Troubling’ (March 24, 2026) – ms.now
  • TIME, How Anthropic Became the Most Disruptive Company in the World (March 11, 2026) – time.com
  • CNN, OpenAI Strikes Deal with Pentagon Hours After Rival Anthropic Was Blacklisted (Feb. 27, 2026) – cnn.com
  • Fortune, Sam Altman Says OpenAI Renegotiating ‘Opportunistic and Sloppy’ Deal with the Pentagon (March 3, 2026) – fortune.com
  • CBS News, Anthropic CEO Says He’s Sticking to AI “Red Lines” (Feb. 27, 2026) – cbsnews.com
  • The Conversation, From Anthropic to Iran: Who Sets the Limits on AI’s Use in War and Surveillance? (March 2026) – theconversation.com
  • NPR, Anthropic Sues the Trump Administration Over ‘Supply Chain Risk’ Label (March 9, 2026) – npr.org
