A Safe Harbor for AI Agents

As companies hire AI agent "employees", basic liability issues need clarity

We’re back, and Updates

Hi Readers! After a short hiatus, Road to Artificia is back. This is the first issue of 2025, so let me wish you all a Happy New Year. I have a couple of updates before getting to today’s post.

  • First, housekeeping: the newsletter is now being sent from “[email protected]” instead of “[email protected]”.

  • Next, I recently recorded a guest appearance on the London Futurists podcast, talking about how society will share the spoils with humans once they are out-competed by AI in most roles, on both cost and quality. It’s out now, and I found our conversation genuinely engaging. Well worth a listen:

The topic of discussion on London Futurists was one that I wrote about in a very early issue of Road to Artificia. We explored and debated my HumaneRank proposal for post-AI wealth distribution, as well as other topics.

The original article on this topic is here:

Ok, on to today’s issue…

A Safe Harbor for AI Agents

Disclaimer: I am not a lawyer.

This year we will see a rapid expansion of the AI agent ecosystem. In the coming weeks I hope to examine various aspects of the question: What are the most important enablers for agents to grow the economy and deliver rapid benefits in areas like software, healthcare delivery, and science?

We'll take a look at specific advances (which continue to accelerate), the need to build a more comprehensive tool calling ecosystem, and how the agent authorization space is likely to develop.

But today I want to talk about workable legal frameworks for AI agents.

Approaching a Liability Framework for AI Agents

Over the last decade software “ate the world”, remaking the delivery of services across a swath of industries such as transportation and entertainment. This all proceeded without encountering any true “contradictions” in the law surrounding liability for everyday actions inside companies. The automations in these software systems may have replaced or remade formerly manual processes, but they were still deterministic and overseen by teams of people. AI agents introduced into companies are different.

AI agents can make what are in practice non-deterministic decisions, driven by English-language directions, with all the potential for poor specification and miscommunication that entails. In practical terms, providers will launch with a default requirement of human approval for every consequential action. This will be a pain for users, and after a short time they will opt to turn it off, at which point the provider will require an acknowledgement of full liability for the actions the user is declining to approve individually.
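To make that trade-off concrete, here is a minimal sketch of the approval flow described above. It assumes a provider-defined notion of a “consequential” action, and every name in it (ActionRequest, ApprovalPolicy, execute_action) is hypothetical rather than drawn from any real agent framework:

```python
from dataclasses import dataclass
from typing import Callable

# All names here are hypothetical and not tied to any real agent framework.

@dataclass
class ActionRequest:
    description: str     # plain-language summary shown to a human approver
    consequential: bool  # provider-defined: moves money, sends email, deletes data, etc.

@dataclass
class ApprovalPolicy:
    auto_approve: bool = False            # per-action human approval is the launch default
    liability_acknowledged: bool = False  # must be accepted before approval can be waived

    def waive_per_action_approval(self) -> None:
        # The provider only lets the user skip individual approvals after an explicit
        # acknowledgement that liability for those actions shifts to the user.
        if not self.liability_acknowledged:
            raise PermissionError("Acknowledge liability before disabling per-action approval.")
        self.auto_approve = True

def execute_action(action: ActionRequest,
                   policy: ApprovalPolicy,
                   ask_human: Callable[[ActionRequest], bool]) -> str:
    """Gate consequential agent actions on human approval unless it has been waived."""
    if action.consequential and not policy.auto_approve:
        if not ask_human(action):
            return f"blocked: {action.description}"
    return f"executed: {action.description}"

# Example: with the default policy, a consequential action is routed to a human.
policy = ApprovalPolicy()
wire = ActionRequest(description="send $5,000 wire to vendor", consequential=True)
print(execute_action(wire, policy, ask_human=lambda a: False))  # -> blocked: ...
```

The point of the sketch is the ordering: the liability acknowledgement is a precondition for waiving per-action approval, not a checkbox buried somewhere else in the settings.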

The breadth of actions in which liability is a concern is huge, and this reconfiguration of teams from all-human to mixed human/AI will invite cases where parties on either side of an agent interaction argue that it is the agent technology provider that is liable, whatever the contracts may say.

We’ve been here before

The Digital Millennium Copyright Act (DMCA), passed in 1998, laid a key legal pillar that enabled the development of online services in the U.S.1 Among many other provisions, its “Safe Harbor” provision in 17 U.S. Code § 512 (“section 512”) sets up a trade: a liability shield for service providers, in exchange for good-faith handling of problems raised to the service provider.

Basically, service providers are not liable for copyright infringement in user-uploaded material on their systems, as long as they promptly remove the material when notified and have a registered agent to receive the notifications.

Without this trade, many of the most important online services today could not operate and would probably never have existed, particularly those that let users publish material on the service. The concern over liability was not just theoretical.2 In Stratton Oakmont, Inc. v. Prodigy Services Co., Prodigy (an early online service) was found liable as the publisher of content created by its users, because it exercised editorial control over the messages on its bulletin boards.

Simpler times, when a bill could pass the U.S. Senate 99-0

Under the prevailing law, by making good-faith “Good Samaritan” moderation decisions to take down certain content, service providers could become liable for all copyright violations on the system.3 That would clearly be a perverse outcome. Section 512 provided the certainty needed to build and operate online services containing user-generated content.

What a 2025 AI agent liability framework could look like

Now, over 25 years later, we’re on the cusp of another transformative technology. In order to provide certainty for providers, users, and the country at large, I would argue that Congress should pass an analogue to DMCA section 512, one that clearly defines a practical model for assigning responsibility for agent actions and avoids leaving the question to a patchwork of inconsistent and slowly-emerging court decisions.

Such an analogue would consist of something like the following:

Subscribe to keep reading

This content is free, but you must be subscribed to Road to Artificia to continue reading.
