Agency Is Frame-Dependent: A New Perspective on AI Agents

Categories: AI, System Thinking, AI Concepts

Author: Quang Ngoc DUONG

Published: February 12, 2025

We work with complex systems every day. Think about a self-driving car navigating traffic, an automated inventory system reordering stock, or even a simple thermostat keeping your office comfortable. We often talk about these systems “acting” or “deciding.” But what does that really mean? Do they possess agency – the ability to act purposefully towards a goal?

A thought-provoking paper, “Agency Is Frame-Dependent” by David Abel and colleagues, suggests that agency isn’t a simple yes/no property. Instead, they argue it fundamentally depends on how you look at the system – your analytical “frame” or perspective. Let’s unpack this idea and see why it’s crucial for anyone working with technology, especially AI and automation.

What We Usually Mean by Agency

Generally, we think of an agent as something that:

  1. Perceives its environment.
  2. Takes actions based on those perceptions.
  3. Does so to achieve a specific goal. (A minimal code sketch of this loop follows below.)
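
Here is a minimal Python sketch of that perceive-act-goal loop. It is an illustration only; the class and method names (Thermostat, perceive, act) are invented here, not taken from the paper.

```python
class Thermostat:
    """Toy 'agent': senses the temperature, acts toward a target."""

    def __init__(self, target: float = 22.0):
        self.target = target  # the goal (or just a setting we imposed?)

    def perceive(self, room_temp: float) -> float:
        # 1. Perceive the environment (here, just read the temperature).
        return room_temp

    def act(self, observation: float) -> str:
        # 2. and 3. Act on the perception in service of the goal.
        return "heat_on" if observation < self.target else "heat_off"


thermostat = Thermostat(target=22.0)
for temp in [19.5, 21.0, 22.5]:
    print(temp, "->", thermostat.act(thermostat.perceive(temp)))
# 19.5 -> heat_on
# 21.0 -> heat_on
# 22.5 -> heat_off
```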

Examples:

  • A self-driving car perceives traffic through its sensors and steers, brakes, or accelerates to reach its destination.
  • An automated inventory system monitors stock levels and places reorders to keep shelves filled.
  • A thermostat senses the room temperature and switches the heating on or off to hold a set point.

On the surface, these seem like clear cases of agency. But the paper challenges us to dig deeper.

The Core Idea: Agency Depends on Your Analytical Frame

The central argument is this: Whether you consider something an agent isn’t inherent to the system itself, but depends on the perspective or “frame” you use to analyze it.

Think of a “frame” as the set of assumptions and boundaries you apply when looking at a system. Let’s use the thermostat example:

  • Agent frame: the thermostat perceives the room temperature, acts by switching the heater, and pursues the goal of holding 22°C. On this view, it ticks every box for agency.
  • Mechanism frame: the thermostat is a switch that closes a circuit whenever a sensor reading crosses a threshold. Nothing is “pursued”; a fixed rule is mechanically executed.

Which view is “correct”? The paper argues both can be valid, depending on the frame you adopt for your analysis.
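
Notably, the two frames can describe literally the same code. The sketch below is hypothetical: one function written in “mechanism” vocabulary and one class written in “agent” vocabulary, computing identical behavior.

```python
# Same behavior, two vocabularies (hypothetical illustration).

# Mechanism frame: a stateless rule mapping a sensor reading to a switch state.
def threshold_switch(reading: float, threshold: float = 22.0) -> bool:
    return reading < threshold  # True = circuit closed (heater on)

# Agent frame: the same rule, narrated as perception, goal, and action.
class ThermostatAgent:
    def __init__(self, goal_temp: float = 22.0):
        self.goal_temp = goal_temp  # "its goal"

    def decide(self, perceived_temp: float) -> bool:
        # "Acts to move the world toward its goal."
        return perceived_temp < self.goal_temp

# Both descriptions agree on every input; the difference lies in the frame,
# not in the system.
agent = ThermostatAgent()
assert all(threshold_switch(t) == agent.decide(t) for t in [18.0, 22.0, 25.0])
```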

Four Key Aspects of Agency – And Why They’re Frame-Dependent

The paper highlights four components often used to define agency, arguing each one is influenced by your chosen frame (a toy code illustration follows the list):

  1. Individuation (Defining the Boundary): Where does the “system” end and the “environment” begin?
    • Frame-Dependence: Is the thermostat just the wall unit? Or does it include the furnace, vents, and wiring? Is a software module an agent, or is the entire application the agent? How you draw the boundary changes what you’re analyzing.
  2. Source of Action (Initiation): Does the system initiate its own actions, or merely react?
    • Frame-Dependence: Is the thermostat acting, or is it simply being pushed by temperature fluctuations according to fixed rules? Does a chatbot generate a response, or simply retrieve a statistically likely pattern? Depends on whether you focus on internal mechanics or external triggers.
  3. Goals / Normativity (Purpose): Does the system genuinely have an internal goal guiding its actions?
    • Frame-Dependence: Is maintaining 22°C a real goal for the thermostat, or just a setting we imposed, with the system blindly following instructions? Can any predictable system be described “as if” it has a goal (e.g., “a rock’s goal is to roll downhill”)? Deciding what constitutes a “meaningful” goal requires an external judgment (part of the frame).
  4. Steering / Adaptivity (Responsiveness): Does the system actively adjust its behavior based on feedback to better reach its goal?
    • Frame-Dependence: Does the thermostat “adapt” by turning on/off? Or is that just a fixed, non-adaptive response programmed into it? True adaptation might imply changing the strategy (e.g., learning to pre-heat), not just reacting. What counts as meaningful adaptation depends on your criteria (the frame).

Because each of these ingredients depends on your analytical frame, the overall conclusion about whether something has agency also becomes frame-dependent.

Why This Matters for Technical & Functional Roles (Especially AI/ML)

This isn’t just philosophical navel-gazing. Recognizing that agency is frame-dependent has practical consequences:

  • System design: where you draw the boundary (one module, the whole pipeline, the pipeline plus its operators) changes what you call “the agent” and therefore what you test, monitor, and hold accountable.
  • Evaluation: claims like “the model pursues goals” or “the system adapts” only become testable once you state the frame; otherwise two teams can look at the same system and disagree without either being wrong.
  • Communication: saying an AI “decided” something is a framing choice. Being explicit about that frame keeps stakeholder expectations realistic.

Conclusion: Ask “From Which Perspective?” Not Just “Is It an Agent?”

The key takeaway is that agency isn’t a fixed, objective property like mass or temperature. It’s a relative concept. Instead of a simple yes/no, the more useful question becomes: “From which analytical frame does this system appear to exhibit agency, and why?”

This perspective encourages clearer thinking about the capabilities and limitations of the complex systems we build and interact with. It pushes us to be more precise in our language and analysis, especially as AI and automation become more sophisticated.

What’s your take? When you work with automated systems or AI, do you consider them agents? Does thinking about different analytical frames change how you view their behavior? Share your thoughts!

Further Reading:

  • Abel, D., et al. (2025). “Agency Is Frame-Dependent.” arXiv:2502.04403. https://arxiv.org/abs/2502.04403