We work with complex systems every day. Think about a self-driving car navigating traffic, an automated inventory system reordering stock, or even a simple thermostat keeping your office comfortable. We often talk about these systems “acting” or “deciding.” But what does that really mean? Do they possess agency – the ability to act purposefully towards a goal?
A thought-provoking paper, “Agency Is Frame-Dependent” by David Abel and colleagues, suggests that agency isn’t a simple yes/no property. Instead, they argue it fundamentally depends on how you look at the system – your analytical “frame” or perspective. Let’s unpack this idea and see why it’s crucial for anyone working with technology, especially AI and automation.
What We Usually Mean by Agency
Generally, we think of an agent as something that:
- Perceives its environment.
- Takes actions based on those perceptions.
- Does so to achieve a specific goal.
Examples:
- Self-driving car: Senses cars/pedestrians (perception), brakes/steers (action), to reach Point B safely (goal).
- Thermostat: Senses temperature (perception), turns heater on/off (action), to maintain 22°C (goal).
- Factory Robot: Senses parts (perception), welds/moves them (action), to assemble a product (goal).
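To make that three-part definition concrete, here is a minimal sketch of a perceive-act loop for the thermostat. It is purely illustrative (not code from the paper): `read_temperature`, `set_heater`, and `TARGET_C` are stand-ins for whatever sensors, actuators, and setpoints a real system would expose.

```python
import random
import time

TARGET_C = 22.0  # the "goal" we have imposed on the system


def read_temperature() -> float:
    """Perception: stand-in for a real temperature sensor."""
    return random.uniform(15.0, 30.0)  # placeholder reading


def set_heater(on: bool) -> None:
    """Action: stand-in for a real actuator."""
    print("heater", "on" if on else "off")


def run_thermostat(steps: int = 5) -> None:
    """Perceive, then act, in service of the 22 °C target."""
    for _ in range(steps):
        temp = read_temperature()      # perceive the environment
        set_heater(temp < TARGET_C)    # act on that perception
        time.sleep(0.1)                # wait before sensing again


if __name__ == "__main__":
    run_thermostat()
```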
On the surface, these seem like clear cases of agency. But the paper challenges us to dig deeper.
The Core Idea: Agency Depends on Your Analytical Frame
The central argument is this: Whether you consider something an agent isn’t inherent to the system itself, but depends on the perspective or “frame” you use to analyze it.
Think of a “frame” as the set of assumptions and boundaries you apply when looking at a system. Let’s use the thermostat example:
- Frame 1: The “Goal-Oriented” View: From this perspective, the thermostat is an agent. It senses the environment (temperature), takes actions (heating/cooling), and actively works towards a goal (maintaining 22°C). It seems purposeful.
- Frame 2: The “Mechanical Rules” View: From this perspective, the thermostat is not an agent. It doesn’t “want” anything. It’s just a mechanism executing a simple, pre-programmed rule: `IF temp < 22 THEN turn_heater_on ELSE turn_heater_off`. There’s no real goal-seeking, just deterministic stimulus-response.
Which view is “correct”? The paper argues both can be valid, depending on the frame you adopt for your analysis.
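One way to feel the force of this is to notice that both descriptions can fit the very same behavior. The toy sketch below (my illustration, not code from the paper) implements the thermostat twice: once as a bare rule, and once in goal-and-error language, like a simple setpoint controller. Names such as `TARGET_C` and `thermostat_goal_seeking` are invented for the example.

```python
TARGET_C = 22.0  # the setpoint we impose on the system


def thermostat_rule(temp_c: float) -> bool:
    """Frame 2 reading: a fixed, pre-programmed stimulus-response rule."""
    return temp_c < 22.0  # True means "heater on"


def thermostat_goal_seeking(temp_c: float) -> bool:
    """Frame 1 reading: measure the gap to the goal, act to close it."""
    error = TARGET_C - temp_c  # how far we are from the "goal" state
    return error > 0           # heat whenever we fall short of the setpoint


if __name__ == "__main__":
    for t in (18.0, 21.9, 22.0, 25.0):
        assert thermostat_rule(t) == thermostat_goal_seeking(t)
    print("Identical behavior, two framings.")
```

The input-output behavior is identical; what changes is the vocabulary used to describe it, and that vocabulary is exactly what a frame supplies.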
Four Key Aspects of Agency – And Why They’re Frame-Dependent
The paper highlights four components often used to define agency, arguing each one is influenced by your chosen frame:
- Individuation (Defining the Boundary): Where does the “system” end and the “environment” begin?
  - Frame-Dependence: Is the thermostat just the wall unit? Or does it include the furnace, vents, and wiring? Is a software module an agent, or is the entire application the agent? How you draw the boundary changes what you’re analyzing.
- Source of Action (Initiation): Does the system initiate its own actions, or merely react?
  - Frame-Dependence: Is the thermostat acting, or is it simply being pushed by temperature fluctuations according to fixed rules? Does a chatbot generate a response, or simply retrieve a statistically likely pattern? It depends on whether you focus on internal mechanics or external triggers.
- Goals / Normativity (Purpose): Does the system genuinely have an internal goal guiding its actions?
  - Frame-Dependence: Is maintaining 22°C a real goal for the thermostat, or just a setting we imposed, with the system blindly following instructions? Can any predictable system be described “as if” it has a goal (e.g., “a rock’s goal is to roll downhill”)? Deciding what constitutes a “meaningful” goal requires an external judgment (part of the frame).
- Steering / Adaptivity (Responsiveness): Does the system actively adjust its behavior based on feedback to better reach its goal?
  - Frame-Dependence: Does the thermostat “adapt” by turning the heater on and off? Or is that just a fixed, non-adaptive response programmed into it? True adaptation might imply changing the strategy (e.g., learning to pre-heat), not just reacting; see the sketch after this list. What counts as meaningful adaptation depends on your criteria (the frame).
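Here is a hedged sketch of that distinction (again my own illustration, not from the paper): a fixed thermostat that never changes its strategy next to one that adjusts its switching threshold from feedback, a crude stand-in for “learning to pre-heat.” The class names and the `feedback` update rule are invented for the example.

```python
class FixedThermostat:
    """Reacts, but never changes its strategy."""

    def decide(self, temp_c: float) -> bool:
        return temp_c < 22.0  # the same response, forever


class AdaptiveThermostat:
    """Adjusts *how* it reacts, based on feedback about past performance."""

    def __init__(self, target_c: float = 22.0) -> None:
        self.target_c = target_c
        self.threshold_c = target_c  # where we currently switch the heater on

    def decide(self, temp_c: float) -> bool:
        return temp_c < self.threshold_c

    def feedback(self, achieved_temp_c: float) -> None:
        # If the room keeps ending up below target, start heating earlier
        # (raise the threshold); if it overshoots, start later.
        self.threshold_c += 0.1 * (self.target_c - achieved_temp_c)


if __name__ == "__main__":
    adaptive = AdaptiveThermostat()
    for achieved in (20.5, 20.8, 21.0):  # room repeatedly falls short of 22 °C
        adaptive.feedback(achieved)
    print(FixedThermostat().decide(22.3), adaptive.decide(22.3))  # False True
```

Whether that threshold update counts as genuine adaptation, or just a slightly larger fixed rule, is once more a judgment made within a frame.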
Because each of these ingredients depends on your analytical frame, the overall conclusion about whether something has agency also becomes frame-dependent.
Why This Matters for Technical & Functional Roles (Especially AI/ML)
This isn’t just philosophical navel-gazing. Recognizing that agency is frame-dependent has practical consequences:
- Evaluating AI Systems: We often call AI models “agents.” But are they agents that set their own goals (like a human), or sophisticated tools executing complex instructions towards our goals? Understanding the frame helps clarify what kind of agency (if any) we’re talking about. Is ChatGPT “understanding” and “deciding,” or is it executing a complex pattern-matching function? The answer depends on the frame you adopt.
- Responsibility and Ethics: If an AI system causes harm, who is responsible? If agency depends on the frame, assigning blame becomes more complex. Was it an autonomous agent making a bad decision, or a tool malfunctioning based on its design and programming (viewed from different frames)?
- System Design and Requirements: When designing automated systems, being clear about the intended frame of analysis helps. Are we building a system that should appear adaptive and goal-oriented (Frame 1), or a reliable, predictable mechanism (Frame 2)? This influences design choices and testing.
- Communication: When discussing system capabilities (e.g., with stakeholders or other teams), explicitly stating the perspective (“From the perspective of achieving user goals, this system acts as an agent…”) avoids ambiguity.
Conclusion: Ask “From Which Perspective?” Not Just “Is It an Agent?”
The key takeaway is that agency isn’t a fixed, objective property like mass or temperature. It’s a relative concept. Instead of a simple yes/no, the more useful question becomes: “From which analytical frame does this system appear to exhibit agency, and why?”
This perspective encourages clearer thinking about the capabilities and limitations of the complex systems we build and interact with. It pushes us to be more precise in our language and analysis, especially as AI and automation become more sophisticated.
What’s your take? When you work with automated systems or AI, do you consider them agents? Does thinking about different analytical frames change how you view their behavior? Share your thoughts!
Further Reading: