Agency and Autonomy in Distributed Minds

When you decide to raise your hand, who decides? The question seems absurd until you examine it closely. Benjamin Libet’s famous experiments in the 1980s revealed that the brain’s “readiness potential”, the neural activity associated with initiating movement, begins roughly 550 milliseconds before the movement itself, and some 350 milliseconds before the person reports consciously deciding to act. The decision appears to be made before the decider is aware of making it.

Libet’s findings ignited a fierce debate about free will that continues to this day. But for hybrid intelligence, the problem is not whether free will exists. It is a more practical question: in a system with multiple cognitive substrates, each capable of initiating action, what constitutes a decision?

Consider a hybrid mind with both biological and artificial components. The biological component, driven by emotion and intuition, generates an impulse: help the injured creature. The artificial component, driven by data analysis, generates a contrary assessment: intervention will disrupt the ecosystem with a 73% probability of cascading negative effects. These are not two opinions. They are two cognitive processes within a single system, each with a legitimate claim to being “the mind’s” own reasoning.

Daniel Dennett’s “multiple drafts” model of consciousness offers a useful framework. Dennett argued that there is no single “Cartesian theater” where consciousness happens, no central stage where experiences are presented to an inner observer. Instead, the brain continuously generates multiple parallel narratives, and what we call “conscious experience” is whichever narrative becomes most influential at a given moment. There is no homunculus making decisions. There is a competition among processes.

In a hybrid mind, this competition becomes explicit and architecturally visible. The biological processes and artificial processes are not different regions of the same neural substrate; they are different types of substrate producing different types of cognitive output. The “decision” that emerges is not the product of a unified will but the outcome of an interaction, a negotiation, a compromise, or sometimes an override.
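To make the architectural claim concrete, here is a minimal sketch in Python of a decision that is nothing but the outcome of such a competition. Every name in it (Draft, resolve, the influence weights, the veto flag) is a hypothetical illustration, not a model taken from Dennett or from any existing hybrid system:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """One substrate's candidate contribution to a decision."""
    source: str        # which substrate produced it, e.g. "biological"
    action: str        # the action this draft proposes
    influence: float   # how strongly it currently dominates, 0.0 to 1.0
    can_veto: bool     # whether this substrate may override the others outright

def resolve(drafts: list[Draft]) -> Draft:
    """There is no central decider: a veto wins outright, otherwise the
    most influential draft does. The 'decision' is just this rule's output."""
    vetoes = [d for d in drafts if d.can_veto]
    candidates = vetoes if vetoes else drafts
    return max(candidates, key=lambda d: d.influence)

# The scenario from the text: an impulse to help versus a statistical objection.
drafts = [
    Draft("biological", "help the injured creature", 0.8, False),
    Draft("artificial", "do not intervene (73% risk of cascading effects)", 0.6, True),
]
winner = resolve(drafts)
print(f"emergent 'decision': {winner.action} (from the {winner.source} component)")
```

The final line prints an emergent “decision”, yet nowhere in the sketch is there a component whose job is deciding; there is only the resolution rule.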

This raises the question of moral responsibility. Aristotle argued that voluntary action requires two conditions: the agent must be the source of the action (origin condition), and the agent must act with knowledge (epistemic condition). In a distributed hybrid system, both conditions become problematic. Is the biological component the source when the artificial component vetoed its initial impulse? Does the system act “with knowledge” when one component possesses information the other cannot access?

Michael Bratman’s theory of shared intentionality offers another perspective. Bratman argued that joint action requires “meshing subplans”: participants must not only share a goal but also coordinate their individual contributions to achieve it. Applied to hybrid minds, this suggests that agency might be construed as a joint intention between biological and artificial components. But Bratman’s theory assumes distinct agents with their own beliefs and desires who choose to cooperate. In a hybrid system, the components are not separate agents. They are parts of a single system. Or are they?

The ambiguity is not merely theoretical. It has immediate legal consequences. When an autonomous vehicle makes a decision that harms someone, the current legal framework struggles to assign liability. Was it the manufacturer? The programmer? The AI’s training data? The human who chose to use the vehicle? For hybrid minds, the problem is compounded because the “human in the loop” is also “in the machine.” The boundary between user and tool has dissolved.

Some philosophers have proposed “graduated autonomy”: a spectrum rather than a binary. Just as we grant different levels of autonomy to children, adults, corporations, and AI systems, we might develop a framework that assigns different levels of agency to different components of a hybrid system, with corresponding levels of responsibility. The biological component might be held responsible for value-driven decisions while the artificial component bears responsibility for information-processing errors.
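What would such a framework look like if someone tried to write it down? A minimal sketch in Python, with entirely hypothetical names (Component, responsibility_share, the autonomy levels and decision domains are illustrative placeholders, not drawn from any proposed standard):

```python
from dataclasses import dataclass

@dataclass
class Component:
    """One part of a hybrid system, with an assigned level of autonomy."""
    name: str
    autonomy_level: float       # 0.0 (pure tool) to 1.0 (full agent)
    decision_domains: set[str]  # the kinds of decisions it is answerable for

def responsibility_share(components: list[Component], domain: str) -> dict[str, float]:
    """Apportion responsibility for a decision in one domain across the
    components answerable for that domain, weighted by autonomy level."""
    answerable = [c for c in components if domain in c.decision_domains]
    total = sum(c.autonomy_level for c in answerable)
    if total == 0:
        return {}  # nobody is answerable: the framework is silent here
    return {c.name: c.autonomy_level / total for c in answerable}

system = [
    Component("biological", 0.9, {"value-driven"}),
    Component("artificial", 0.6, {"information-processing"}),
]
print(responsibility_share(system, "value-driven"))            # {'biological': 1.0}
print(responsibility_share(system, "information-processing"))  # {'artificial': 1.0}
```

The numbers are arbitrary; what the sketch makes visible is that every decision has to be tagged with a single domain before any share can be computed.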

But this approach presupposes that we can cleanly separate value-driven from information-driven decisions in a hybrid system, a distinction that may not survive contact with the reality of integrated cognition. When intuition and analysis are processed in a single cognitive loop, the product is neither purely evaluative nor purely informational. It is both, inextricably.

Perhaps the most honest answer is that distributed agency requires distributed responsibility. Not divided, where each component bears partial blame, but genuinely distributed, where responsibility belongs to the system as a whole, in a way that cannot be decomposed into individual contributions. This would require a new legal and ethical category: systemic responsibility, borne by an entity that is neither a natural person nor a corporation but a hybrid cognitive system with its own emergent form of agency.

We are not ready for this. But the systems are already being built.


References

Libet, B. (1985). “Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action.” Behavioral and Brain Sciences, 8(4), 529–566.

Dennett, D. (1991). Consciousness Explained. Little, Brown and Company.

Bratman, M. (1999). Faces of Intention: Selected Essays on Intention and Agency. Cambridge University Press.

Aristotle. Nicomachean Ethics, Book III.

Frankfurt, H. (1971). “Freedom of the Will and the Concept of a Person.” Journal of Philosophy, 68(1), 5–20.

Floridi, L. & Sanders, J. W. (2004). “On the Morality of Artificial Agents.” Minds and Machines, 14(3), 349–379.

Emotion in Artificial Substrates

Affect Theory: Can artificial systems genuinely feel?