Indifference, Posthuman Agency, and Critical AI Ethics
The intersection of indifference, posthuman agency, and critical AI ethics raises fundamental questions about what it means to be responsible, to act, and to care in a world increasingly mediated by AI and complex hybrid systems. Here, I will explore how indifference manifests in posthuman agency and the ethical frameworks necessary to counteract it.
1. Indifference as a Byproduct of Posthuman Agency
Posthuman agency, as theorized by figures like Rosi Braidotti, Karen Barad, and Bruno Latour, challenges traditional human-centered notions of responsibility. Instead of seeing agency as something possessed by an individual subject, posthuman theories argue that agency is distributed across human and non-human assemblages.
This redistribution of agency creates new forms of indifference:
Structural Indifference: When agency is shared between AI, algorithms, and human actors, no single entity can be held responsible. AI in warfare, for example, makes it difficult to determine who is accountable when autonomous weapons systems misfire.
Automation of Ethical Judgment: As AI takes over decision-making, human actors may defer moral reasoning to machines, assuming that data-driven conclusions are neutral or infallible.
Decentralized Control, Decentralized Accountability: In actor-network theory (Latour), agency is relational. However, in complex hybrid systems, relationships are too vast and diffuse for meaningful accountability. This leads to a situation where everyone is partially responsible, yet no one is fully responsible - a perfect condition for ethical indifference.
Posthumanism, in dissolving the traditional human subject, also dissolves traditional ethical responsibility. This is the double-edged sword of posthuman agency.
2. Indifference and the Critique of Algorithmic Ethics
Critical AI ethics (as opposed to mainstream AI ethics) challenges the assumption that technical fixes - such as fairness algorithms or ethical guidelines - can resolve the problem of indifference in hybrid systems. Scholars like Ruha Benjamin, Kate Crawford, and Antoinette Rouvroy argue that AI is embedded in historical, economic, and political structures that shape how indifference operates.
Key critiques from critical AI ethics regarding indifference:
Ethics as a Smokescreen: Many AI systems claim to be “ethical” because they include fairness checks, but these often obscure deeper systemic inequalities (Benjamin, Race After Technology).
Epistemic Indifference: AI does not “understand” suffering - it only recognizes statistical patterns. This means that even the most advanced machine-learning models lack an ethical sensibility, operating with epistemic indifference (a minimal sketch at the end of this section makes the point concrete).
Predictive Control as a Mechanism of Indifference: Rouvroy’s work on algorithmic governance shows that AI nudges behavior rather than coercing it directly. This creates a new form of soft control, where people are subtly guided toward behaviors without realizing they are being manipulated. This form of governance encourages indifference because individuals feel as if no one is making decisions - things just happen through the system.
If classical ethics assumes a responsible subject, algorithmic governance assumes an indifferent, passive subject - a user who adapts to what algorithms dictate without question.
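To make the smokescreen and epistemic-indifference critiques concrete, here is a minimal, purely illustrative Python sketch. The loan-approval framing, the toy data, and the choice of a demographic-parity metric are all assumptions for illustration, not a description of any real system. The point is that a fairness check is arithmetic over predictions: a system can pass it while registering nothing about what those predictions mean for the people affected.

```python
# Illustrative sketch only: a "fairness check" reduced to its arithmetic.
# The data and the loan-approval framing are hypothetical.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    a, b = rates.values()
    return abs(a - b)

# Toy loan decisions (1 = approve, 0 = deny) for two groups, A and B.
predictions = [1, 0, 1, 0, 1, 0, 1, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(predictions, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.00 -> the check "passes"

# The check passes, yet nothing here registers who bears the cost of a
# denial, how the groups came to differ historically, or what approval
# means for a particular life: the metric sees patterns, not suffering.
```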
3. Can Posthumanism Resist Indifference?
If posthumanism de-centers the human, does that mean it necessarily leads to indifference? Not necessarily. There are ways in which posthuman philosophy can counteract indifference:
A. Agential Realism (Barad) and Ethical Entanglement
Karen Barad’s agential realism suggests that entities do not pre-exist their relationships - they emerge through intra-action. This challenges indifference by arguing that:
We are always entangled with the technologies and systems we create.
Ethical responsibility does not disappear but must be reconceptualized as relational rather than individualistic.
This framework suggests that posthuman agency should not mean indifference but rather a heightened awareness of how we participate in hybrid systems.
B. Non-Anthropocentric Ethics: Levinas vs. Deleuze
Levinasian Ethics of the Other: Emmanuel Levinas argues that ethics begins with the face-to-face encounter with the Other, which demands responsibility. The problem is that AI and hybrid systems often remove the Other from view, making suffering invisible. Could an AI system be designed to recognize vulnerability in a Levinasian sense? Or does AI, by nature, erase ethical confrontation?
Deleuzian Ethics of Becoming: Gilles Deleuze, by contrast, sees ethics not as responsibility toward a fixed Other but as an ongoing process of becoming. This would suggest that indifference can be disrupted by continually reconfiguring systems in ways that foster new ethical possibilities. Rather than trying to impose human-centered morality on AI, Deleuze might advocate for lines of flight that escape rigid structures of control.
Posthuman ethics might not mean “bringing back” human responsibility but inventing new, non-anthropocentric forms of care and attention.
4. Toward an Ethics of Attunement, Not Indifference
If indifference is the default mode of hybrid systems, what is the alternative? I suggest an ethics of attunement:
Resonance over Optimization: Instead of optimizing for efficiency, AI systems could be designed to resonate with the needs and vulnerabilities of their environments (Heideggerian care rather than instrumental rationality).
Opacity as Resistance: Contrary to the push for total transparency, critical opacity (Édouard Glissant, Wendy Chun) suggests that individuals should have the right to be unreadable to AI, resisting predictive control.
AI as Ethical Prosthesis: Could AI function not as an indifferent judge but as a prosthetic extension of human ethical deliberation? This would mean designing systems that assist ethical reflection rather than replacing it.
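One way to read the prosthesis idea in design terms is sketched below; the class, field names, and case data are assumptions invented for illustration, not an existing API. The design choice is that the system's output type is a brief for deliberation rather than a verdict, so judgment cannot be silently delegated to the machine.

```python
# Hypothetical sketch of an "ethical prosthesis" pattern: the system returns
# material for human deliberation, never a verdict. All names are invented.

from dataclasses import dataclass
from typing import List

@dataclass
class DeliberationBrief:
    """What the system hands back instead of a decision."""
    candidate_action: str           # what the model would have done
    affected_parties: List[str]     # who bears the consequences
    uncertainties: List[str]        # what the model cannot know
    dissenting_signals: List[str]   # evidence against the candidate action
    decided_by_human: bool = False  # must be set by an explicit human act

def prepare_brief(case: dict) -> DeliberationBrief:
    """Assemble prompts for reflection rather than an answer."""
    return DeliberationBrief(
        candidate_action=f"deny claim {case['id']}",
        affected_parties=case.get("parties", []),
        uncertainties=["income data is six months old",
                       "no context on recent hardship"],
        dissenting_signals=["similar past cases were approved on appeal"],
    )

def commit(brief: DeliberationBrief, human_decision: str) -> str:
    # The only path to action runs through an explicit human judgment.
    brief.decided_by_human = True
    return human_decision

brief = prepare_brief({"id": "C-1042", "parties": ["applicant", "dependents"]})
decision = commit(brief, "refer to a caseworker for an interview")
print(decision, brief.decided_by_human)
```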
Conclusion: Indifference Is Not an Accident, but a Design Choice
Posthuman agency does not have to lead to indifference, but under current AI governance structures, it often does. The key question is whether hybrid systems can be reoriented toward ethical engagement rather than passive optimization.
Key Takeaways:
1. Indifference in hybrid systems emerges from distributed agency, automation bias, and predictive governance.
2. Critical AI ethics challenges the idea that fairness or technical fixes can resolve this problem.
3. Posthumanism offers both risks (dispersed responsibility) and possibilities (relational ethics).
4. Alternatives to indifference require designing AI for attunement, not optimization.
5. The ultimate ethical challenge is whether we can make hybrid systems not just intelligent but capable of care.
If Levinas asks, “Am I my brother’s keeper?”, the question for AI is: Can a machine ever be a keeper at all? Or will it always be indifferent?