AI Ethics: Machines That Do Not Decide
A model does not “choose” — it samples from a probability distribution. When we hold it responsible as if it were a moral subject, what exactly are we doing?
We often hear “the AI decided.” A loan was denied, a face was not recognised, an application was filtered. But a model does not decide. It samples from a probability distribution. This distinction is where the entire ethics discussion lives.
What counts as a decision
A decision is a conscious choice among alternatives. Three conditions are required:
- Awareness of alternatives — distinguishing at least two options
- Evaluation — comparing them against a criterion
- Responsibility — recognising the agent behind the outcome
A transformer does none of these. Weight matrices produce logits, a softmax turns them into probabilities, a token is sampled. It computes; it does not decide.
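The point can be made concrete. In this minimal sketch (hypothetical logits, pure Python), the “choice” of a token is nothing but arithmetic over scores followed by a weighted draw — none of the three conditions above appears anywhere:

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)                               # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, rng=random):
    """'Generate' a token: no deliberation, just a weighted draw."""
    probs = softmax(logits)
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical logits for four candidate tokens.
logits = [2.0, 1.0, 0.5, -1.0]
probs = softmax(logits)           # a distribution, not a preference
token = sample_token(logits)      # a sample, not a decision
```

Nothing in this code distinguishes alternatives, evaluates them against a criterion, or recognises itself as an agent; it maps numbers to numbers.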
Why, then, do we use the language of decision?
Two reasons:
First, attributing agency is a human reflex. We project intent onto anything that moves — even a leaf in the wind. Michael Tomasello traces this to our capacity for “shared intentionality.”
Second — and more dangerously — attributing agency is useful for distributing responsibility. The sentence “the algorithm decided” hides the chain “the person who trained the model, the person who chose the training data, the person who set the threshold, the person who deployed it” behind a pronoun.
The grammar of “the decision”
Notice:
“The system rejected the application.”
Subject: the system. Verb: rejected — active, intentional. But in reality:
- A company chose this system
- An engineer set the threshold at 0.72
- A product owner wrote the “auto-reject” business rule
- A training set carried certain biases
As long as the grammatical subject remains “the system,” these four agents are never held accountable. Language has erased responsibility.
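The same chain can be made visible in code. In this hypothetical loan-screening sketch (the 0.72 threshold comes from the example above; the feature names and weights are invented), the model only emits a score — every element that turns that score into a rejection is human-authored:

```python
# Hypothetical loan-screening sketch. The model produces a score;
# everything that converts the score into a rejection was written by a person.
THRESHOLD = 0.72  # set by an engineer, not by the model

def model_score(application: dict) -> float:
    # Stand-in for a trained model: a fixed linear rule over features
    # chosen and weighted by whoever collected the data and trained it.
    return 0.3 * application["income_band"] + 0.1 * application["years_employed"]

def auto_reject(application: dict) -> bool:
    # The "decision" is this comparison -- a business rule, not a choice.
    return model_score(application) < THRESHOLD

applicant = {"income_band": 2, "years_employed": 1}
rejected = auto_reject(applicant)  # score 0.7 < 0.72, so the rule rejects
```

Grammatically, “the system rejected the application”; in the code, the subject of every consequential line is a person who chose a weight, a feature, or a threshold.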
Reinstating responsibility
An ethical AI conversation must begin with the question: what is the chain of humans behind this decision?
Five links:
- Data collector — which data do I include, which do I exclude?
- Labeller — which pattern do I accept as “correct”?
- Trainer — which loss function do I optimise?
- Deployer — in which context do I use this model?
- Auditor — how do I measure outputs, how do I correct them?
Responsibility for a rejection cannot be pinned on any one link alone — but no link is absolved. Ethics is the management of distributed responsibility.
The “moral machine” fallacy
Some researchers propose making the machine a moral subject — encoding values, computing preferences. MIT’s “Moral Machine” experiment is a popular example.
This is a fallacy, because:
- Morality is a property of an embodied existence (pain, ageing, death)
- Morality is relational — recognition and the moment it fails
- Morality is not computable — its criteria shift, depend on context, and contradict one another
Loading “values” into a model replaces value with a value-like constraint layer. Looks similar; is not the same.
Conclusion
AI ethics is not the question “how do we make the machine good?” That question is wrong. The right question is: “Who is responsible for this system’s outcome, how do we make that visible, how do we account for it?”
Debating the morality of machines is a way of avoiding a debate about the morality of people. And that evasion is currently doing a great deal of work.
“What cannot name the agent cannot govern the act.”
AI ethics is also grammar critique. Who is the subject of the sentence? Start there.
Related reading: Mittelstadt, B. et al. (2016). The Ethics of Algorithms: Mapping the Debate. · Birhane, A. (2021). The Impossibility of Automating Ambiguity. · Butler, J. (2005). Giving an Account of Oneself.