Neuroscience, Moral Psychology, and the Homunculus Fallacy by Don Collins Reed

I was taught as a boy that vision involves an image entering my head through my eyes. As the image is conducted through the lenses, it is flipped upside down and projected stereoscopically onto the back of my head. My brain then has to re-flip the image and interpret it. Putting the matter this way, however, assumes the presence of a little man inside my head who sees the image projected as if onto a screen – who would in turn have to have a little man inside his head, and so on ad infinitum.

This is called the “homunculus [or ‘little man’] fallacy.” Neuroscientists scoff at such a lack of sophistication.

On the other hand, we sometimes hear neuroscientists say things like the following: “Just as the CEO of a corporation delegates different tasks to different people occupying different offices, your brain parcels out different jobs to different regions” (V.S. Ramachandran, 2011, The Tell-Tale Brain, p. 95). This brain-as-bureaucracy metaphor is not far from a little person watching the screen at the back of your head, or an entire bureau of such little people, with a master homunculus as CEO.

With the homunculus fallacy in mind, please answer Question #1:

Which is more correct?

  1. Your eyes are reading this sentence. (Also, your anterior insula and/or limbic system may be feeling wary of a trick at this point in the blog post.)
  2. Your brain is reading this sentence, using input from your eyes. (Perhaps your brain is sounding an alarm: “Danger, Will Robinson!”)
  3. You are reading this sentence, through activity of your eyes and several visual, motor, and language processing pathways in your brain. (You were right to be suspicious, through processing in your brain’s anterior insula and/or limbic system. The question is loaded.)

Worried that we might look like pre-scientific animists if we explicitly attribute “will” to persons, we over-compensate and shift the real action down a level or two. To avoid implying there is a god in the mechanism, we assign organism functions to a physical organ or organ sub-system (eyes & visual cortex, anterior insula & limbic system, etc.). But this way of talking implies there is a little organism in the organ.

We do this in moral psychology when, for instance, we attribute judgments or reasoning to localized brain functions. The dorsolateral prefrontal cortex makes moral judgments, or the anterior insula detects norm violations. But brains and their functional units do not judge, reason, feel, or act. The organisms whose brains they are do.

Still, moral psychologists and philosophers don’t need to be wary of all neuroscientific inquiries into moral and ethical functioning. Three disciplines contribute to our knowledge of the way underlying processes in the brain mediate overall person functioning, for instance, in visual processing (which we understand pretty well) or in moral functioning (which we’re only beginning to work out). The three disciplines are neurology (correlating brain lesions with loss of mental function), neurophysiology (monitoring the activity of specific neurons or neuron clusters under certain mental tasks using electrodes), and brain imaging (using techniques such as EEG and fMRI to monitor which parts of the brain are active during certain mental tasks).

There’s no fallacy in recognizing that an organism can’t perform normal tasks if its organs and organ systems aren’t functioning normally – and then trying to work out what normal functioning involves.

For instance, we typically cooperate only when and to the extent that we trust our fellow cooperators and believe the proceedings will be fair. What are the micro-level processes underlying and mediating macro-level trust and perception of fairness? In her recent book, titled Braintrust: What neuroscience tells us about morality, Pat Churchland (2011) summarizes research linking the oxytocin-vasopressin network with willingness to trust and activation in the anterior insula with the perception of uncooperativeness or unfairness (see pp. 71-81).

In the Trust game, a trustor or investor is given $12 at the beginning of the game (or ¥12 or €12, etc.). She then has an opportunity to donate $0, $4, $8, or $12 to an anonymous trustee, and the donation is tripled before the trustee receives it (e.g., an $8 donation becomes $24). Then the trustee has an opportunity to send money back to the investor to begin another round (e.g., $12 of the $24 received). Each can walk away with their current holdings at any point, but the most lucrative strategy is for both parties to keep signaling that they will continue, so that total assets grow through succeeding rounds.
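The mechanics of a single round can be sketched in a few lines of code. The 3x multiplier and the $0/$4/$8/$12 donation options come from the description above; the trustee's decision to return half of what was received is a hypothetical strategy, used only to reproduce the worked example.

```python
# A minimal sketch of one round of the Trust game described above.
# The tripling multiplier and donation options follow the text; the
# trustee's back-donation rule (a fixed fraction) is an illustrative
# assumption, not part of the game's definition.

MULTIPLIER = 3
DONATION_OPTIONS = (0, 4, 8, 12)

def trust_round(endowment, donation, back_fraction):
    """Play one round: trustor donates, amount is tripled, trustee returns a share."""
    assert donation in DONATION_OPTIONS and donation <= endowment
    received = donation * MULTIPLIER           # trustee receives the tripled donation
    back = round(received * back_fraction)     # trustee's back-donation to the trustor
    trustor_holdings = endowment - donation + back
    trustee_holdings = received - back
    return trustor_holdings, trustee_holdings

# The example from the text: an $8 donation becomes $24;
# the trustee sends half ($12) back.
trustor, trustee = trust_round(12, 8, 0.5)
print(trustor, trustee)  # 16 12
```

Running the round makes the incentive structure visible: if the trustee reciprocates, both parties end up ahead of where a $0 donation would have left them, which is why repeated rounds reward mutual trust.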

Artificial elevation of oxytocin levels through nasal spray (intranasal oxytocin) has been found to increase both the frequency with which trustors donate more than $0 and the average amount of money donated, relative to control groups. However, intranasal oxytocin has no significant effect on trustee back-donations (presumably because the trustee is responding to the trustor’s indication of trust rather than deciding whether to trust). Also, the perceived unfairness of very small offers (in this and the Ultimatum game) is correlated with elevated activity in the anterior insula, both in trustees when trustor donations are low and in trustors when trustee back-donations are low.

Churchland’s main thesis in Braintrust is that “morality originates in the neurobiology of attachment and bonding, [which] depends on the idea that the oxytocin-vasopressin network in mammals can be modified to allow care to be extended to others beyond one’s litter of juveniles….” (p. 71). But whatever you think about that contention, you need not reject the more basic assumption: macro-level person functioning (judging, reasoning, feeling, acting, etc.) is mediated by specific micro-level neural processes (among others), and normal person functioning requires normal neural processes.

With all this in mind, please answer Question #2:

Which is preferable?

  1. Loose talk that suggests that the agency exhibited by complex organisms is actually exhibited by their organs,
  2. Rejection of neuroscience in moral psychology for fear of losing moral agency altogether, or
  3. Acceptance that moral psychology requires broad-based interdisciplinary inquiry.

We should forgo both (1) and (2) and admit that moral and ethical functioning are more complex and multi-layered than hitherto may have been dreamt of in our philosophy.


Don Collins Reed is Professor of Philosophy at Wittenberg University, Springfield, OH.

Opinions expressed in these Op Ed pieces are solely those of the author and not intended to represent AME. AME chooses to publish pieces that will foster discussion on issues related to moral psychology, philosophy, development, and education.