
RESEARCH PRODUCT

Inferences are just folk psychology

Thomas Metzinger

subject

Fallacy, Physiology, State of affairs, Proposition, Behavioral Neuroscience, Neuropsychology and Physiological Psychology, Folk psychology, Mental representation, Introspection, Non sequitur, Causation, Psychology, Cognitive psychology

description

To speak of “inferences,” “interpretations,” and so forth is just folk psychology. It creates new homunculi, and it is also implausible from a purely phenomenological perspective. Phenomenal volition must be described in the conceptual framework of an empirically plausible theory of mental representation. It is a non sequitur to conclude from dissociability that the functional properties determining phenomenal volition never make a causal contribution.

I have offered an alternative interpretation of some of Dan Wegner’s most relevant data elsewhere (Metzinger 2003, p. 506ff) and will confine myself to three conceptual points here. Wegner’s project could be further strengthened by eliminating an omnipresent version of the mereological fallacy, by adopting an empirically plausible theory of mental representation, and by avoiding certain kinds of non sequiturs.

In his laudable attempt to describe and more carefully analyze the functional architecture of phenomenal volition, Wegner frequently employs personal-level concepts and predicates like “interpreting” (e.g., thoughts as causes; Wegner 2002, p. 64ff), “inference” (e.g., of an apparent causal path, p. 68ff), or “control” (e.g., mental control, p. 310ff). The author uses such predicates and concepts simultaneously on personal and subpersonal levels of description. At one time he speaks of the whole person as interpreting, for instance, her own thoughts as causes, and at another time of an “interpretive system” on the subpersonal level (e.g., as a course-sensing mechanism, p. 317); at one time he analyzes the person as a whole as exerting mental control, at another he talks about a “controlling apparatus” (e.g., p. 312); and so forth.

The subpersonal readings are all fallacious: Brains – or functional subsystems of brains – don’t interpret anything, they don’t make any inferences, and they don’t exert control. Only whole persons can be directed at the meaning of certain sentences (or of sentences describing chains of internal events), thereby attempting to interpret them. Only whole persons can establish inferences between mentally represented propositions. And only whole persons can be directed at the fulfillment conditions defining certain goal-states; that is, only whole persons can truly make an attempt at controlling a certain state of affairs.

The deeper problem in the background is that one needs an empirically plausible and conceptually coherent theory of mental representation to successfully describe the architecture of phenomenal volition on a subpersonal level, that is, without committing the homunculus fallacy. Daniel Wegner does not develop such a theory, but assumes that apparent mental causation results from “interpretations” and “inferences.” Brains, however, are not inference machines but associative engines (see, e.g., Clark 1989; 1993). Probably brains are even more than that, namely, complex dynamical systems exhibiting something like a “liquid” architecture. It has now become overwhelmingly plausible that such systems do not exhibit a critical property which Ramsey et al. (1991) have called “propositional modularity”: the property of representing propositional content in a way that makes individual units functionally discrete, semantically interpretable, and endowed with a distinct causal role. In this light, the “inferences” underlying apparent mental causation are a leftover piece of folk psychology that has to be replaced by a suitable subsymbolic/dynamical story.
Second, “inferences” and “interpretations” are also phenomenologically implausible, because we do not actually subjectively experience ourselves as drawing inferences and interpreting syntactic structures before having the conscious experience of will. They are leftover pieces of folk phenomenology. As a matter of fact, these two points can now be seen as a new constraint for all candidate theories of mental representation: Are they able to accommodate a fine-grained and subsymbolic analysis of the architecture of conscious volition, functionally as well as phenomenologically?

Every form of phenomenal content has at least one minimally sufficient neural correlate (Chalmers 2000). This is true of every instance of consciously experienced volition too: For every such experience there will be a minimal set of neurofunctional properties that reliably activates it and which has no proper subset that would have the same effect. Many philosophers would even argue that every single instance of phenomenal volition is token-identical to this very correlate. Interestingly, in a given system, every single overt action also has at least one such minimally sufficient neural correlate: For every such action there will be a minimal set of neurofunctional properties that reliably brings it about and which has no proper subset that would have the same effect. Dan Wegner has made a major contribution in showing how many situations there are in which behavior and phenomenal will can be dissociated in various ways, and what the parameters guiding such dissociations are. Given his data, it is a rational and plausible conclusion that the two sets of neurofunctional properties overlap only loosely. At times they can be instantiated in isolation. What does not follow is the proposition that – especially in nonpathological standard configurations – the functional properties determining phenomenal volition never make a considerable contribution to action control. This is a non sequitur.

What we have to distinguish are cases where apparent mental causation is mere appearance and cases where appearance and mentally represented knowledge may coexist. In philosophers’ jargon, we need a criterion that allows us to distinguish between cases where conscious will is only phenomenal content and cases where epistemic, intentional content is co-instantiated in the very same event. Let me give an example: In standard configurations, the functional properties determining that the experience of conscious will occurs could at the same time be a subset of exactly those functional properties that make the self-organizing dynamics of a certain, ongoing motor selection process globally available, thereby adding flexibility, context-sensitivity, integration with working and autobiographical memory, availability for attentional processing, and so forth. The “feeling” of will could then be not an illusion but, rather, a nonconceptual form of self-knowledge – that is, the introspective knowledge that one is, right now, a system undergoing the internal transformation just described.

https://doi.org/10.1017/s0140525x0434015x