Friday 30 January 2015

Philosophy of Mind and Psychology Reading Group -- The Predictive Mind chapter 12

Sam Wilkinson
Welcome to the Philosophy of Mind and Psychology Reading Group hosted by the Philosophy@Birmingham blog.

This month, Sam Wilkinson, Research Fellow in the Department of Philosophy at the University of Durham, introduces chapter 12 of Jakob Hohwy’s The Predictive Mind (OUP, 2013). This is the last in a series of posts on the book.

Many thanks to all of you who have contributed with posts and comments, and especially to Jakob Hohwy whose participation has made the reading group so interesting.


Chapter 12 - Into the Predictive Mind
Presented by Sam Wilkinson

In the twelfth and final chapter, “the prediction error mechanism is extended deep into matters of the mind” (p.242). In particular, it is applied to: emotions, introspection, privacy of mind, and the self, with each application having a section devoted to it. Hohwy readily acknowledges that these “are certainly aspects where the application of prediction error minimization [PEM] becomes more tenuous”. However, the applications are worth attempting, given the “immense explanatory scope” of the framework.

I take each section (and the application to which it is devoted) in turn.

In “Emotions and Bodily Sensations”, the key idea is basically “interoceptive predictive processing”, namely, that emotion arises as a kind of perceptual inference on our internal states. This is neatly tied in with the James-Lange theory of emotion (recently made popular by Prinz 2004), according to which, e.g. we feel afraid because we tremble (rather than, as pre-theoretical intuition might have it, the other way around).

“Emotions arise as interoceptive prediction error is explained away” (p.243). The emotion is not, as a classical bottom-up account (according to which inputs come in, get processed, and are passed up until you get a conscious percept) would have it, the interoceptive state itself: it is the hypothesis that explains that state away.
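To make the inferential picture vivid, here is a toy sketch of my own (not Hohwy’s, and with made-up numbers): two candidate emotional hypotheses compete to explain away the same interoceptive signal, and a contextual prior decides which hypothesis, and so which emotion, wins.

import math

# Toy illustration (mine, not Hohwy's): the 'emotion' is the hypothesis that
# best explains away the interoceptive input, not the input itself.

def likelihood(observed, predicted, sd=5.0):
    # How well a hypothesis' predicted bodily state explains the observed signal.
    return math.exp(-((observed - predicted) ** 2) / (2 * sd ** 2))

heart_rate = 120  # observed interoceptive signal (beats per minute)

# Each hypothesis predicts a bodily state; context (a dark alley, say) sets the prior.
hypotheses = {
    "fear":       {"predicted": 125, "prior": 0.7},
    "excitement": {"predicted": 115, "prior": 0.3},
}

# Posterior is proportional to likelihood x prior; the winner is the emotion experienced.
unnormalised = {name: likelihood(heart_rate, h["predicted"]) * h["prior"]
                for name, h in hypotheses.items()}
total = sum(unnormalised.values())
for name, score in unnormalised.items():
    print(name, round(score / total, 3))

With these numbers both hypotheses fit the racing heart equally well, so the contextual prior decides and “fear” wins; in a context that favoured the other prior (a surprise party, say), the very same signal would be explained away as excitement.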

One interesting upshot of this is that it allows that inference to go wrong, thereby giving rise to “emotional illusions”.

A further, attractive, consequence of the hierarchical approach is that it settles the dispute between conceptualist and non-conceptualist views of emotion. On the one hand, human emotions seem rather sophisticated, and yet, on the other, it seems that far simpler animals are capable of feeling emotions (at least in some sense). As discussed in Chapter 3, and put to work in Chapter 6 with a similarly ecumenical resolution of the cognitive penetrability debate, “the sharp distinction between percepts and concepts begins to wash out in the perceptual hierarchy” (p.243). Thus, relatively basic animals can be said to have emotions (viz. these Bayes-optimal hypotheses that explain away interoceptive prediction error), but the character of their emotional experience will not be shaped by the same top-down predictions as the emotional experience of more sophisticated animals like ourselves.


There is then an interesting discussion of “sense of presence” in relation to depersonalization disorder, as well as a very plausible flagging of emotional attention (although I was surprised that this didn’t play a larger role in the account of introspection in the next section).

The next section, “Introspection is inference on mental causes”, is the only one of the four sections that I found myself slightly disagreeing with (or perhaps more accurately: slightly perplexed about). In it, Hohwy gives an account of introspection from within the PEM framework.

He rightly states that “If introspection is unconscious probabilistic inference, then introspection must be construed such that there are some hidden causes, some hidden sensory effects, and a generative model under which prediction error can occur” (p.245). He then rightly concedes that this seems like a “non-starter”, not least because there is no introspective organ, and whatever conscious experiences are, they don’t seem to be hidden causes. But then he tries to show that it isn’t quite the non-starter that it seems to be.

He starts by asking whether our experiences defy our expectations of them, and concludes that, yes, they do. (For example, we can be surprised by the sharpness of a pain.)

(At this point I’d like to register a passing query: doesn’t this risk conflating my expectations and those of my nervous system (surprise vs. surprisal)? Something can be very surprising to me, but not to my nervous system. When I very clearly see an elephant on the lawn outside my office, the correct hypothesis has been relatively easily selected by my nervous system, and yet I am pretty surprised; conversely, I am familiar with the binocular rivalry task, and the switch no longer surprises me, even though my nervous system is constantly battling to keep surprisal down.)
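(A quick gloss, since I’m leaning on the jargon: as I understand it, the surprisal of a sensory state x under the nervous system’s model m is the information-theoretic quantity

surprisal(x) = -log P(x | m)

i.e. how improbable the input is given the currently entertained hypotheses. It is a sub-personal quantity, so it can be low once the “elephant” hypothesis is doing its explanatory work, even while I, at the personal level, remain astonished; and it can stay stubbornly high during binocular rivalry even when I am not surprised at all.)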

Then an objection is rehearsed, and there are three responses to it.

Objection: “It seems a waste of energy to experience things twice over” (p.246).

Response 1: If there are violations of experiential expectations (which there are), then there must be internal representations of experience.

Response 2: We might only notice the model of our experience when an expectation is violated.

Response 3: Introspective awareness is crucial for how minds interact.

I’m very happy with response 3 (and it is related to the next section, which I very much liked), but have some (pretty mild and half-baked) worries about 1 and 2. Basically, I struggle with the notion of an internal representation of an experience. Within the PEM framework, I thought that experience was a function of the hypothesis selected that does the best job of minimizing prediction error across the hierarchy. I get how you need second-order predictions (precision, viz. “attention”), but I don’t understand how you can have hypotheses about your hypotheses. And I guess (although I haven’t thought about it enough) that this relates to my worry with 2: I can pinch myself, and be totally accurate in my prediction about how that will feel. I can still be in a profoundly sensitive state of “introspection”, and hence (if 1 is right) be representing my experience.

That I put “introspection” in scare quotes suggests that I’m not comfortable with it as a notion, and I suppose I’m not.
Firstly, it suffers from a product/process ambiguity. It can either mean the coming to know certain psychological facts about ourselves, however this is achieved (the product). Or it can point to a particular process, an “inner sense”, thanks to which we achieve that. If we call the former “introspection-1” and the latter “introspection-2”, we could coherently ask whether (or under what circumstances) introspection-1 (the product) is achieved by introspection-2 (the process). Hohwy goes on to say: “Any agent who represents its own actions must in some sense introspect” (p.247, my emphasis). Indeed, but in what sense? I don’t see why Hohwy doesn’t keep the product, but scrap the process.

What would this look like? I’ll probably do a really bad job of this, but here goes. It is customary in the introspection/self-knowledge debate to distinguish self-knowledge for propositional attitudes from self-knowledge for sensations. Very few people deny that the latter is achieved by introspection (although the categorisation of sensations is a different matter), whereas it is becoming popular to be a “Neo-Rylean” about the former. Namely, we come to know what we desire, believe, and intend by interpreting ourselves: by interpreting our actions, percepts, emotions, our inner speech and imaginings, etc. Wouldn’t it be more fruitful to think of introspection in this sense as involving self-interpretation? Especially given that work is being done within the PEM framework on mindreading (Koster-Hale and Saxe 2013), wouldn’t introspection as mindreading turned on oneself give a nice unified account, one that simply makes use of the apparatus already present in the framework (e.g. the account of attention within PEM, and mindreading viewed as predictive hypotheses about hidden causes, i.e. “mental states”)? On this view, I know that I’m in love with S (or, indeed, believe that p) because I attend to my feelings and interpret them, not because I go through a dedicated process of introspection.

There is then an interesting explanation of why there is “introspective dissonance”: why is there such firm and immovable disagreement between the “voice of certainty” and the “voice of uncertainty”? The former arises because it draws on a higher, more abstract, less noisy part of the hierarchy; the latter because it involves probing deeper into lower, noisier levels of the hierarchy. This seems to nicely account for, and explain away, the conflict (the hierarchy to the rescue yet again!).

“The Private Mind in Interaction” was my favourite of the four sections. The privacy of the mind poses a puzzle: if conscious experience is private, what would it be for? Hohwy’s answer, which I found both ingenious and convincing, is that “consciousness is private so that it can be social” (p.249). This is then fleshed out in terms of the Bayesian courtroom: you only get the benefit of sharing information if the sources are “conditionally independent” of one another. Roughly, A’s testimony adds nothing to B’s if it is simply based on B’s. Within the brain (intra-personally), what secures this conditional independence is modularity; outside of the brain (inter-personally), it is privacy.
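To see why conditional independence is doing the work here, a toy Bayesian calculation of my own (not from the book; the numbers are invented) may help: a second witness who merely parrots the first adds nothing, whereas a second conditionally independent witness compounds the evidence.

# Toy sketch (mine, not Hohwy's) of the Bayesian courtroom point.
# A hypothesis H starts at P(H) = 0.5. Each witness reports 'guilty' with
# probability 0.8 if H is true and 0.3 if H is false.

prior = 0.5
p_report_if_true = 0.8
p_report_if_false = 0.3

def posterior(n_independent_reports):
    # Bayes' rule with n conditionally independent 'guilty' reports.
    like_true = p_report_if_true ** n_independent_reports
    like_false = p_report_if_false ** n_independent_reports
    return like_true * prior / (like_true * prior + like_false * (1 - prior))

# One witness, or a second who simply repeats the first (and so is
# conditionally dependent on it): one genuine piece of evidence, ~0.73.
print(posterior(1))

# Two conditionally independent witnesses: the evidence compounds, ~0.88.
print(posterior(2))

Modularity inside the brain, and privacy between brains, are then two ways of keeping the “witnesses” from simply copying one another.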

As I said, I found this utterly convincing. It made me think of something else, too, and I’d like to see if it fits with this in some way, or if I’m barking up the wrong tree. It makes me think of Vygotsky’s theory of language and the development of thought (viz. in human development), namely, the way that linguistically structured thinking starts as overt, inter-personal communication between child and care-giver, but over time becomes, first, private speech, and then, finally, inner speech. I suppose this “internalization” (or, better, inhibition of the overt vocalization) has more to do with social norms (embarrassment) and the obvious benefits of secrecy than with the Bayesian courtroom.

Finally, in “The Self as Sensory Trajectory”, the self is construed much in line with the intuitions of Hume (and the more overt, and recent, theorising of Metzinger). “It is a fairly deflationary idea because it reduces the self to a hierarchically described hidden cause of one’s sensory input” (p.255).
