Monday, 28 July 2014

Philosophy of Mind and Psychology Reading Group -- The Predictive Mind chapter 6

Rachel Gunn
Welcome to the sixth post of the online reading group in the Philosophy of Mind and Psychology hosted by the Philosophy@Birmingham blog. This month, Rachel Gunn, PhD student at the University of Birmingham, presents chapter 6 of The Predictive Mind by Jakob Hohwy (OUP 2013).

Chapter 6 - Is predicting seeing?
Presented by Rachel Gunn

In Chapter 6 Hohwy asks the reader “does what we believe to some degree determine what we perceive?” (p.118). My initial reaction to this is – yes, of course it does. I believe that many perceptual experiences are cognitively penetrable. It seems straightforward that different people often see different things depending on their prior beliefs. A person who believes in ghosts will see their dead mother in the reflection on a darkened window, whereas a person who does not believe in ghosts will see only a strange reflection. This seems similar to Hohwy’s observation that people who believe in extrasensory perception see more meaningful patterns than others in ‘noisy’ images (p.134).

The other day, looking at an object in the half-light, I saw a small bottle or jar that looked like a tiny paint pot or perhaps a pill bottle – the experience lasted two or three seconds. At a self-conscious level I couldn’t understand what it was or why it was there. Once the self-conscious (person-level) knowledge registered – “there is no paint pot/pill bottle like that in this house…” – it suddenly looked like what it actually is: a connector for a garden hose. My perception that it was a pill bottle/paint pot was altered when I applied the new knowledge – it no longer looked like a pill bottle. I wonder whether this would have worked if the knowledge had been supplied by another person. If I had seen the object at someone else’s house I would not have had the knowledge “there is no paint pot/pill bottle like that in this house…” If someone had said to me “…it’s definitely not a pill bottle”, would that have been the right kind of additional person-level information to alter the perception? I have a strong intuition that ‘subjective’ information often carries a higher probability than information supplied by a third party (and this might be important in cases of delusion). I am not sure how this bears on PEM or the notion of the Bayesian brain, except perhaps to highlight that subjective probabilities are complex and perhaps impossible to grasp except in terms of, in this case, perceptual outcomes.

I’m not clear whether this example is really a change in perception or simply a change in judgement, but it did seem as if the object was one thing and then looked like a different thing. I am aware that I was confused by the object I was looking at and may have changed how I was looking at it. However, it does seem that for practical purposes I saw something different when I gained the ‘new’ knowledge. Reading on, this might fit with Hohwy’s proposal that cognitive penetrability occurs when uncertainty is high (p.137) – although I am still not sure that my example is the kind of thing he’s getting at. Hohwy also cites the duck/rabbit, old/young woman and kangaroo/whale images as possible cases of cognitive penetrability (p.130).

Müller-Lyer illusion
If there are perceptual experiences that are untouched by additional knowledge or beliefs about them, these are said to be cognitively impenetrable. Hohwy uses the Müller-Lyer illusion as one such example. Even though I know the vertical lines are the same length (because I have measured them), it still looks as if one is longer than the other. The perceptual illusion is unaltered by the additional information.

As we have no way of knowing how information is weighted* at a neuronal level, this neither supports nor undermines the PEM model. If the neuronal mechanisms (or systems) that give us this kind of outcome cannot be altered by person-level information, all this tells us, if PEM is true, is that the perceptual inference that the line on the left is longer than the line on the right has a probability of 1 (in Bayesian terms), so to all practical intents and purposes there is no room for, or possibility of, revision. The popular explanation cited by Hohwy (p.125), which appeals to an accumulation of priors that gives us this perceptual experience, could be taken as evidence for PEM. As I understand it, Hohwy proposes that in the Müller-Lyer case prediction error is suppressed: the wrong kind of input (lacking the correct fineness of spatiotemporal grain) would have no impact on the Bayesian framework. I am not sure we need any explanation, or even whether an explanation is possible (for the reason cited above*).
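The “probability of 1” point can be made concrete with a toy Bayesian update – a sketch only, with made-up hypothesis and likelihood numbers that stand in for whatever the relevant neuronal quantities might be:

```python
# Toy Bayes' rule for a binary hypothesis H (e.g. "the left line is longer").
# All numbers are illustrative stand-ins, not anything from the book.

def posterior(prior, likelihood_h, likelihood_not_h):
    """P(H | E) via Bayes' rule for a two-hypothesis case."""
    evidence = prior * likelihood_h + (1 - prior) * likelihood_not_h
    return (prior * likelihood_h) / evidence

# An ordinary prior can be revised by strong counter-evidence...
print(posterior(0.9, likelihood_h=0.1, likelihood_not_h=0.9))  # → 0.5

# ...but a prior of exactly 1 is immune to any evidence: no revision is possible.
print(posterior(1.0, likelihood_h=0.1, likelihood_not_h=0.9))  # → 1.0
```

Whatever evidence arrives, a prior of 1 puts zero weight on the alternative, which is one way of cashing out why measuring the lines leaves the illusion untouched.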

I don’t suppose for a second that there is any such thing as an ideal Bayesian brain (I’m not even sure what that would mean). As I understand it, prediction error minimisation can only be about making the best attempt at minimising the prediction error given previous best attempts, which give us the priors. How we could ever arrive at any understanding of the ‘subjective’ probability of anything (at a neuronal level) is beyond me – but perhaps that says more about my limited understanding of the topic than about the possibilities of this project.
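To illustrate (and only illustrate) what “best attempt given previous best attempts” might mean, here is a toy prediction-error-minimisation loop; the hidden signal, starting estimate and learning rate are all made-up stand-ins, with the learning rate loosely playing the role of precision weighting:

```python
# A toy prediction-error-minimisation loop (illustrative only): a running
# estimate issues a prediction, registers the error, and revises itself.

signal = 5.0          # the hidden cause generating the sensory input
estimate = 0.0        # the current prior / best guess
learning_rate = 0.3   # stand-in for how heavily the error is weighted

for _ in range(20):
    prediction_error = signal - estimate           # bottom-up error signal
    estimate += learning_rate * prediction_error   # revise the hypothesis

# Each pass starts from the previous best attempt; the estimate converges
# toward 5.0 without the system ever holding an "ideal" probability.
```

Each iteration’s output becomes the next iteration’s prior, which is all the “subjective probability” amounts to in this cartoon.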

In some cases we can alter our perception by acquiring person-level knowledge and in other cases we cannot – this is not a problem for PEM. So, as Hohwy proposes, both cognitive penetrability and cognitive impenetrability can be accommodated within the PEM theory.


  1. Rachel, thanks for the post and for bringing up a key point about PEM regarding the nature of priors and subjective probabilities. Under a "vanilla" Bayes conception, determining the distribution of your priors and their parameters for a particular analysis/calculation can be problematic. You can consult experts, you can look at previous attempts, you can take a reasonable guess, or you can combine all of these sources of information. In many situations the determination of such distributions and their parameters is infeasible or computationally intractable. As an alternative to trying to determine distributions and parameters completely a priori, there are other options. One alternative that PEM, at its core, relies upon is empirical Bayes. Under empirical Bayes, parameters and sometimes distributions for priors (and possibly their conjugates) are determined from empirical data, often the very data to be analysed/computed. This could be from a sub-set of the data.

    Related to empirical Bayes is the idea of using uninformative priors, that is, priors that have very low precision, e.g. a uniform distribution, which assigns equal support (weight) to every value in its range. These often very flat distributions have little effect on the posterior, so the posterior is informed mainly by the data. This kind of approach can be especially useful in iterative (or recursive) estimation schemes (like the Kalman filter and its relatives) as a means of kick-starting or bootstrapping the iterations.

    Using either or both approaches gives you a principled method of determining a prior and its parameters. As Jakob has mentioned to me, empirical Bayes is a nice halfway house between frequentism and subjective Bayes – a way to satisfy intuitions from both sides.
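The kick-starting idea above can be sketched with a minimal scalar Kalman-style update, assuming a constant hidden state and made-up measurements: the run starts from an uninformative prior with enormous variance, so the first datum effectively overwrites the prior and subsequent data refine the estimate.

```python
# Scalar Kalman-style update, bootstrapped with an uninformative (very flat)
# prior. The state, measurements and noise figure are illustrative only.

def kalman_update(mean, var, measurement, meas_var):
    """One Bayesian update of a scalar estimate from a noisy measurement."""
    gain = var / (var + meas_var)                # how much the new datum is trusted
    new_mean = mean + gain * (measurement - mean)
    new_var = (1 - gain) * var
    return new_mean, new_var

mean, var = 0.0, 1e6   # uninformative prior: almost no precision
for z in [2.1, 1.9, 2.0, 2.2]:
    mean, var = kalman_update(mean, var, z, meas_var=0.5)

# After the first datum the flat prior is essentially discarded; thereafter
# the estimate tightens around the measurements (roughly 2.05 here).
```

Because the initial variance dwarfs the measurement noise, the first gain is close to 1 – the data, not the prior, do the work, which is the bootstrapping point.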

  2. Hi Rachel - I think the example with the paint pot vs. hose connector nicely illustrates how unsure one can be in purported cases of cognitive penetrability and perceptual misrepresentation in general. Once we perceive things right, we are often extremely uncertain what it was like to experience things wrongly just moments before. This is part of what makes it so hard to adjudicate interesting cases of cognitive penetrability. For some reason the experiences of interest seem very fleeting and unstable. It is as if the system is trying but not really succeeding very well in explaining away the sensory input under the wrong hypothesis. In this sense at least some instances of cognitive penetrability begin to seem like cases of imagery. It is interesting too that introspective confidence drops away so rapidly. I delve into some of these issues later in the book too: in Ch 7 I connect cognitive penetrability to the notion of reality testing, and in Ch 12 I speculate about the role of introspection in PEM.

    I agree with Bryan that the notion of empirical Bayes is crucial to PEM. We meet every situation already armed with priors we have extracted from the environment in previous inference. It is a mistake, which I have encountered a lot now, to assume that these priors are exclusively for the gestalt of the presented scene: if that was the case then we would not be able to perceive novel scenes. The priors occur at all levels of the perceptual hierarchy, and they crucially include priors for low level sensory attributes (line orientations, shadows, contours etc.). So even if I get presented with a rapid series of images I have never seen before there is a vast repertoire of low level priors to draw on which quickly allows me to infer the higher level hidden causes in the images. Of course, this is not a foolproof process, and sometimes brief presentation times prevent me from getting every detail right (as in the paint pot/connector case). In those cases I may need to revisit inference and optimise the precisions of the prediction error better.

    In the lab there are various things we can do to estimate people's subjective probabilities (or their biases in forming empirical priors). For example, Colin Clifford has a nice study in Current Biology 2013 showing that people have a prior belief that other people are looking at them. In our lab we are currently trying to see how people form their priors about precisions.

  3. Thanks to Bryan and Jakob for your comments. Whilst I understand the concept and the maths of Bayes (when applied to 'known' statistical probabilities) in general, I have some difficulty understanding the concept of the Bayesian brain. On the one hand it seems straightforward (even obvious) that my mental activity is some sort of iterative process that occurs as I encounter the world; on the other hand, how do we get from single neurons to populations of neurons to significant mental activity using Bayes? Also, how does 'empirical data' differ from 'subjective data' at the neuronal level with regard to priors? Again, I suspect these questions say more about my lack of knowledge of this topic than about any flaws in the work. I will certainly look at the study/authors you recommend and would appreciate any other recommendations - the simpler the better :-)