Predictive Partnership

So there’s this notoriously mind-bending idea popularized by the neuroscientist Karl Friston called the “Free Energy Principle,” the technical details of which are far beyond my comprehension… but if you listen to its devotees talk about it, it starts to sound suspiciously close to a “grand unified theory of everything,” so you know, grain of salt. I’m not even going to try to explain that; just be aware that it’s the umbrella under which – to build on the discussion of the “action/perception cycle” from this post – we find the theories of predictive coding and active inference, which apply more directly to human relational behavior. Like the “reinforcement learning” paradigm discussed elsewhere, these theories understand human behavior functionally and deterministically, but replace the idea of “reinforcement” with the (less intuitive, but even more elegant and profound) concept of “prediction error minimization.” Rather than continuing to bumble around with introductory abstractions, though, let’s just jump in.

Predictive Coding and Active Inference

To start at the beginning, with the second law of thermodynamics: anything in the universe must “resist entropy” in order to remain an observable thing for any length of time. This process is generally labeled homeostasis: the characteristic of systems such as living organisms that keep themselves within survival-compatible conditions by actively responding to changes in the environment, “returning” themselves to equilibrium when the chaos of the external world perturbs them out of their preferred conditions. If an organism’s environment is “too cold,” for instance, the organism must take steps to seek or generate heat in order to remain alive.

From this basic premise (or so the idea goes), brains have evolved as a technology for “predicting” what the environment is likely to be like in the future, and how it is likely to respond to the organism’s behavior, ideally enabling organisms to act effectively based on guesses about an implicitly, statistically anticipated future. Mechanically, what the brain does is calculate probability-based predictions about how the environment is likely to behave, based on evidence gathered from past experience. For reference, this is the “Bayesian Brain Hypothesis,” which also implies that we don’t perceive the environment directly; what we perceive and experience is not the world itself—instead, it’s a “guess” about what the world is probably like, based on limited and imperfect sensory information that our brains are trying to make sense of as efficiently and parsimoniously as possible, or in other words, “sloppily.”
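
If it helps to see the arithmetic, here’s a toy sketch of that kind of “Bayesian guess” in Python – a single precision-weighted blend of a prior expectation and a noisy observation. The function name and the numbers are mine, invented purely for illustration; this is a cartoon of the idea, not anything from Friston or the literature.

```python
# A toy "Bayesian brain" step: perception as a precision-weighted blend of a
# prior guess and a noisy sensory observation (Gaussian assumptions throughout).

def perceive(prior_mean, prior_var, obs, obs_var):
    """Return the posterior 'guess' about some hidden quantity (e.g. room temperature)."""
    k = prior_var / (prior_var + obs_var)            # how much to trust the senses
    post_mean = prior_mean + k * (obs - prior_mean)  # shift the guess toward the evidence
    post_var = (1 - k) * prior_var                   # uncertainty shrinks after observing
    return post_mean, post_var

# Prior: "it's probably about 20 degrees in here" (held fairly loosely).
# The senses report 14 degrees, but they're noisy and imperfect.
print(perceive(prior_mean=20.0, prior_var=4.0, obs=14.0, obs_var=1.0))
# -> roughly (15.2, 0.8): what gets "experienced" lands between prior and evidence.
```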

So, the “goal”  of this process is to “minimize uncertainty,” in the sense that my perceptive behavior amounts to generating and constantly updating a “model” of how the world works.  At the same time, my active behavior is a “model” of what works in the world; my collective actions are a kind of “strategy” or a “guess” about what will optimize my survival.  (As an aside, the process of evolution as a whole can be understood similarly, if you look at species or individual organisms as “guesses” about what sort of organism will work well in a given environment.) Bringing active and perceptive behavior together, I have a perceptual model of the world that is gradually shaped by experience to “align” with the external world, while my active behavior acts upon the environment to shape the environment toward alignment with my predictions about it.  My “uncertainty” about the world is minimized when I both shape my model of the world to fit my sensory observations, and act to shape the world to fit my model. 
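
To put the “two routes” in the same toy terms: the very same prediction error can be shrunk either by revising the belief (perception) or by nudging the world toward the belief (action). Here’s a deliberately crude sketch, assuming a one-dimensional “world state” and a one-dimensional “belief” about it; the names, rates, and numbers are all invented for illustration.

```python
# Two ways to shrink the same prediction error, sketched side by side.

def update_belief(belief, world, rate=0.3):
    """Perceptual route: revise the model toward what is actually observed."""
    return belief + rate * (world - belief)

def act_on_world(belief, world, rate=0.3):
    """Active route: push the world toward what the model already expects."""
    return world + rate * (belief - world)

belief, world = 0.0, 10.0  # I expect quiet (0); the room is actually loud (10)
for _ in range(10):
    belief = update_belief(belief, world)  # perception: "okay, it's louder than I thought"
    world = act_on_world(belief, world)    # action: close the window, ask for quiet
    # Either route alone reduces the error; together, model and world meet in the middle.

print(round(abs(world - belief), 3))  # the prediction error is now close to zero
```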

This all depends on the idea that some “beliefs” may be essentially hard-coded, with my brain coming pre-programmed with beliefs like I will remain alive, such that my “survival behavior” simply functions to minimize the uncertainty generated by any discrepancies between the world I observe and the likelihood of my belief in my continued existence. In other words, like all organisms, I act with the goal of being as certain as I can be that I will remain conscious and content, based on the unique definition of “contentment” prescribed collectively by my genetics and life experience. More generally, this principle of uncertainty minimization parallels the “reinforcement” paradigm in essentially renaming “reinforcing outcomes” as “preferred states”—preferred because they correlate with – or otherwise support – the wellbeing of the organism in question.

Relational Modeling

As a species, we are quite adept at the practice of imagining or adopting perspectives other than our own, which provides a range of advantages: it let ancient humans make good guesses about things like what I would do in this situation if I were an antelope, and it allows us to effectively cooperate and compete with other members of our own species.

The “predictive models” that human beings build of other human beings are astonishing in their sophistication and complexity, though this shouldn’t be surprising given our deeply “social” nature, as a species.  Typically, the better we “know” someone else, the more deep and detailed our model of their mind becomes, to the point that  – as the cognitive scientist Douglas Hofstadter argues in his book I Am a Strange Loop – we are literally running “simulations” of other people’s minds in our own brains.   When you ask yourself what your mom or your friend or your partner would say or think about what you’re doing right now, you’re functionally simulating their mind.  By effectively building a model of another mind inside our own—the voice of a parent, or a coach, or abuser—we can continue to “be in relationship” with them (for better or worse) even when physically separate.

As I’ve said before, in a relationship, your partner often represents your most significant “environment,” such that it is they and their behavior that you are “statistically modeling.” In other words, your partner is the “uncertainty” that you are trying to minimize, through both perceptive “prediction” and active “control.” On an implicit, unconscious level, then, your behavior works to bring about the very responses you predict from your partner—even if your subjective experience is of being afraid they’re going to respond that way. In my mind, this is a usefully concise explanation for the often-frustrating “inertia” we experience in relationships, where established behavioral patterns persist despite our strong (conscious) desire to change them, such that we can feel powerless in the face of “self-fulfilling prophecies” we had little choice but to prophesy.
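
If you like seeing the loop written out, here’s a deliberately crude sketch of that kind of self-fulfilling circuit between two partners: each person’s behavior tracks what they predict from the other (bracing for coldness reads as coldness), and each prediction is then revised toward the behavior actually observed. Everything here – names, numbers, the learning rate – is invented for illustration; it’s a cartoon, not a model from the literature.

```python
# A toy two-person prediction loop, to make the "self-fulfilling prophecy" concrete.

def step(expect_a_of_b, expect_b_of_a, learning_rate=0.3):
    # Behavior tracks prediction: if A expects B to be cold, A shows up guarded,
    # and that guardedness is itself read as coldness.
    behavior_a = expect_a_of_b
    behavior_b = expect_b_of_a
    # Each partner then revises their model of the other toward what they observed.
    new_a = expect_a_of_b + learning_rate * (behavior_b - expect_a_of_b)
    new_b = expect_b_of_a + learning_rate * (behavior_a - expect_b_of_a)
    return new_a, new_b

# Scale: -1.0 = cold/guarded, +1.0 = warm/open.
ea, eb = -0.8, 0.4  # A walks in expecting coldness; B expects mild warmth.
for _ in range(30):
    ea, eb = step(ea, eb)

print(round(ea, 2), round(eb, 2))
# -> -0.2 -0.2: B has been pulled toward the coldness A predicted, and the
# pattern now confirms itself on every pass through the loop.
```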

Venturing even deeper into this simulated hall of mirrors, we also do a fair amount of simulating our partners’ simulations of us—we tend to be quite concerned with how others see us, even if we’d rather that weren’t the case. You likely use your partners’ behavior to inform your simulation of their simulation of you, such that their behavior tends to work like a kind of mirror—and you’re bound to have feelings and opinions about how this image corresponds to your own image of yourself, and how it compares to the image you’d like them to have of you, and the way you think you should be seen.  For example, if your partner acts afraid of you, you may see yourself as scary in their eyes, which might feel unfair and unjustified, make you angry, and potentially prompt you to behave in an aggressive (i.e. “scary”) fashion. 

At the end of the day, these simulations of other minds are simply a component of our overall “strategy” for environmental prediction and control, in service of the overall task of minimizing uncertainty.  On the other hand, the idea that we go through our lives carrying around versions of other minds with which we are in constant, private interaction has complex implications for how we behave generally, and especially for how we interact with the “real” people whose minds we are busy simulating.  So though your internal model of your partner’s mind is foundationally based on your experience with them, it is also necessarily colored by your individual psychology, and how you “need to understand them” in order to maintain the coherence of your collective working model of the world. 

Intimately interrelated with the function of “punishment” discussed in this post, a main utility of our models of our partners is to simulate their perspectives on our behavior in order to minimize “punishing” prediction errors… meaning part of what I’m doing when I’m seeing myself through your eyes is looking out for behavior of mine that you might not like, so that I can essentially “pre-punish” myself in the safety of my own mind, in hopes of avoiding the “real” punishment you might dish out if I behaved badly in actual interaction with you. Depending on my sensitivity to punishment, I might model your perspective on my behavior more or less “conservatively,” such that I may find myself imagining you as “meaner” than you actually are, so that I can be “extra safe” when it comes to punishment-avoidance.

Engineering Prediction Error

Given the basic aim of minimizing uncertainty, and the fundamentally homeostatic nature of “life” itself, there are formidable forces at work in our nature that resist (even “positive”) attempts at change. For the sake of efficiency and parsimony, our models of ourselves, our partners, and the world “want” to stay the same, which obviously presents an obstacle to the goal of making things better in our relationships.

According to the predictive coding framework, our implicit models are continuously updated in response to “prediction errors” – surprises or new experiences – which our active and perceptive behavior together unconsciously work to make as rare as possible. If we’re trying to upset a “bad” interpersonal equilibrium, then, we need to find a way to intentionally encounter “prediction errors,” or in other words to prove ourselves wrong, by effortfully seeking out convincing counterevidence to our existing beliefs. A bit like Odysseus ordering his crew to bind him to the mast for the chance to safely enjoy the sirens’ song, we may have opportunities to consciously arrange for our environment to “surprise” us, especially if our partners are also on board.
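
In the same toy terms as the earlier sketches, the trouble is that a confidently held belief is barely moved by ordinary, ambiguous evidence; it shifts when the counterevidence is made strong and hard to explain away – which is roughly what “engineering prediction error” amounts to. Again, the numbers and the scale are invented purely for illustration.

```python
# Why engineered surprises matter: a precision-weighted update applied to a
# belief held with high confidence (small variance).

def update(belief_mean, belief_var, evidence, evidence_var):
    k = belief_var / (belief_var + evidence_var)   # weight given to the new evidence
    return belief_mean + k * (evidence - belief_mean), (1 - k) * belief_var

# Belief: "raising a complaint goes badly," on a -1 (bad) to +1 (good) scale,
# held very confidently.
belief = (-0.8, 0.05)

# Ordinary life: one mildly okay interaction, noisy and easy to explain away.
print(update(*belief, evidence=0.2, evidence_var=1.0))
# -> roughly (-0.75, ...): the belief barely moves.

# An engineered experiment: an agreed-upon, structured conversation that produces
# a clearly good outcome the model can't easily dismiss (much less "noise").
print(update(*belief, evidence=0.6, evidence_var=0.05))
# -> roughly (-0.1, ...): a real update, because the surprise was made unambiguous.
```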

So what would it mean to “engineer prediction error” in our relational systems?  How might you and I – if we found ourselves locked into a self-reinforcing, unpleasant-but-stable behavioral pattern – go about arranging to surprise ourselves? 

Ultimately, that’s where your ingenuity as an engineer confronting your unique relationships comes into play, but here are some starting points, which I’ll probably keep updating periodically:

  • There’s a thought that mindfulness – the process of consciously turning one’s attention to intentionally observe one’s own experience – may be an example of “engineering prediction error” in individual cognition, as one is able to thoughtfully compare expectations such as “this is intolerable” to the felt reality—often something like “this is unpleasant but survivable.” As this applies in relationship: by developing a habit of intentionally observing and questioning your experience of – and responses to – your partner, you may find yourself more open to being surprised.
  • Paraphrasing a post from a mentor of mine, the practice of making and keeping agreements – collaborating to literally “change the rules” of the relational environment – may work to generate beneficial errors by preventing the predicted situation from occurring, “outlawing” predicted responses, or “forcing” you into new situations that provide surprising experiences.
  • The Strategic Therapy camp – including Milton Erickson, Cloe Madanes, and Jay Haley – has this idea of “paradoxical interventions,” or “prescribing the symptom,” in which you intentionally try to make the problem worse. The hope is that, in one way or another, this shakes things up enough to break the cycle. Use at your own risk.
  • One of Gottman’s famed seven principles, “turn towards your partner instead of away,” serves as a good example of applying intentional effort to create an opportunity to have a “surprising” experience.
  • I’ll be going more into this in a larger discussion of attachment theory, but often our behaviors attempting to reduce the uncertainty of “attachment threat” – the state of an attachment figure being either literally or figuratively “too close” or “too far away” (or both at once) – tend to “make the problem worse,” when taking into account how your partner responds to those behaviors. The common advice is to intentionally fight your instincts: let go if you’re anxious, or lean in if you’re avoidant… but that’s easier said than done, obviously.

Sources

Gregory Bateson (1972) Steps to an Ecology of Mind

Gregory Bateson (1979) Mind and Nature: a necessary unity

John Gottman (1999) The Seven Principles for Making Marriage Work

Karl Friston, Thomas Parr, & Giovanni Pezzulo (2022) Active Inference: the free energy principle in mind, brain, and behavior

Douglas Hofstadter (2007) I Am a Strange Loop

Mark Solms (2021) The Hidden Spring: a journey to the source of consciousness

About the author

Ben Cornell, Psy.D.