Steve Fleming’s work is decidedly “meta”, a Greek prefix indicating self-reference. The cognitive neuroscientist researches metacognition at University College London: what we know about what we know, what we think about what we think, and what we believe about what we believe. This may seem highly philosophical and almost impossible to study in the lab, but he has made it his mission to measure it, model it, and understand where it arises in the brain.
Fleming explored these issues in his 2021 book, Know Thyself: The Science of Self-Awareness. In a 2024 Annual Review of Psychology article, he further investigated the connection between metacognition and confidence: our sense of whether we made the right decision, whether we succeeded at the task before us, and whether our worldview is likely correct.
Fleming’s research sheds new light on why some people seem chronically insecure even though they are doing well, and why others are completely convinced that they are right about everything, despite overwhelming evidence to the contrary. In the following discussion, edited for length and clarity, Fleming shared his thoughts on some of the questions that inevitably arise when our brains evaluate their own activity.
Metacognition is a very unusual research topic. How did you end up studying this?
I studied experimental psychology at Oxford, where I had the opportunity to work with psychologist Paul Azzopardi. He studies blindsight, a condition in which certain types of brain damage allow people to use visual information to perform various tasks despite being subjectively blind. This points to an interesting dissociation between conscious experience and actual functioning.
At that point, I had no idea how to connect the more philosophical ideas about conscious experience to something that could actually be measured and studied in the lab. Since then, however, my career has inched towards achieving my original goal of using psychological and mathematical models to explain aspects of self-awareness. These are things that psychologists and philosophers have always been interested in, but they are very difficult to pin down in practice.
How do you measure something like metacognition in the lab?
The standard approach is to measure people’s objective performance on a task and their subjective evaluation of their own performance, usually in the form of confidence ratings. For example, you might ask whether a visual stimulus, known as a grating, leans to the left or right, or you might compare the brightness of two gratings presented one after the other. That would be a judgment about the outside world. Then you can add a metacognitive question, asking people to rate their confidence in that decision about the world.
If people make many of these types of decisions over time, you can observe how well confidence tracks performance from trial to trial. People who have high confidence when they are right and low confidence when they are wrong are considered to have high metacognitive efficiency. This can be used as a way to quantify metacognitive differences between individuals or groups.
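This trial-by-trial logic can be sketched numerically. The snippet below is a minimal illustration, not Fleming’s actual analysis (his lab uses signal-detection measures such as meta-d′); it scores how well confidence ratings discriminate correct from incorrect trials using a simple type-2 AUROC. The function name and toy data are invented for illustration.

```python
# Sketch: quantifying metacognitive sensitivity from trial data.
# A simple proxy is the "type-2 AUROC": the probability that a randomly
# chosen correct trial received a higher confidence rating than a
# randomly chosen incorrect trial (0.5 = chance, 1.0 = perfect).

def type2_auroc(confidence, correct):
    """Compare every (correct, incorrect) trial pair; count a win when
    the correct trial got higher confidence, half a win for a tie."""
    hits = [c for c, ok in zip(confidence, correct) if ok]
    misses = [c for c, ok in zip(confidence, correct) if not ok]
    if not hits or not misses:
        raise ValueError("need both correct and incorrect trials")
    wins = sum(1.0 if h > m else 0.5 if h == m else 0.0
               for h in hits for m in misses)
    return wins / (len(hits) * len(misses))

# High metacognitive efficiency: confident when right, doubtful when wrong.
good = type2_auroc([4, 4, 3, 1, 1], [True, True, True, False, False])  # → 1.0
# Poor efficiency: confidence is unrelated to accuracy.
poor = type2_auroc([3, 1, 4, 4, 1], [True, True, True, False, False])  # → 0.5
```

Note that both observers here are equally accurate (three correct out of five); only the alignment between confidence and accuracy differs, which is exactly the dissociation the lab tasks are designed to expose.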
Can we connect these differences to what’s going on in people’s brains?
One common way to do this is to use brain imaging techniques such as fMRI and magnetoencephalography to look at differences in brain activity and structure between people and see what aspects of brain function give some people better metacognition than others. However, we realized that there are limitations to that approach.
So the field changed. More recently, researchers have instead focused on the relationship between patterns of brain activity and trial-to-trial variation in how confident individuals are about the decisions they make in the experiment.
Basically, we know that there are different stages involved in tracking the uncertainty of our own performance on a particular task.
For example, if you are trying to discern the direction of a line, neurons in parts of the brain that are sensitive to different possible line directions fire to varying degrees, reflecting any uncertainty in what you see. Research has shown that when there is conflicting information at that level, it affects people’s confidence ratings in the task.
There is also data suggesting another, higher-level evaluation stage: regions in the prefrontal cortex convey confidence in a more general way that is not tied to the specific inputs received during a particular task. This process continues even after a decision is made, as the brain weighs information that was not initially available. It’s as if the brain is still trying to decide whether the choice was right or wrong.
It seems to happen almost automatically, with no external direction or conscious effort required. When we consciously engage in metacognition and ask people to report how they feel about their performance, they appear to engage yet another stage of processing. This involves the frontal pole, a region near the front of the cortex that is particularly well developed in humans compared with other primates. These areas are activated when people make metacognitive inferences that are used to communicate to others or to consciously control behavior, as we asked them to do in these experiments.
What happens when metacognition doesn’t work properly?
A pervasive sense of self-doubt is regularly associated with symptoms of anxiety and depression. We know that people who suffer from this general feeling of self-doubt aren’t necessarily performing any task worse than the next person. So one of the mysteries we’re trying to solve is why some people don’t learn from their performance. Why are they unable to recognize that they are actually doing well and update their beliefs about their skills and abilities accordingly?
What we found is that at the level of individual trials, people with anxiety and depression are just as likely to show high confidence as everyone else. But there is an asymmetry in how they learn from it. They are sometimes very confident that they performed well, yet they do not incorporate those signals into their broader estimates of how well they are doing in these experiments, and perhaps in their daily lives. At the same time, they fully incorporate evidence from trials in which they were less confident in their performance.
Interestingly, this is not the case when we give explicit feedback about their performance. When we tell them they are right, they recognize that they are actually doing very well.
How can we apply this to help people who suffer from self-doubt?
Recent research has shown that in people with high levels of anxiety, confidence gets worse over time. If you probe confidence right after they make a decision, it is a little lower than average. But if we wait a few seconds, all else being equal, they become even less confident about their previous decision. And the effect only grows with time.
What we’re seeing is that they’re engaging all those brain mechanisms we talked about earlier to reflect on their own decisions and actions. If you tend to be more anxious, those processes can, over time, make you even less confident than you would otherwise be. It’s as if they spend too much time ruminating about their performance.
So one concrete piece of advice that can be extracted from these findings is that if you know you’re prone to that kind of bias, it’s best not to think about it too much once you’ve made your choice. Even if you immediately think, “Okay, yeah, that was the obvious thing to do,” just leave it at that.
What about people who are more confident than they should be? That kind of overconfidence seems quite useful in today’s society, too.
It is very interesting to think about what is adaptive for success at a societal level. One of the hypotheses I put forward in the book is that a slightly overconfident worldview paired with strong metacognitive sensitivity, which helps you realize when you really are wrong, can be a very powerful combination. Because, as you say, a lot of research suggests that people who are perhaps a little overconfident do better socially. People tend to like them and want them in positions of power because they appear decisive.
At the same time, you don’t want people who don’t have the proper self-awareness to bluff their way to the top and into positions of power.
So I think there’s a sweet spot where you need to be a little overconfident in order to be seen as competent, but at the same time you also need to avoid being too seduced by confidence, whether it’s your own confidence or the confidence of others.
We also found that people with more open worldviews, who accept that their own view may not be the only valid one and believe it is important to listen to the views of those who disagree with them, tend to have more accurate metacognition in the kinds of tasks that can be studied in the lab. Accurate metacognition prompts us to seek out new information and update our beliefs when they may be inaccurate. In this way, there is solid evidence to suggest that these signals may help us build a more accurate worldview over time.
Is it possible to train metacognition using these types of tasks? Also, do you think it could help reduce the social tensions we are experiencing today?
I don’t think a lack of metacognition is the only reason we see polarization in today’s society. But without getting into politics, our research provides some tools people can use to develop the ability to think critically about their own thoughts, knowledge, and decisions.
The obvious place to do this is education, and I believe there is a lot of potential in education. Parents and teachers implicitly encourage children to become more self-aware, but they rarely do so explicitly.
We don’t teach metacognition the same way we teach math, history, or physics. I think this may be a very powerful way to develop a more open mindset.
This article originally appeared in Knowable Magazine, a nonprofit publication dedicated to making scientific knowledge accessible to all. Sign up for Knowable Magazine’s newsletter.