I’m sure you can picture the scene. A group sits down to make a critically important decision. Much discussion follows. One person after another lays out their views; lots of points are made. There are some clear areas of agreement. Nods all round. The answer starts to become clear. With a few changes a consensus develops. The decision is made. Everyone feels good, confident. This is definitely the right decision. No doubt. We all agree.
But is it?
Many people will have experienced or heard about situations where things didn’t quite go to plan and it transpired the group blundered. The term groupthink has become relatively well known.
This excellent book, written by Cass Sunstein and Reid Hastie, starts with a simple observation and a simple question: in many fields we endow groups of people with the authority or responsibility to make key decisions.
Do groups usually correct individual mistakes?
The simple answer is that they do not, and they can even amplify mistakes. This basic insight has great relevance to pension funds, investment committees and all sorts of other groups tasked with making meaningful decisions in complex domains.
In this highly readable book the authors take us on a quick tour of the taxonomy of “bugs” in group decision making. Their approach is balanced, however: they also lay out the ways in which groups might be thought to do better than individuals, and the circumstances in which they can.
Understanding how and why groups blunder is not staggeringly complex, but requires a focused and methodical examination of human nature and biases, with social influences playing a big role throughout. Unpacking some of the sources of group failure in this way starts to yield immediately actionable insights on how to correct for these issues. The authors also helpfully guide readers through a number of real-life experiments that support the points they make.
Individual and Group Judgements
We as individuals use judgement heuristics (rules of thumb), and have biases. We can be overconfident and place too much weight on our own experience and opinions. These behavioural traits are well known on an individual level. When we get together to debate and make decisions as a group, these traits can result in “garbage in, garbage out”.
Individual confidence tends to increase after group deliberation. Deliberative groups (those that deliberate before arriving at a view) can be overconfident and wrong; this can have serious consequences in government policy, corporate strategy and for institutional investors, including pension funds (tasked with making investment decisions for large pools of invested assets).
In Defence of Groups – Wise Crowds?
Surely groups ought to be:
- At least as good as their most informed member: if that individual can make their case persuasively or clearly, others will realise their own errors and get behind the better informed viewpoint – eg “why are all manhole covers round?”
- Able to aggregate information effectively, producing a fuller picture than that held by any individual – particularly if the group contains no experts but a range of dispersed information
- Capable of synergy: the give-and-take of group discussion might lead the group to sift information in a way that uncovers insights the individuals would not have reached by themselves.
Is there evidence that these dynamics function in practice?
In practice there are four key reasons why groups fail, and this is really the central insight of the whole book:
- Groups fail to successfully aggregate the information held by their members, focusing on information that is widely shared rather than that known by only one or two members
- Groups become polarized, adopting a more extreme position than the average of the members pre-deliberation
- Groups fall victim to decision-making cascades, whereby early opinions excessively influence the direction of the decision
- Groups amplify the individual biases of their members
Let’s draw a distinction between different types of group and different types of problem:
Statistical v deliberative groups: statistical groups each independently contribute a point estimate of an unknown variable (eg, the temperature of a room). Deliberative groups discuss the answer to a particular problem. Most of the issues with groups occur with deliberative groups.
“Eureka” problems are ones where the true answer, once voiced, is immediately obvious to the rest of the group (“why are manhole covers round?”). Problems with an outcome that is certain and measurable (eg the temperature of a room) are different from those where outcomes are uncertain and not immediately measurable (eg investment decisions).
It is clear that the decisions taken by investment committees and trustees frequently fall into the toughest category where group failures are most likely!
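The advantage of statistical groups is easy to demonstrate with a toy simulation (my own sketch, not from the book): if each member’s independent estimate is the truth plus unbiased noise, averaging the estimates cancels much of the noise, so the group average is typically far closer to the truth than a typical individual.

```python
import random

random.seed(42)

TRUE_TEMP = 21.0  # the unknown quantity, eg room temperature in Celsius


def individual_estimate():
    # Each member guesses independently: the truth plus random noise
    return TRUE_TEMP + random.gauss(0, 3.0)


# Compare a typical individual's error with the error of the group average
individuals = [individual_estimate() for _ in range(100)]
group_average = sum(individuals) / len(individuals)

avg_individual_error = sum(abs(e - TRUE_TEMP) for e in individuals) / len(individuals)
group_error = abs(group_average - TRUE_TEMP)

print(f"typical individual error: {avg_individual_error:.2f}")
print(f"group average error:      {group_error:.2f}")
```

Note the assumption doing the work here: errors must be independent and roughly unbiased. Deliberation destroys exactly that independence, which is one way to see why deliberative groups lose this advantage.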
Information Sharing – the Common Knowledge Trap
Groups often risk falling into the common knowledge trap: information held in common by multiple group members is given more weight than it ought to be, while significant information held by only one or two members can be ignored.
Self-silencing is a big threat to effective group decision making.
There can often be social pressure or subtle penalties against speaking out, especially if what the individual has to say is jarring or disruptive. In practice this effect can depend on the self-confidence, and subtly on the status, of the individual involved, meaning that men, women, minorities and certain occupations will all experience it differently.
Like-minded groups can often end up, post-deliberation, in a more extreme position than any of them started in pre-deliberation. This is most clearly visible with respect to political affiliation. The authors cite interesting studies showing that groups of left-of-centre or right-of-centre individuals will tend to adopt more extreme positions post-deliberation than their average pre-deliberation, and will tend toward greater consensus around the more extreme position. Why does this happen?
Individual opinions can turn more extreme when corroborated by others, and confidence can also increase once an individual learns their view is shared by others. Social pressures/forces will cause members to adjust, at least slightly, to the dominant position.
Polarization doesn’t always lead away from the right answer, of course; if the members of the group are individually leaning toward the right answer, then group polarization is likely to produce a decisive swing to the correct view. However, groups blunder badly when they polarize toward an incorrect answer, becoming more confident in that answer in the process.
The human being is at root a social animal; language may well be the most subtle and engaging social mechanism in the animal kingdom, and we are wired to synchronise with other humans from birth. Hence what others do or say will influence what we do or say. What can easily happen is that subsequent speakers defer to the opinion of earlier ones, and later speakers, hearing two or more people state the same belief, may assume these beliefs were arrived at independently (and therefore have higher reliability). The authors describe an interesting experiment in which subjects consistently make obviously false statistical judgements, influenced by what earlier subjects stated.
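The cascade dynamic can also be sketched as a toy simulation (my own construction, with a made-up deference rule, not the book’s model): agents receive noisy private signals about which of two options is correct, speak in sequence, and defer to any established majority of earlier speakers. A meaningful fraction of runs herd on the wrong answer, even though most private signals point the right way.

```python
import random

random.seed(7)

# The true state is "A". Each agent's private signal is correct with
# probability 0.6. Agents speak in turn; under a simple (hypothetical)
# rule, an agent follows the majority of earlier public statements once
# that majority leads by 2 or more, and otherwise follows their own
# signal. Once a 2-vote lead forms, later private signals are ignored.


def run_cascade(n_agents=20, signal_accuracy=0.6):
    statements = []
    for _ in range(n_agents):
        signal = "A" if random.random() < signal_accuracy else "B"
        lead = statements.count("A") - statements.count("B")
        if lead >= 2:
            statements.append("A")      # herd on A regardless of signal
        elif lead <= -2:
            statements.append("B")      # herd on B regardless of signal
        else:
            statements.append(signal)   # no clear majority: use own signal
    return statements


trials = 1000
wrong_cascades = sum(1 for _ in range(trials) if run_cascade()[-1] == "B")
print(f"groups settling on the wrong answer: {wrong_cascades} of {trials}")
```

The point of the sketch is not the exact numbers but the mechanism: two early wrong signals are enough to lock the whole group in, because later speakers treat the majority as independent evidence rather than as an echo of the first few voices.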
If consensus is prized, and known to be prized, then self silencing is more likely.
Groups often amplify natural human biases such as availability (if something can be easily called to mind, it is considered more likely), representativeness (if someone superficially appears to fit a particular mould, we are likely to judge them as more suitable), framing, egocentric bias, the planning fallacy and overconfidence bias.
Why? Informational influences and social pressures are again at work.
Having understood the ways in which groups blunder, the authors guide us through ways we can make groups function better – I discuss this in part 2 here.