Tag: behavioural

[Books] Wiser – how group decisions can let us down

I’m sure you can picture the scene. A group sits down to make a critically important decision. Much discussion follows. One person after another lays out their views; lots of points are made. There are some clear areas of agreement. Nods all round. The answer starts to become clear. With a few changes a consensus develops. The decision is made. Everyone feels good, confident. This is definitely the right decision. No doubt. We all agree.

But is it?

Many people will have experienced, or heard about, situations where things didn’t quite go to plan and it transpired that the group had blundered. The term “groupthink” has become relatively well known.

This excellent book, written by Cass Sunstein and Reid Hastie, starts with a simple observation and a simple question. The observation: in many fields we endow groups of people with the authority or responsibility to make key decisions. The question:

Do groups usually correct individual mistakes?

The simple answer is that they do not, and they can even amplify mistakes. This basic insight has great relevance to pension funds, investment committees and all sorts of other groups tasked with making meaningful decisions in complex domains.

In this highly readable book the authors take us on a quick tour of the taxonomy of “bugs” within group decision making. Their approach is balanced, however: they also lay out the ways in which groups might be expected to do better than individuals, and the circumstances in which they can.

Understanding how and why groups blunder is not staggeringly complex, but requires a focused and methodical examination of human nature and biases, with social influences playing a big role throughout. Unpacking some of the sources of group failure in this way starts to yield immediately actionable insights on how to correct for these issues. The authors also helpfully guide readers through a number of real-life experiments that support the points they make.


Individual and Group Judgements

We as individuals use judgement heuristics (rules of thumb), and we have biases. We can be overconfident and place too much weight on our own experience and opinions. These behavioural traits are well known at the individual level. When we get together to debate and make decisions as a group, they can produce “garbage in, garbage out”.

Individual confidence tends to increase after group deliberation. Deliberative groups (those that deliberate before arriving at a view) can be overconfident and wrong, and this can have serious consequences in government policy, in corporate strategy and for institutional investors, including pension funds tasked with making investment decisions for large pools of assets.

In Defence of Groups – Wise Crowds?

Surely groups ought to be:

  • At least as good as their most informed member: if that individual can make their case persuasively and clearly, others will realise their own errors and get behind the better-informed viewpoint – eg “why are all manhole covers round?”
  • Able to aggregate information effectively, building a fuller picture than any individual holds – particularly if the group contains no experts but a range of dispersed information
  • Capable of synergy: the give-and-take of group discussion might lead the group to sift information in a way that uncovers insights the individuals would not have reached by themselves.

Is there evidence that these dynamics function in practice?

In practice there are four key reasons why groups fail – and this is really the central insight of the whole book:

  1. Groups fail to aggregate the information held by their members, focusing on information that is widely shared rather than information known by only one or two members

  2. Groups become polarized, adopting a more extreme position than the average of the members’ pre-deliberation views

  3. Groups fall victim to decision-making cascades, whereby early opinions exert excessive influence on the direction of the decision

  4. Groups amplify the individual biases of group members


Let’s draw a distinction between different types of group and different types of problem:


Statistical v deliberative groups: in a statistical group, each member independently contributes a point estimate of an unknown quantity (eg the temperature of a room), and the estimates are combined. Deliberative groups discuss their way to an answer to a particular problem. Most of the issues with groups occur in deliberative groups.
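
To make the distinction concrete, here is a minimal sketch of a statistical group – my own illustration, not an example from the book. Fifty members each give an independent, noisy, unbiased estimate of a room’s temperature, and the group answer is the simple average:

    import random

    random.seed(42)

    TRUE_TEMPERATURE = 21.0  # the unknown quantity, in degrees C

    # Each member contributes an independent, noisy point estimate:
    # unbiased, but with substantial individual error.
    estimates = [TRUE_TEMPERATURE + random.gauss(0, 3.0) for _ in range(50)]

    group_estimate = sum(estimates) / len(estimates)

    mean_individual_error = sum(abs(e - TRUE_TEMPERATURE) for e in estimates) / len(estimates)
    group_error = abs(group_estimate - TRUE_TEMPERATURE)

    print(f"typical individual error: {mean_individual_error:.2f} degrees")
    print(f"error of the averaged group estimate: {group_error:.2f} degrees")

With independent, unbiased errors, the average reliably beats the typical member – the classic wisdom-of-crowds result. Deliberation destroys precisely that independence, which is where the trouble below begins.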


“Eureka” problems are ones where the true answer, once voiced, is immediately obvious to the rest of the group (“why are manhole covers round?”). Problems with an outcome that is certain and measurable (eg the temperature of a room) are different from those where outcomes are uncertain and not immediately measurable (eg investment decisions).


It is clear that the decisions taken by investment committees and trustees frequently fall into the toughest category – the one where group failures are most likely!


Information Sharing – the Common Knowledge Trap

Groups often risk falling into the common knowledge trap: information held by multiple group members is given more weight than it ought to be, while significant information held by only one or two members can be ignored.
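
A stylised toy example – my own, in the spirit of the “hidden profile” experiments this literature draws on – shows how the trap works. Everyone knows candidate A’s three strengths; candidate B’s six strengths are scattered one per member, so every member individually leans toward A even though the pooled information favours B (the names and numbers here are invented):

    # Toy "hidden profile": shared information favours option A,
    # but the unshared information, pooled together, favours option B.
    shared_pros_a = ["experienced", "articulate", "well connected"]  # known to all
    unshared_pros_b = ["better track record", "lower fees", "stronger team",
                       "better risk controls", "aligned incentives", "scalable process"]

    # Each of six members privately knows all of A's pros but only ONE of B's.
    for member in range(6):
        pros_b_known = [unshared_pros_b[member]]
        leaning = "A" if len(shared_pros_a) > len(pros_b_known) else "B"
        print(f"member {member + 1}: sees {len(shared_pros_a)} pros for A, "
              f"{len(pros_b_known)} for B -> leans {leaning}")

    # Pooled, B wins 6 pros to 3 -- but discussion gravitates to what
    # everyone already knows, so the shared case for A dominates the airtime.
    print(f"pooled: A has {len(shared_pros_a)} pros, B has {len(unshared_pros_b)}")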


Self-silencing is a big threat to effective group decision making.

There can often be social pressure against speaking out, or subtle penalties for doing so, especially if what the individual has to say is jarring or disruptive. In practice this effect can depend on the self-confidence and, more subtly, on the status of the individual involved, meaning that men, women, minorities and people in certain occupations will all experience it differently.


Polarization

Like-minded groups can often end up, post-deliberation, in a more extreme position than the one they started from. This is most clearly visible with respect to political affiliation. The authors cite interesting studies showing that groups of left-of-centre or right-of-centre individuals will tend to adopt more extreme positions post-deliberation than their pre-deliberation average, and will tend toward greater consensus around the more extreme position. Why does this happen?

Individual opinions can turn more extreme when corroborated by others, and confidence can also increase once an individual learns their view is shared by others. Social pressures will cause members to adjust, at least slightly, toward the dominant position.

Polarization doesn’t always lead away from the right answer, of course: if the members of the group individually lean toward the right answer, group polarization is likely to produce a decisive swing to the correct view. But groups blunder badly when they polarize toward an incorrect answer, becoming more confident in that answer in the process.

Cascades

The human being is at root a social animal; language may well be the most subtle and engaging social mechanism in the animal kingdom, and we are wired to synchronise with other humans from birth. Hence what others do or say will influence what we do or say. What can easily happen is that subsequent speakers defer to the opinion of earlier ones, while later speakers, hearing two or more people state the same belief, assume those beliefs were arrived at independently (and therefore treat them as more reliable). The authors describe an interesting experiment in which subjects consistently made obviously false statistical judgements, influenced by what earlier subjects had stated.
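
The mechanism is easy to simulate. The sketch below is not the authors’ experiment but the standard textbook cascade model: each person receives a private signal that points to the better option 70% of the time, people announce their choices in turn, and each treats earlier announcements as if they were independent signals. Once the announcements lead by two, everyone rationally ignores their own information – and if the first couple of signals happen to be wrong, the whole group can follow them (all parameters are illustrative):

    import random

    random.seed(7)

    TRUTH = "A"            # the objectively better option
    SIGNAL_ACCURACY = 0.7  # each private signal points to the truth 70% of the time

    def private_signal():
        return TRUTH if random.random() < SIGNAL_ACCURACY else "B"

    announcements = []
    for person in range(12):
        signal = private_signal()
        lead = announcements.count("A") - announcements.count("B")
        # Treating each earlier announcement as an independent signal,
        # a lead of 2+ outweighs your own single private signal.
        if lead >= 2:
            choice = "A"
        elif lead <= -2:
            choice = "B"
        else:
            choice = signal  # otherwise, go with your own information
        announcements.append(choice)
        print(f"person {person + 1}: private signal {signal}, announces {choice}")

Once a cascade starts, later announcements add no new information – but a listener cannot tell the difference between twelve independent opinions and two opinions echoed ten times.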

If consensus is prized, and known to be prized, then self-silencing is more likely.

Amplification

Groups often amplify natural human biases such as availability (if something can easily be called to mind, it is judged more likely), representativeness (if someone superficially appears to fit a particular mould, we are likely to judge them as more suitable), framing, egocentric bias, the planning fallacy and overconfidence.

Why? Informational influences and social pressures are again at work.

Having understood the ways in which groups blunder, the authors guide us through ways we can make groups function better – I discuss this in part 2 here.

Books – The Undoing Project

My beachside reading on the recent winter trip to Australia was the excellent “The Undoing Project” by Michael Lewis.
Obviously when it comes to Michael Lewis expectations are high, both for the quality of the writing and the depth of the research behind it. This is no exception. Some of the specific elements are familiar, but Lewis does a great job of weaving the intellectual content of the Kahneman/Tversky collaboration into a compelling story about their lives and the contemporary history of the time, which are plenty interesting in their own right. I’d say the only negative points are an oddly placed chapter at the start which rehashes many of the ideas from Moneyball (it was interesting, just seemed oddly placed relative to the rest of the book) and the slight lack of complete chronological order that comes with the style of hopping around and pursuing digressions. That style probably makes the book more readable, to be honest, but I found myself having to go back and review sections to get the full Kahneman/Tversky timeline straight in my mind.
Some of the key behavioural science insights of Kahneman and Tversky that Lewis covers and articulates so well include the following.
Kahneman and Tversky understood that the errors the mind makes offer at least a partial insight into the mechanism behind decision making – a bit like optical illusions offering an insight into the workings of vision.
“Features of similarity”. When comparing two objects, the mind tends to make a list of features, then count up and compare the features the two objects have in common – in particular, one object with reference to the other. For example, Tel Aviv is frequently thought to be like New York, but New York is not thought to be like Tel Aviv: New York has more noticeable features than Tel Aviv. An absence of a feature is also a feature. “Similarity increases with the addition of common features, or the absence of distinctive features.”
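Tversky formalised this as the “contrast model”: similarity of a to b rises with the features they share and falls with each object’s distinctive features, and because the two sets of distinctive features can carry different weights, similarity is asymmetric. A minimal sketch, with made-up feature sets and weights chosen purely for illustration:

    # Tversky's contrast model:
    #   sim(a, b) = theta*|A & B| - alpha*|A - B| - beta*|B - A|
    # With alpha > beta, the subject's own distinctive features hurt
    # similarity more than the referent's, so sim(a, b) != sim(b, a).

    def similarity(a, b, theta=1.0, alpha=0.8, beta=0.2):
        common = len(a & b)
        distinctive_subject = len(a - b)   # features of a that b lacks
        distinctive_referent = len(b - a)  # features of b that a lacks
        return theta * common - alpha * distinctive_subject - beta * distinctive_referent

    # Invented feature sets: New York simply has more salient features.
    tel_aviv = {"coastal", "cosmopolitan", "nightlife", "tech scene"}
    new_york = {"coastal", "cosmopolitan", "nightlife", "tech scene",
                "skyscrapers", "subway", "Wall Street", "Broadway", "five boroughs"}

    print(similarity(tel_aviv, new_york))  # 3.0: Tel Aviv seems a lot like New York
    print(similarity(new_york, tel_aviv))  # 0.0: New York's extra features count against it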
Transitivity in decision making. Transitivity is violated if someone picks tea over coffee, coffee over hot chocolate, and then turns around and picks hot chocolate over tea. The features-of-similarity model helps explain why people violate transitivity in this way: the context in which a choice is presented affects the choice. When presented with a choice, people aren’t assessing each object on a linear scale and evaluating it relative to some representative model of ideality; they are essentially counting up the features they notice, and the context in which a choice is presented can have a big effect on which features are noticeable – for example, two Americans meeting in New York vs meeting in Togo. “The similarity of objects is modified by the way in which they are classified.”
First heuristic – representativeness. When people make judgements they compare whatever they are judging to some model in their minds. How closely do the approaching clouds represent my mental model of a storm? How closely does Jeremy Lin fit my model of an NBA basketball player? It’s why players with “man boobs” don’t get selected in the NBA. It’s not that the rule of thumb is always wrong – in many ways it can work quite well. But when it does go wrong, it does so in systematic ways.
Second heuristic – availability. The more easily you can call a scenario to mind, the more “available” it is, and the more probable we judge it to be – for example, people guess that more English words start with K than have K as their third letter, when the reverse is true. Again, this can often work well, but not in situations where misleading examples come easily to mind.
A few of the one-liners quoted in the book:

  • People predict by making up stories
  • People predict very little and explain everything
  • People live under uncertainty whether they like it or not
  • People believe they can tell the future if they work hard enough
  • People accept any explanation as long as it fits the facts
  • The handwriting was on the wall, it was just the ink that was invisible
  • Man is a deterministic device thrown into a probabilistic universe
Theory of regret – the emotion linked to “coming close and failing”. It skews decisions where people face a choice between a sure thing and a gamble. Regret is associated with acts that modify the status quo: the pain is greater when a bad outcome followed a decision that modified the status quo than one that retained it. Regret is closely linked to responsibility – the more control you felt you had, the greater the regret.
Anticipation of regret is as powerful as regret itself. We look at a decision and anticipate the regret we might feel. Often we do not experience actual regret, as it is too difficult to be sure of the counterfactual.
This all contravened expected utility theory (a central part of many economic models of how individuals make decisions). Expected utility theory wasn’t just wrong; it couldn’t be defended against contradictions. The Allais paradox is a good example: two choice problems framed at different probability levels but with the same utility trade-off underlying both, where people choose differently depending on whether the odds are medium or long.
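For reference, the classic version of the paradox runs as follows (this is the standard textbook presentation, not necessarily the exact numbers Lewis uses):

    Gamble A: $1m for certain           Gamble B: 10% $5m, 89% $1m, 1% $0
    Gamble C: 11% $1m, 89% $0           Gamble D: 10% $5m, 90% $0

    Most people pick A over B, and D over C. Setting U($0) = 0:

      A > B  requires  U($1m) > 0.10 U($5m) + 0.89 U($1m),  i.e.  0.11 U($1m) > 0.10 U($5m)
      D > C  requires  0.10 U($5m) > 0.11 U($1m)

    No utility function U can satisfy both preferences at once.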
A greater sensitivity to negative outcomes – a heightened sensitivity to pain – was helpful for survival. A happy species endowed with an infinite appreciation of pleasure and a low sensitivity to pain would probably not survive the evolutionary battle.
Prospect theory – people approach risk very differently when it comes to losses rather than gains: risk seeking in the domain of losses and risk averse in the domain of gains. People respond to changes rather than absolute levels – changes relative to some reference point, some representation of the status quo. In experiments the reference point is usually clearly definable; in the real world, not so much.
People also do not respond to probability in a straightforward manner. People will pay dearly for certainty, yet they treat a 90% probability as less likely than it really is (they do not treat a 90% chance as nine times more likely than a 10% chance). When it comes to small probabilities they do not treat a 4% chance as twice as likely as a 2% chance, and if you tell someone the odds are one in a billion they treat it more like one in ten thousand – worrying too much about it, and paying more than they ought to rid themselves of that worry.
One consequence of prospect theory is that you should be able to alter the way people approach risk (risk seeking vs risk averse) simply by presenting the same problem framed in terms of losses rather than gains.
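A minimal sketch of these ideas, using the functional forms and median parameter estimates from Tversky and Kahneman’s later (1992) cumulative prospect theory paper – my illustration, not something Lewis derives in the book:

    # Prospect theory value function (Tversky & Kahneman 1992 median parameters):
    # diminishing sensitivity in both directions, losses looming ~2.25x larger.
    ALPHA = 0.88   # curvature for gains and losses
    LAMBDA = 2.25  # loss aversion coefficient
    GAMMA = 0.61   # probability-weighting curvature

    def value(x):
        return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

    def weight(p):
        # Inverse-S probability weighting: overweights small probabilities,
        # underweights large ones.
        return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

    # Gains frame: sure $500 vs a 50% shot at $1,000 -> the sure thing wins
    # (risk averse). Weighting is ignored here; w(0.5) ~ 0.42 wouldn't change
    # either ranking.
    print(value(500), "vs", 0.5 * value(1000))    # ~237 vs ~218

    # Losses frame: sure -$500 vs a 50% shot at -$1,000 -> the gamble wins
    # (risk seeking), because the sure loss hurts more.
    print(value(-500), "vs", 0.5 * value(-1000))  # ~-533 vs ~-491

    # Probability distortion: a 90% chance feels like ~71%; a 1% chance ~5.5%.
    print(weight(0.90), weight(0.01))

The same $500 stake flips the decision maker from risk averse to risk seeking purely because the frame moves the reference point – which is exactly the framing effect described above.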
The endowment effect (Thaler) – people attach a strange amount of extra value to what they own (compared to what they don’t), and as a result they fail to make logical trades and switches.
The Undoing Project. The title itself refers to a theory similar to regret: counterfactual emotions, the feelings that spur people’s minds to spin alternative realities. The intensity of the emotions of “unrealized reality” is proportional to two things: the desirability of the alternative, and the possibility of the alternative.
Experiences that led to regret and frustration were not always easy to undo. Frustrated people needed to undo some feature of their environment, whereas regretful people needed to undo their own actions; but the basic rules of undoing are the same: they require a more or less plausible path to an alternative state. Imagination wasn’t a flight with limitless possibilities; rather, it was a tool for making sense of a world of unlimited possibilities by limiting them. The imagination obeyed rules: the rules of undoing. The more items required to undo an event, the less likely the mind was to undo it. “The more consequences an event has, the larger the change that is involved in eliminating the event.” Also, an event becomes gradually less changeable the more it recedes in time.