A mixture of task and communication checks helps manage the problem of proliferating complexity in the modern world – that’s the relatively simple premise of Atul Gawande’s short but excellent book on checklists, The Checklist Manifesto.
The book is driven mainly by a medical context, that being the author’s background, and is centred around the astounding data from a study supported by the World Health Organisation into the power of checklists. The context is then broadened to many facets of modern life: examples and applications are also cited from construction, aviation and even finance. The humble checklist can dramatically improve baseline performance – perhaps more so than even the best new drugs or surgical technologies.
Gawande draws a key distinction between two types of error: (1) errors of ignorance (where we don’t know enough) and (2) errors of ineptitude (failing to correctly apply what we do know). Most of the failures in the modern world are of the second kind.
What were the key insights?
Well, here’s a checklist –
My beachside reading on the recent winter trip to Australia was the excellent “The Undoing Project” by Michael Lewis.
Obviously when it comes to Michael Lewis expectations are high, both for the quality of the writing and the depth of the research behind it. This is no exception. Some of the specific elements are familiar, but Lewis does a great job of weaving the intellectual content of the Kahneman/Tversky collaboration into a compelling story about their lives and the contemporary history of the time, which are plenty interesting in their own right. I’d say the only negatives are an oddly placed chapter at the start that rehashes many of the ideas from Moneyball (it was interesting, it just seemed oddly placed relative to the rest of the book) and the slight lack of a complete chronological order that comes with the style of hopping around and pursuing digressions. It probably makes the book more readable, to be honest, but I found myself having to go back and review sections to get the full Kahneman/Tversky timeline straight in my mind.
Some of the key behavioural science insights of Kahneman and Tversky that Lewis covers and articulates so well include the following.
Kahneman and Tversky understood that the errors the mind made offered you at least a partial insight into the mechanism behind decision making. A bit like optical illusions offering an insight into the workings of vision.
“Features of similarity”. When comparing two objects, the mind tends to make a list of features, then count up and compare the features the two objects have in common – in particular, one object with reference to the other. For example, Tel Aviv is frequently thought to be like NYC, but NYC is not thought to be like Tel Aviv: NYC has more noticeable features than Tel Aviv. The absence of a feature is also a feature. “Similarity increases with the addition of common features, or the absence of distinctive features.”
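Tversky’s contrast model captures this asymmetry: similarity is a weighted count of common features minus penalties for each object’s distinctive ones. A minimal sketch (the feature sets and weights here are invented purely for illustration, not taken from the book):

```python
def contrast_similarity(a, b, theta=1.0, alpha=0.8, beta=0.3):
    """Tversky's contrast model: sim(a, b) = theta*|A∩B| - alpha*|A-B| - beta*|B-A|.
    With alpha > beta, the subject's distinctive features count against the match
    more than the referent's do -- which makes similarity asymmetric."""
    return theta * len(a & b) - alpha * len(a - b) - beta * len(b - a)

# Invented, illustrative feature sets: the more prominent city has more of them.
nyc      = {"big", "coastal", "skyline", "finance", "media", "subway"}
tel_aviv = {"big", "coastal", "beach"}

print(round(contrast_similarity(tel_aviv, nyc), 2))  # 0.0  -- "Tel Aviv is like NYC"
print(round(contrast_similarity(nyc, tel_aviv), 2))  # -1.5 -- but not the reverse
```

Because NYC has more distinctive features, judging NYC against Tel Aviv incurs a bigger penalty than the other way round, reproducing the one-way “is like” judgement.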
Transitivity in decision making. Transitivity is violated if someone picks tea over coffee, coffee over hot chocolate, and then turns around and picks hot chocolate over tea. The features-of-similarity model helps explain why people violate transitivity in this way. The context in which a choice is presented affects the choice. When presented with a choice, people aren’t assessing each object on a linear scale and evaluating it relative to some representative model of ideality; they are essentially counting up the features they notice. But the context in which a choice is presented can have a big effect on which features are noticeable – for example, two Americans meeting in NY vs meeting in Togo. “The similarity of objects is modified by the way in which they are classified.”
First heuristic – representativeness. When people make judgements they compare whatever they are judging to some model in their minds. How closely do the approaching clouds represent my mental model of a storm? How closely does Jeremy Lin represent my model of an NBA basketball player? It’s why players with “man boobs” don’t get selected in the NBA. It’s not that the rule of thumb is always wrong – in many cases it works quite well. But when it does go wrong, it does so in systematic ways.
Second heuristic – availability. The more easily you can recall a scenario, the more “available” it is, and the more probable we judge it to be. For example, words starting with K vs words with K as the third letter. Again, this can often work well – but not in situations where misleading examples come easily to mind.
- People predict by making up stories
- People predict very little and explain everything
- People live under uncertainty whether they like it or not
- People believe they can tell the future if they work hard enough
- People accept any explanation as long as it fits the facts
- The handwriting was on the wall, it was just the ink that was invisible
- Man is a deterministic device thrown into a probabilistic universe
Theory of regret – an emotion linked to “coming close and failing”. It skews decisions where people face a choice between a sure thing and a gamble. Regret is associated with acts that modify the status quo: the pain is greater when a bad decision modified the status quo than when it retained the status quo. Regret is also closely linked to responsibility – the more control you felt you had, the greater the regret.
Anticipation of regret is actually as powerful as regret itself. We look at a decision and anticipate the regret we might feel. Often we do not experience actual regret as it is too difficult to be sure of the counterfactual.
This all contravened expected utility theory (a central part of some economic models of how individuals make decisions). Expected utility theory wasn’t just wrong; it couldn’t defend itself against contradictions. The Allais paradox was a good example that violated utility theory: it posed two choices framed at different probability levels but with the same underlying utility trade-off, and people chose differently depending on whether the framing involved medium odds or long odds.
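The contradiction can be checked directly: no increasing utility function rationalises both of the choices most people actually make. A minimal sketch, using the standard textbook payoffs for the paradox (not figures quoted in the book):

```python
import random

def eu(probs, utils):
    """Expected utility: sum of probability-weighted utilities."""
    return sum(p * u for p, u in zip(probs, utils))

# Outcomes $0, $1M, $5M with utilities 0 < u1 < u5 (u($0) fixed at 0).
# Choice 1 -- A: $1M for sure;       B: 89% $1M, 10% $5M, 1% $0.
# Choice 2 -- C: 11% $1M, 89% $0;    D: 10% $5M, 90% $0.
# Most people pick A in choice 1 and D in choice 2.
violations = 0
for _ in range(10_000):
    u1, u5 = sorted(random.uniform(0, 1) for _ in range(2))
    a = eu([1.0], [u1])
    b = eu([0.89, 0.10, 0.01], [u1, u5, 0.0])
    c = eu([0.11, 0.89], [u1, 0.0])
    d = eu([0.10, 0.90], [u5, 0.0])
    if a > b and d > c:  # the majority pattern
        violations += 1

print(violations)  # 0 -- no utility assignment rationalises both choices
```

Algebraically, A over B requires 0.11·u1 > 0.10·u5, while D over C requires the exact opposite, so the random search can never find a utility function consistent with both – which is why the framing, not the underlying trade-off, must be driving the choice.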
A greater sensitivity to negative outcomes – a heightened sensitivity to pain was helpful for survival. A happy species endowed with infinite appreciation of pleasures and low sensitivity to pain would probably not survive the evolutionary battle.
Prospect theory – people approach risk very differently when it comes to losses rather than gains: risk seeking in the domain of losses and risk averse in the domain of gains. People respond to changes rather than absolute levels – changes relative to some reference point, some representation of the status quo. In experiments this reference point is usually clearly definable; in the real world, not so much.
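This reflection of risk attitudes around the reference point drops straight out of the prospect-theory value function. A sketch for illustration (the exponent α = 0.88 and loss-aversion coefficient λ = 2.25 are Tversky and Kahneman’s 1992 estimates, not figures from Lewis’s book):

```python
def v(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains,
    convex and steeper for losses (loss aversion)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Gains: a sure $500 vs a 50% chance of $1,000 -- the sure thing wins (risk averse).
print(v(500) > 0.5 * v(1000))    # True

# Losses: a sure -$500 vs a 50% chance of -$1,000 -- the gamble wins (risk seeking).
print(v(-500) < 0.5 * v(-1000))  # True
```

The same concave curvature that makes a sure gain feel better than a fair gamble makes a sure loss feel worse than gambling to avoid it.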
People also do not respond to probability in a straightforward manner. People will pay dearly for certainty, but they will treat a 90% probability as less likely than it really is (they do not treat a 90% chance as nine times more likely than a 10% chance). When it comes to small probabilities, they do not treat a 4% chance as twice as likely as a 2% chance. And if you tell someone the odds are one in a billion, they treat it more like one in ten thousand – and worry too much about it (and pay more than they ought to rid themselves of that worry).
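These distortions can be illustrated with the probability-weighting function Kahneman and Tversky later formalised in cumulative prospect theory (the functional form and γ = 0.61 are their 1992 estimates for gains, used here purely as an illustrative sketch):

```python
def w(p, gamma=0.61):
    """Probability-weighting function from cumulative prospect theory:
    overweights small probabilities, underweights moderate-to-large ones."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(round(w(0.01), 3))            # 0.055 -- a 1% chance feels several times bigger
print(round(w(0.90), 3))            # 0.712 -- 90% is treated as well short of certain
print(round(w(0.04) / w(0.02), 2))  # 1.45  -- 4% is not felt as twice as likely as 2%
```

The gap between w(0.90) and w(1.0) = 1 is exactly the premium people will pay for certainty.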
One consequence of prospect theory is that you should be able to alter the way people approach risk (risk seeking vs risk averse) by presenting problems framed in terms of losses rather than framed in terms of gains.
The endowment effect (Thaler) – people attach a strange amount of extra value to what they own (compared to what they don’t), and so fail to make logical trades and switches.
The Undoing Project. The title itself refers to a theory similar to regret: counterfactual emotions, the feelings that spur people’s minds to spin alternative realities. The intensity of emotions about an “unrealized reality” is proportional to two things: the desirability of the alternative, and the possibility of the alternative.
Experiences that lead to regret and frustration are not always easy to undo. Frustrated people need to undo some feature of their environment, whereas regretful people need to undo their own actions. But the basic rules of undoing are the same: they require a more or less plausible path to an alternative state. Imagination isn’t a flight with limitless possibilities; rather, it’s a tool for making sense of a world of unlimited possibilities by limiting them. The imagination obeys rules: the rules of undoing. The more items that are required to undo an event, the less likely the mind is to undo them. “The more consequences an event has, the larger the change that is involved in eliminating the event.” Also, an event becomes gradually less changeable the more it recedes in time.
1. Hack your own productivity – figure out what works for you
As “knowledge workers” we all carry out a wide variety of different cognitive tasks each day: some are repetitive, some are simple but require a high degree of accuracy, some are creative while others involve problem solving or co-ordination of others. Some involve significant willpower while others may not.
Finding individual ways to maximise our own productivity can be hugely helpful – I firmly believe that the productivity of knowledge workers can easily vary by a factor of 4 or 5 depending on various factors and circumstances, and some of these are quite simple to understand and change.
Things like choosing which tasks to take on at different points in the day, selecting the appropriate space to work in (working from home being great for some tasks, bad for others), harnessing and using your willpower most effectively, and balancing requirements to meet and consult with others against working individually. Also creating focus on what’s important (rather than simply urgent), and avoiding cognitive switching.
I was influenced in a lot of this thinking by Charles Duhigg’s excellent book Smarter Faster Better, which I discussed in more detail here. Mitesh Sheth also wrote up this excellent list of productivity hacks, which I contributed to.
2. Approach the world as it is, not as you’d like it to be
2016 was a year of surprises and shocks at a macro political level. Some of the events that took place challenged the world views of many people – including myself. The result of the EU referendum left many people – myself included – feeling more than a little frustrated and angry.
One positive I take from this is the opportunity it presents to acquire really valuable wisdom and experience – for those people open enough to be able to move past the frustration and approach the world as it is.
The reality is, disruptive events will create both opportunities and challenges. Spending time fighting the way the world is probably isn’t the best use of precious resources of mental energy and focus.
3. Understand the Building Blocks of Change
Changing habits at work is hard. Rolling out new systems and processes, and changing old ones, is vital to keep operating efficiently, but change also imposes an extra short-term burden on individuals, so it will be resisted.
This great blog post by McKinsey helped me greatly in my understanding of the 4 key requirements for workplace change:
- An understanding of why change is necessary
- The capability to make the change
- The alignment of incentives and rewards
- Role modelling by senior and influential individuals
There is a lot of overlap here with the takeaways from books such as Nudge and Inside the Nudge Unit. All fascinating and really powerful stuff if you can find ways to implement it day to day. It feels like behavioural insights are rightly having more and more impact on policy and decisions across organisations as knowledge and appreciation of the field grows. Great to see this happening, and I look forward to more insights in 2017.
4. Beware the Narrative Fallacy
The hearing and telling of stories is fundamental to who we are as humans. It’s hard-wired into us. It’s part of how we understand and make sense of an uncertain world. It was the way our ancient ancestors explained things to each other and kept children away from danger. We are fundamentally inclined to believe convincing stories.
But there’s a problem: far too often in today’s world, stories are constructed that ascribe too great a role to intrinsic characteristics such as talent and too little to luck. Stories dwell on the one thing that worked, ignoring the many that didn’t. Stories can easily make us fall prey to the availability or representativeness bias, skewing our decision making systematically in unhelpful ways.
Making effective decisions, therefore, involves getting beyond stories and into data, asking the right questions, and seeking evidence (where it can be found): testing theories, rejecting hypotheses, trying to assess against a counterfactual, and learning as much from the trials that didn’t work as from those that did.
2016 was the fifth year-end that I’ve been a part of the team at Redington. As we close one year and start a new one, it’s a great opportunity to say thank you to all my fantastic colleagues, who genuinely keep life interesting and make it worth getting up for work each morning – which is what really matters, isn’t it? Here’s to a great 2017 and beyond.
The best book I read over the summer was Matthew Syed’s Black Box Thinking.
The central theme of the book is really fear of, and reaction to, failure.
We have an allergic aversion to failure. We try to avoid it, cover it up and airbrush it out. Cognitive dissonance is the name for the deeply rooted behavioural trait that causes us to naturally reject ideas, or even evidence, that conflict with our own worldview. This can be incredibly damaging to progress in many cases.
There’s a huge need to learn from failure – it can be extremely helpful (Syed cites the example of the aviation industry learning from air disasters to vastly improve its safety record).
Readers of similarly themed books (eg the work of Charles Duhigg, Khoi Tu or even David Eagleman) will find a lot of the examples used by Syed a little tired and overdone by now. However I found that Syed was able to extract sufficient new insight from some of these well trodden case studies and weave them together with the central theme effectively.
Creating by experimenting is often more effective than creating by blueprint.
Cognitive dissonance / confirmation bias
There are some powerful behavioural psychological forces at play that can be quite counter-productive to progress in today’s world. For example, the tendency to reframe when faced with evidence that we’re wrong … divorces us from the pain of recognising that we were wrong. It’s not even conscious when it happens.
For example –
Open vs closed loops.
Syed defines open loops as systems that benefit from feedback – for example, aircraft black boxes and medical randomised controlled trials.
Closed loops do not systematically collect feedback and, more seriously, do not have the mindset to confront, recognise and learn from failure. Closed-loop systems are dangerous because they block progress – whether that be progress in safety, improving care, innovation or surviving in the commercial world.
Evolution itself is the best example of learning from failure.
Narrative fallacy vs RCT
Narrative fallacies arise inevitably from our continuous attempt (need) to make sense of the world around us. The explanatory stories that people find compelling are simple and concrete. But they often assign a greater role to talent/stupidity/intentions than to luck and often rely on a few events that did happen rather than the countless that didn’t.
Stories are good, but beware the narrative fallacy (e.g. the “Scared Straight” programme) and statistical biases.
You need a counterfactual and a control group.
Marginal gains & feedback loop
Marginal gains is not about making small changes and hoping they fly. Rather it is about breaking down a big problem into small parts in order to rigorously establish what works and what doesn’t.
Break a performance into its component parts and you can build it back up with confidence (Brailsford).
Some programmes are hard to create controlled trials for, e.g. aid to Africa. Break them down into component parts – marginal gains.
The existence of a local maximum reveals the inherent limitation of marginal gains. Sometimes you need a big leap forward to get past a local maximum. Need to do both marginal gains and big-picture thinking.
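The local-maximum limitation is exactly the one that bites greedy hill-climbing search. A minimal sketch (the curve and step sizes are invented for illustration): small uphill steps stall at a nearby peak, and only a big leap followed by more marginal gains reaches the higher one.

```python
import math

def f(x):
    # A bimodal "performance" curve: a local peak near x=1, a higher global peak near x=4.
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def hill_climb(x, step=0.1):
    """Greedy marginal gains: keep taking small uphill steps until none helps."""
    while True:
        best = max((x - step, x, x + step), key=f)
        if best == x:
            return x
        x = best

local = hill_climb(0.0)          # stalls at the local maximum near x=1
leap  = hill_climb(local + 2.5)  # a big jump first, then marginal gains again
print(round(f(local), 2), round(f(leap), 2))  # 1.0 2.0
```

Marginal gains optimises within the hill you are on; the big-picture leap is what moves you to a better hill.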
Contradictory information jars us psychologically. It nudges us into looking for unusual connections. Innovation comes from making new connections between familiar things.
Find a hidden connection to solve a problem. Failure and epiphany are linked. Brilliant ideas can emerge from engagement with a problem for months or years.
Innovation is context dependent – a response to a particular problem at a particular time & place.
Big picture & small picture. Innovation + discipline = success. There exists a threshold level of innovation required for a firm to be successful; beyond that, it depends on the discipline to implement.
After really enjoying Charles Duhigg‘s excellent first book The Power of Habit, it was an easy choice for me to grab a copy of his second book on the way to my holiday, having been given a recommendation from Mitesh. I wasn’t disappointed!
Essentially the book looks to explore ways in which we as individuals and teams can be more creative and productive. Some of the points and ideas are familiar, but I really like Duhigg’s style of weaving neatly summarised academic literature into memorable real world stories and characters.
Here are my top 4 takeaways:
1. Single most important tip for teams: build a commitment culture through fostering psychological safety.
A commitment culture (as opposed to a star culture) is one where the whole team is genuinely committed to helping each other reach a common goal. Psychological safety means that each person on the team can speak up, contribute ideas, and critique ideas. Everyone gets their turn to speak and meetings are not dominated by a couple of individuals.
2. Motivate yourself and others by making choices that put you in control.
It turns out that the need for control is pretty fundamentally hard-wired into us from an early age (I don’t have kids, but my friends that do tell me this is something they experience frequently from the age of about 2). A perceived lack of control over a difficult or demanding task can be stifling for our motivation (just ask any student approaching revision for exams!). But as adults we can use this to our advantage. For ourselves, approaching draining or difficult tasks can be easier if we start by framing a choice (choose the location for a difficult meeting, take control of your availability). When managing others, it can be incredibly powerful and motivating to pass control to them – allow them to take key decisions relating to the project. This also ties in with agile principles.
3. Build mental models
Our brains are set up to build models of the world around us and constantly evaluate information received against the model (David Eagleman writes more on this in his excellent book on neuroscience, incognito). We can harness this in a working environment by constantly building a model of how we expect a given day, interaction, meeting or project to play out. Evaluating what happens in reality relative to this model can help better decision making. In the book, Charles Duhigg uses some excellent contrasting examples of aviation incidents to really bring this home.
4. Make data disfluent, in order to understand it better
We live in a world that has never been richer in terms of data. We each generate huge amounts of data every day and carry in our pockets devices capable of processing data that previous generations couldn’t dream of. But how do we turn that data into actual information and insights?
Paradoxically a great way of doing this is often to go back to basics. Make the data harder to interact with at first. Draw graphs by hand, write datapoints out longhand on flash cards. By doing this we are forced to interact with the data more. We build theories about what the data contains, and in testing these theories we learn the important lessons. This is also why taking handwritten notes can be more powerful than typing notes, precisely because it is MORE labour intensive.
There’s plenty more in this very readable book, granted not all the concepts are revolutionary, but I would be surprised if you couldn’t find a few real actionable insights to take away and apply day to day. I certainly did and I look forward to implementing these with my team.
Feel like I’ve managed to read a decent amount in 2015, as always would like to have read more though!
With a bias to non-fiction, here are the 3 books that really stood out for me in 2015.
1. The Success Equation (Michael Mauboussin)
I wrote about this one in more detail here, but in short I loved the approach the author took in laying out various quantitative frameworks for distinguishing the roles of skill and luck (most were illustrated using sports data). There were a number of interesting takeaways for finance.
2. The Girl on the Train (Paula Hawkins)
Hardly an original choice, given this book was riding high in the bestseller lists for most of the year. I don’t read much fiction, but couldn’t put this one down. I also recommended it to several other people, who all ended up feeling the same. A real thriller, brilliantly told from several perspectives – I really felt like I got to know the characters. If you are one of the few people who hasn’t already read this, then I recommend you get your hands on a copy asap. I am certainly waiting keenly for Paula Hawkins’s next novel.
3. Incognito (David Eagleman)
I seem to be reading a lot of books about meta-cognition (thinking about thinking) recently. Unsure if it’s just a “phase” or reflective of a glut of books being published on the subject. Anyway, I found this one, which ranges over a wide area of neuroscience, hugely interesting, difficult to put down and really well written. There were real “aha” moments on each page, and at no point did I feel “bogged down” by the weight of thought as I sometimes do with this sort of book. I think Eagleman does a great job of keeping the subject matter readable and accessible, making good use of examples and stories where appropriate.
The main takeaways or themes of the book I would say are as follows:
- Our perception of the reality around us isn’t quite what we think it is, and by understanding the way the brain creates this perception we can understand how it can be led astray.
- A huge amount of our behaviour is governed by automated neuro-programs that are “burned down” into the circuitry of our brains, with little or no access from the conscious level (and this is much more efficient).
- This calls into question the extent to which free will is actually “free” (are we making a conscious choice, or responding in a pre-programmed way?).
- This poses challenging questions for the legal system, which currently operates on the assumption that humans more or less start out the same. Perhaps as our understanding of neuroscience evolves we will need to revisit the principles behind the legal system.
- One of the amazing things about our brain’s evolution is the flexibility to conquer new problems and “burn down” into our unconscious neuro level the programs for solving them – so that they become automatic (such as learning to drive or ride a bike).