Complicated, subtle factors affect how we perceive risk and these can be exacerbated by the way we receive risk information. We refer to the ways we think about and react to risks as risk perception, and the processes for discussing risk as risk communication. Even when people are in the same or very similar situations, they may perceive the risks very differently. Similarly, different people listening to the same risk information will react to it differently depending on several factors that affect their perspective. The overall result of this is “the general frustration experienced by both risk managers and affected parties in conveying and understanding risk information”.[i]
It is critical that risk managers are aware of these influences and how they affect people's responses to risk information. We need to understand where subjectivity exists when we are conducting an assessment and to be aware of how the language we use during a discussion might trigger a reaction. This essay lays out some of the key risk perception and communication elements every risk manager should be aware of, along with some suggestions as to how to mitigate them. These factors are not confined to the ‘understand’ phase of risk management but are influential at every stage of the understand / address / monitor & react process.
Before we get started, I need to issue a health warning of sorts. This essay gets into some fairly heavy subject matter and there is a lot of social-sciences and psychology jargon. This might seem unnecessary but these are very relevant concepts which I believe really help the risk manager. Moreover, you will already have experienced and observed these behaviors – all I am doing is laying out the academic aspects. So I’m not suggesting you need to memorize all of this and if you forget the jargon, that’s OK too. As long as you have an understanding of these influences and think about how these affect your risk activities, you will be more effective as a risk manager.
Health warning over…
Paul Slovic has conducted some of the most thorough studies into the various factors that affect how we perceive and accept risk and he describes two ways in which we approach risk: risk as feeling and risk as analysis.
“Risk as feeling refers to our instinctive and intuitive reactions to danger. Risk as analysis brings logic, reasons, and scientific deliberation to bear on a risk assessment and decision making”.[ii]
This is also reflected in what is described as System 1 thinking, which is fast and unconscious, differing from deliberative, slow System 2 thinking[iii]. Our instinctive reactions have been developed over generations and usually short-circuit our attempts at analytical decision-making. However, even our attempts at more logical thought are also subject to sub-conscious influences.
These influences can be categorized in two ways: heuristics and biases.
• Heuristics are ways of doing things: processes that are ‘hard wired’ into our behavior. These behaviors have evolved over generations and are extremely difficult to overcome. Hearing someone shouting ‘fire’ or ‘snake’ triggers the fight or flight reaction which is a built-in heuristic, developed over millennia of human evolution. Heuristics are common behaviors exhibited by most people.
• Biases, on the other hand, are factors that influence what we think about a particular situation. Unlike heuristics, which are generally shared, biases are highly individual and variable. These are sub-conscious influences driven by numerous factors, some shaped by the individual’s own experiences or situation and others which are more temporary. To add further complication, biases will change over time and individuals can be subject to multiple, potentially conflicting biases simultaneously.
So heuristics are essentially processes that influence how we think whereas biases affect what we think. Again, although we are focused on the understand phase at the moment, these same heuristics and biases will crop up in all stages of the risk management cycle.
There are many heuristics and biases that affect our perception and reaction to risk (Wikipedia lists over 100 cognitive biases) but we will concentrate on some of the most influential here. In summary, these are:
- The availability heuristic – information that has been received most recently is most easily recalled and therefore has a significant influence on our thinking.
- Anchoring – data provided immediately prior to a decision will influence the decision made.
- Confirmation bias and hindsight bias – receiving confirmation of an idea reinforces an individual’s belief that it is sound, despite any flaws in the methodology or incorrect conclusions. Retrospective application of an argument and supporting facts in this way is called hindsight bias.
- The affect heuristic – how emotional factors influence our thinking, particularly sub-conscious factors.
- Risk cultures – groups to which individuals belong that shape their sense of identity which leads to them applying a particular perspective to a situation.
As noted, this is deep stuff but if you forget the terminology for a second, you will recognize these patterns of behavior and will probably recall instances where you have noticed these effects on risk perception. What we are going to do now is dig into each element a little deeper to better understand how these apply to risk.
The Availability Heuristic and Availability Cascades
The availability and ease with which someone recalls information is an important factor when considering risk. Ease of recall can be increased by recent exposure, repetition or simply by making the information available in an accessible manner. The ease with which we can recall a piece of information is termed the availability heuristic, a ‘pervasive mental shortcut’[iv] which is important not only for risk management but is critical in other fields such as advertising. The availability heuristic is also at work where the setting and terminology used in a survey or discussion can have a significant effect on the answers. From a risk perspective, this means that if we are conducting a workshop or risk discussion, the examples we provide will be the most accessible in the audience’s minds. So if terrorism or flooding are used as examples, these elements are likely to be given heightened prominence in the answers received.
This is not just a factor at the individual level. Consider some significant event on television and note how there will be a slew of similar stories over the following days and weeks. These other events may be routine (albeit possibly terrible) occurrences but are suddenly given prominence because of the one extraordinary event. News consumers confronted with a series of stories on mine disasters or pedophiles will believe these to be greater issues than might actually be the case. This conflation of ideas and the creation of a wave of outrage is sometimes referred to as moral panic and is popular with tabloid journalism.
Instances where a concentrated period of messaging takes place can lead to availability cascades:
“social cascades, or simply cascades, through which expressed perceptions trigger chains of individual responses that make these perceptions appear increasingly plausible through their rising availability in public discourse.”[v]
Availability cascades and public outrage have the potential to change attitudes, but these changes are wholly driven by the frequency of repetition, not by any actual change in the frequency or magnitude of the offending event.
This effect is exacerbated where we have less primary reporting and more aggregated media, where websites simply repeat the content of others. This increases amplification, resulting in a single story of a single event being re-published or linked multiple times, giving the impression of multiple incidents. Risk managers should be mindful of this when attempting to determine the scale of a problem, as people may be reporting what they hear or read rather than providing primary data. Web searching can also present a similar problem and you need to confirm that apparently separate data points come from different sources.
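To illustrate that final check, here is a minimal Python sketch (the report data and field names are invented for illustration) that counts distinct primary sources rather than raw mentions before judging the scale of a problem:

```python
# Hypothetical reports: each records the outlet that carried it and,
# where known, the primary source it ultimately traces back to.
reports = [
    {"outlet": "SiteA", "primary_source": "Agency wire #1"},
    {"outlet": "SiteB", "primary_source": "Agency wire #1"},
    {"outlet": "SiteC", "primary_source": "Agency wire #1"},
    {"outlet": "SiteD", "primary_source": "Local paper #2"},
]

def independent_incidents(reports):
    """Count distinct primary sources, not raw mentions."""
    return len({r["primary_source"] for r in reports})

print(len(reports), "mentions, but only",
      independent_incidents(reports), "independent sources")
```

Counted this way, four apparently separate reports collapse to two independent data points; in practice the hard work is tracing each report back to its primary source.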
Grouping and social networking can also have an influence on the availability of information. People feel more comfortable surrounded by like-minded groups sharing similar world views, which can then drive the places they live, eat, worship or where they get their news. This type of association creates echo chambers where preconceived perceptions are repeated, thus increasing their availability and providing further confirmation that these ideas are valid. This links back to availability cascades, and research shows that individual perceptions and positions become more fixed when surrounded by like-minded individuals. Another term for this is groupthink, where positions harden when groups interact.
From our perspective, we need to avoid inadvertently leading a discussion in a particular direction with the information or examples we provide. We also need to determine where ideas originate from as this might influence their relevance in a risk discussion. Groupthink is also a pervasive issue during any group discussion and something we need to try to avoid.
Anchoring
Anchoring is closely related to the availability heuristic in the sense that the information most recently presented is easiest to recall, but anchoring refers specifically to how we respond to quantitative data. As an example, if I provide you with an age or price range in the context of a decision, the way I present it can influence your thinking. Anyone who has been to a car dealership has seen this in action, where the first offer price anchors the subsequent discussions. As soon as the buyer sees the sticker price in the window, all other offers will be anchored to that number. Thus, their perception of whether or not they are getting a good deal will be based on the final offer’s relation to the initial asking price, not the actual value of the vehicle.
Anchoring, coupled with the availability heuristic, is what makes selecting the examples used in any theoretical discussion hugely important. In addition to all the other factors at play, any examples you provide are influencing your audience whether you mean to or not.
Confirmation and Hindsight Bias
The availability heuristic is closely linked to confirmation bias: ‘the more often I see something, the more convinced of it I am’. Nassim Nicholas Taleb uses the example of swans: our recurring sightings of white swans, and a lack of observed black swans, lead us to conclude that there are no black swans. In fact, these observations simply prove the existence of white swans: they don’t prove the nonexistence of black swans, only their absence from our sample. Confirmation bias therefore complements the availability heuristic – not only will we recall information more easily when it is readily available, but the confirmation of our views strengthens the validity of those views in our mind. This also leads us to reject observations that challenge our beliefs, so we tend to reject data that does not confirm what we already ‘know’.
Confirmation bias can also lead to self-fulfilling prophecies: we believe that something will fail and therefore are less likely to become involved or invest. Starved of support and resources, the endeavor fails thus reinforcing our belief that we were correct. The opposite can also be true where a successful business or political candidate gathers increased support thus increasing their chances of success. In both cases, the eventual outcome is unknown at the point at which a decision was made, so a decision is only ‘right’ when a backward-looking narrative is constructed to fit the decision to events. This is referred to as hindsight bias and validating theories retrospectively can be highly misleading. This hindsight bias is also a key element of Taleb’s black swan theory where a black swan event is: an outlier, outside the realm of regular expectation; has extreme impact; and is explained by concocted arguments after the fact to make the event [appear] explainable and predictable.[vi]
For our purposes, we need to ensure that sample sets and information gathered in an assessment are broad and don’t only confirm initial impressions or preconceived notions. We should also ensure that any retrospective analysis is based upon what was known at that time, rather than applying the benefits of hindsight.
The Affect Heuristic
Despite our best efforts to remain rational, non-rational, sub-conscious factors influence our views of risk; this reliance on feelings is known as the affect heuristic. Paul Slovic identified the competing influences of rational and non-rational factors in risk decision-making, and describes the affect heuristic as a reliance on feelings and experiences in all decision-making. Importantly, while there will be occasions where our emotions are strong enough for us to be aware of their influence, more subtle emotions and less obvious experiences will also influence us subconsciously.
It is easy to see how shouting ‘fire’ or ‘snake’ or recalling a traumatic event will lead to a knee-jerk reaction, but the affect heuristic also influences how we react to information based on how it is presented. Thus, we often focus on high-profile, attention-grabbing events because these trigger an emotional response but we ignore the logical aspect that reminds us of the very low probability of these events.
The emotional effect of how information is presented was demonstrated by Paul Slovic’s research on the affect heuristic. Slovic examined the different reactions that doctors had when presented with a violent patient’s likelihood to reoffend, noting that the doctors’ reactions appeared to depend on whether recidivism was expressed as a flat percentage or a comparative ratio. He found a recidivism rate described as ‘20%’ was more likely to lead to a release than instances where doctors were told that ‘20 of 100 similar patients reoffend’. This suggests that percentage expressions of risk may be less emotive than relative values: saying ‘20 offenders’ presents a more tangible idea of risk than an abstract percentage.
For risk managers, the way data is presented would therefore have repercussions on how people react. This could become a very complex matter to try to manage but one way to reduce these influences is to stick to a set format for presenting data. So if you begin with percentage values, stick with these but if you have used a relative value, use that format throughout. Although this doesn’t remove these influences, it keeps any biases constant.
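One way to enforce that consistency is to route every figure through a single formatting helper, so a report never mixes styles. A minimal Python sketch, with the function name and style labels my own invention:

```python
def format_risk(probability, style="percent", base=100):
    """Render a probability in one consistent style.

    style='percent'   -> e.g. '20%'
    style='frequency' -> e.g. '20 in 100'
    """
    if not 0 <= probability <= 1:
        raise ValueError("probability must be between 0 and 1")
    if style == "percent":
        return f"{probability * 100:g}%"
    if style == "frequency":
        return f"{round(probability * base)} in {base}"
    raise ValueError(f"unknown style: {style}")
```

Choosing the style once, at the start of an assessment, keeps whatever bias the chosen format introduces constant across the whole report.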
Risk Cultures
We are all products of our environments and these experiences, both our own and those of our social groups, significantly affect our worldview or perspective. This is important when discussing risk because of its highly subjective nature: there is often no right or wrong view of risk, only what is acceptable or not for an individual or group.[vii]
In this context, culture refers to a group with shared characteristics but this is broader than national or social ideas of culture. In this construct, shared characteristics such as religion, ethnicity, economic status, sexual orientation, age or even support for a particular soccer team could be the basis for a culture. Some cultures, such as ethnicity, can be inherited but others are self-selected when we choose to identify with a particular group. Importantly, these choices may change over time.
Individuals can also belong to or inhabit multiple cultures simultaneously which can sometimes lead to contradictory views of a situation (think of elderly conservatives who want to cut benefit expenditure but also like free bus passes). This means that not only can the same situation be viewed differently by each person, but each person can also hold multiple views simultaneously. Which of these views is more prevalent at any time, and therefore which has the greatest influence on the individual’s perspective of a risk, will depend on the subject under discussion and current circumstances.
An insightful study of these factors concerned the Carrington petrochemical complex in the UK.[viii] Observers noticed that individuals reacted in different ways to information provided about the risks arising from the plant, which prompted a major study. The researcher noted that many individuals living near the chemical plant were also workers at the plant. For these people, the proximity of the plant could be seen as both a threat (in case of an accident) and an opportunity (as the source of household income). That meant that sometimes they were reacting to information as part of the worker culture but at other times, as members of the resident culture. Other residents who worked elsewhere didn’t inhabit the worker group and simply saw the plant as a hazard. This might suggest that these cultures are a conscious, rational part of our approach to decision-making, and there are many times when this will be the case. However, despite our best attempts to be rational and objective, cultural biases can also be sub-conscious influences, potentially at odds with each other.
As I noted, there are well over a hundred heuristics and biases, so what we have covered – the availability heuristic, anchoring, confirmation and hindsight bias, the affect heuristic, and risk cultures – barely scratches the surface of this topic. However, these are some of the main influences we will encounter during any risk discussion. I hope that despite the jargon and academic approach, you can recognize these factors from your own experiences and now see how the ways we view and discuss risk will have a significant influence on our work as risk managers. Again, these concepts are not confined to the understand phase of our system and will crop up again later. Nor are they confined to risk management: you will recognize these techniques at work in many different fields, from car dealerships and advertisers to politicians and even terrorist recruiters, so this section has a wider utility than risk management alone. That said, let’s get back on topic and look at risk communication before we move on to some practical measures.
Risk Communication
ISO 73 describes risk communication as the “continual and iterative processes that an organization conducts to provide, share or obtain information, and to engage in dialogue with stakeholders regarding the management of risk”. We have discussed the various heuristics and biases that affect how we perceive information, but that is only part of risk communication. We also need to consider how risk information is delivered. Like any other form of communication, there are ways to make the message more effective, which is what we will look at in this section.
The specific objective of each piece of risk communication will differ depending on the situation, but risk communications can generally be considered under one of the following three headings:[ix]
- Academic: communicating to gather or disclose information without expectations about quality of learning or ability to influence.
- Consensus Building: communicating to build consensus.
- Behavioral Modification: communicating for the purpose of changing perceptions, attitudes, and beliefs about risks and consequently achieving behavior modifications toward risk.
Each of these can be illustrated in the context of a large construction project. Technical, empirical risk data is gathered from academic sources during the planning phase but later, consensus building is necessary to convince nearby residents that the project will benefit them. At other times, behavioral modifications may be necessary, such as encouraging people to avoid certain areas during higher-risk construction activity. If something were to go wrong, consensus building and behavioral modification will be necessary, which is something we will look at more in the response section.
Risk communication is a two-way street: a conversation, not a lecture. However, it has only become a collaborative activity relatively recently; previously, experts would be relied on to tell people what was safe or not and explain how to address these risks. Known as the deficit model, this hierarchical approach became less effective in Western-style democracies in the latter half of the 20th Century as individuals began to lose confidence in government and question authority more readily. Even in authoritarian regimes, information democratization now means that greater inclusion in risk decision-making is necessary. This consultative, collaborative approach is called the societal model and combines public consultation with expert opinion.
Whilst this inclusive approach is often seen as better, it does have limits. ‘Not in my backyard’ attitudes and gut reactions arising from availability cascades can often overshadow the results of comprehensive safety investigations, particularly over emotive issues such as nuclear power. Public skepticism and growing anti-elite sentiment devalue expert findings, and the prevalence of opinion-based news, rather than fact-driven reporting, adds to this mistrust. This means that a well-meaning attempt to promote inclusivity can quickly become confused and issues can stall. A degree of balance is therefore necessary when developing risk communication plans. We must ensure that there is sufficient inclusion to gather the data necessary to develop understanding, but the value of each piece of consultation may have to be weighed as some opinions might have more validity than others. This is not to diminish the importance of people’s feelings and opinions, which have a role in risk communication, but it is important to differentiate between feelings and facts.
Risk communication is a significant topic in its own right and we will look at this in more detail when we look at risk treatments and responding to risks. However, I want to pause this discussion here for the moment and move on to some practical steps we can take to address these biases and heuristics during the understand phase.
We have covered a lot of academic ground so far but how can we apply this in the real world?
Importantly, I don’t think you should try to compensate for suspected biases or heuristics. Be aware of each of these factors and try to remove as much of their influence as possible but don’t try to ‘weight’ someone’s answers if you suspect a particular bias is at play. Rather, try to construct risk conversations in a manner that allows as much subjectivity to be removed as possible and to overcome, avoid or isolate some of the biases mentioned above. The ultimate determination of what is or is not an acceptable risk will remain subjective but our aim is to remove as much subjective influence as possible from the assessment. This allows the final decision to be based on an assessment which is as objective as we can make it.
So how can we achieve this in practice?
The first thing I would recommend is to take your time. Rushing an assessment or a risk discussion drives people into System 1 thinking, which results in fast, gut-reaction results. Plan a deliberate approach that allows time for considered thinking and discussion. If you find that you are getting answers shot back at you with little apparent thought, this smacks of System 1, gut-reaction thinking. So slow things down: ask for examples; put things into context; ask the other person to rationalize any apparent contradictions. This might make them review what they said and elaborate, or at least give you enough information to back up their assertions. You might also find that digging a little deeper can get you to the ‘real’ issues.
You should also develop neutral frameworks as much as possible. This means removing as much as possible from the process that might trigger some kind of heuristic or bias. I like to use simple, abstract, sometimes silly, examples when explaining risk metrics because people can understand these but don’t try to relate them to their own situation. If you use examples that reflect real-life concerns, these will be the most readily available in people’s minds and will rise to higher prominence than is warranted. Similarly, if you plan to use any kind of data in your examples, try not to anchor people’s thinking with this information. This particularly relates to how you pose questions. Pose open questions and establish a set of responses to use, but try not to emphasize any of these terms or values in your question. (See the risk metrics article for more on metrics and ratings.) “So you’d agree that the threat from terrorism would cause a lot of damage? Close to $1 million?” is not a great question.
On the subject of openness, keep your own mind open too. You are subject to your own biases but need to ensure that you remain as objective as possible. Structured discussions will ensure that all elements are covered, not just the most exciting or contentious topics. You will have to accept that there is a high degree of repetition in an extensive survey, but a structured approach and repetitive questions will help you establish a good understanding of the organization. This repetitive digging often leads to interesting revelations that might otherwise be overlooked and also helps limit your own biases. Additionally, always try to use facts and data points to prompt your questions.
In terms of data, I also like to establish a foundation of facts before I move on to subjective issues. You can start by reviewing the ‘knowns’: the size of organization, an individual’s role, operation of the department, stated objectives. This helps move the conversation along before getting into subjective issues that might otherwise become sticky. Furthermore, you also have a set of facts to return to that might help you query answers that seem out of place or unsupported by the data. This is also a good way to gauge the individual’s reaction to the process and can help you read the room in a group setting.
This leads on to the question of individual or group discussions.
Ideally, I would try to conduct information-gathering interviews with as few people present as possible, ideally in one-on-one interviews. This avoids groupthink or people responding to the most senior or loudest voice in the room. When it comes to other activities, such as a workshop to agree a way ahead (consensus building), having everyone together is sometimes necessary. In this case, allow everyone a quick preview of the material to help them establish their own views before these are exposed to the group. Set ground-rules for the workshop and determine how views will be heard and responded to. You should act as a neutral facilitator so try to remain fact-driven which means you must support your own conclusions with data, not just defend the work because it is yours. Keep everyone focused on the workshop objectives and ensure everyone can contribute using your ground-rules as a guide. This can be easier said than done. Challenges and discussion are a necessary part of stress-testing ideas and consensus building, but this also must be fact-based and managed sensitively. Again, start with facts and data to establish some ‘known’ points that you can return to if the discussion gets pulled off course or when you need to challenge an idea that is unsupported by data. Ground-rules and your active management of discussions should keep things in check.
Finally, remember that concerns that may seem irrational to you may be wholly rational to others. Family medical history, social culture, sexual preferences, religious beliefs and myriad other factors will all influence an individual or group’s perspective and most of these are things you won’t be aware of. These perspectives or cultures might make some risks completely rational but you might not appreciate this if you are incorrectly assuming an individual or group’s perspective. Again, risk is a very subjective issue so spending some time trying to understand the other’s point of view will significantly improve your ability to understand the situation and effectively manage the risks.
Despite the length of this essay, we have only scratched the surface of these topics, but hopefully this has helped explain the background to some of the behaviors you have experienced in the past. This should allow you to better identify, account for and manage the elements that affect how we think about risk – heuristics – and the influences that shape what an individual thinks about a particular risk – biases. Moreover, we can now differentiate between the two ways we approach risk – risk as feeling or risk as analysis – which affects nearly every aspect of risk management from understanding through to response.
You now have some techniques you can use to account for these influences and some ideas on how to structure your risk communications, all of which will enhance your ability to conduct a risk assessment and achieve understanding.
[i] Jardine C. and Hrudey E. (1997) Mixed Messages in Risk Communication, Risk Analysis, Vol 17, No 4, 1997. Pp 489 – 498.
[ii] Slovic P. and Peters E. (2006), ‘Risk Perception and Affect’, Current Directions in Psychological Science, 15(6), 322–325. Accessed online 4 June 2008.
[iii] Kahneman, D. (2003). Maps of bounded rationality: Psychology for behavioral economics. American Economic Review, American Economic Association, 93(5), 1449–1475.
[iv] Kuran T. and Sunstein C. (1999) ‘Availability Cascades and Risk Regulation’, in Stanford Law Review, 51 (4).
[v] Kuran and Sunstein, 1999
[vi] Taleb, NN (2007) The Black Swan: the impact of the highly improbable. New York, Random House, xxii
[vii] Adams J. (1995), Risk, Abingdon: Routledge, 9
[viii] Irwin, A. (1995), Citizen Science: A Study of People, Expertise, and Sustainable Development, New York: Routledge
[ix] Zimmerman R. (1987) ‘A Process Framework for Risk Communication’, Science, Technology, & Human Values, Vol. 12, No. 3/4, (Special Issue on the Technical and Ethical Aspects of Risk Communication), Summer – Autumn, 1987, pp. 131-137. Available online at: http://www.jstor.org/stable/689393 accessed: 2 June 2008