In simple terms, risk is the possibility of something bad happening.[1] Risk involves uncertainty about the effects/implications of an activity with respect to something that humans value (such as health, well-being, wealth, property or the environment), often focusing on negative, undesirable consequences.[2] Many different definitions have been proposed. One international standard definition of risk is the "effect of uncertainty on objectives".[3]
The understanding of risk, the methods of assessment and management, the descriptions of risk and even the definitions of risk differ in different practice areas (business, economics, environment, finance, information technology, health, insurance, safety, security, etc.). This article provides links to more detailed articles on these areas. The international standard for risk management, ISO 31000, provides principles and general guidelines on managing risks faced by organizations.[4]
The Oxford English Dictionary (OED) cites the earliest use of the word in English (spelled risque, after the French original) as of 1621, and the spelling risk from 1655. While including several other definitions, the OED 3rd edition defines risk as:
(Exposure to) the possibility of loss, injury, or other adverse or unwelcome circumstance; a chance or situation involving such a possibility.[5]
The Cambridge Advanced Learner's Dictionary gives a simple summary, defining risk as "the possibility of something bad happening".[1]
The International Organization for Standardization (ISO) 31073 provides basic vocabulary to develop a common understanding of risk management concepts and terms across different applications. ISO 31073 defines risk as:[6]
effect of uncertainty[7] on objectives[8]
Note 1: An effect is a deviation from the expected. It can be positive, negative or both, and can address, create or result in opportunities and threats.[9]
Note 2: Objectives can have different aspects and categories, and can be applied at different levels.
Note 3: Risk is usually expressed in terms of risk sources, potential events, their consequences and their likelihood.
This definition was developed by an international committee representing over 30 countries and is based on the input of several thousand subject-matter experts. It was first adopted in 2002 for use in standards.[10] Its complexity reflects the difficulty of satisfying the many fields that use the term risk in different ways. Some restrict the term to negative impacts ("downside risks"), while others also include positive impacts ("upside risks").
Some resolve these differences by arguing that the definition of risk is subjective. For example:
No definition is advanced as the correct one, because there is no one definition that is suitable for all problems. Rather, the choice of definition is a political one, expressing someone's views regarding the importance of different adverse effects in a particular situation.[29]
The Society for Risk Analysis concludes that "experience has shown that to agree on one unified set of definitions is not realistic". The solution is "to allow for different perspectives on fundamental concepts and make a distinction between overall qualitative definitions and their associated measurements."[2]
The understanding of risk, the common methods of management, the measurements of risk and even the definition of risk differ in different practice areas. This section provides links to more detailed articles on these areas.
Business risks arise from uncertainty about the profit of a commercial business[30] due to unwanted events such as changing consumer tastes and preferences, strikes, increased competition, changes in government policy, obsolescence, etc.
Business risks are controlled using techniques of risk management. In many cases they may be managed by intuitive steps to prevent or mitigate risks, by following regulations or standards of good practice, or by insurance. Enterprise risk management includes the methods and processes used by organizations to manage risks and seize opportunities related to the achievement of their objectives.
Economics is concerned with the production, distribution and consumption of goods and services. Economic risk arises from uncertainty about economic outcomes. For example, economic risk may be the chance that macroeconomic conditions like exchange rates, government regulation, or political stability will affect an investment or a company's prospects.[31]
In economics, as in finance, risk is often defined as quantifiable uncertainty about gains and losses.
Environmental risk arises from environmental hazards or environmental issues.
In the environmental context, risk is defined as "The chance of harmful effects to human health or to ecological systems".[32]
Environmental risk assessment aims to assess the effects of stressors, often chemicals, on the local environment.[33]
Finance is concerned with money management and acquiring funds.[34] Financial risk arises from uncertainty about financial returns. It includes market risk, credit risk, liquidity risk and operational risk.
In finance, risk is the possibility that the actual return on an investment will be different from its expected return.[35] This includes not only "downside risk" (returns below expectations, including the possibility of losing some or all of the original investment) but also "upside risk" (returns that exceed expectations). Following Knight's definition, risk is often defined as quantifiable uncertainty about gains and losses. This contrasts with Knightian uncertainty, which cannot be quantified.
Financial risk modeling determines the aggregate risk in a financial portfolio. Modern portfolio theory measures risk using the variance (or standard deviation) of asset prices. More recent risk measures include value at risk.
Because investors are generally risk averse, investments with greater inherent risk must promise higher expected returns.[36]
Financial risk management uses financial instruments to manage exposure to risk. It includes the use of a hedge to offset risks by adopting a position in an opposing market or investment.
In financial audit, audit risk refers to the potential that an audit report may fail to detect material misstatement either due to error or fraud.
Health risks arise from disease and other biological hazards.
Epidemiology is the study and analysis of the distribution, patterns and determinants of health and disease. It is a cornerstone of public health, and shapes policy decisions by identifying risk factors for disease and targets for preventive healthcare.
In the context of public health, risk assessment is the process of characterizing the nature and likelihood of a harmful effect to individuals or populations from certain human activities. Health risk assessment can be mostly qualitative or can include statistical estimates of probabilities for specific populations.
A health risk assessment (also referred to as a health risk appraisal and health & well-being assessment) is a questionnaire screening tool, used to provide individuals with an evaluation of their health risks and quality of life.
Health, safety, and environment (HSE) are separate practice areas, typically kept apart by organizational management structures; however, there are strong links among these disciplines. One of the strongest links is that a single risk event may have impacts in all three areas, albeit over differing timescales. For example, the uncontrolled release of radiation or a toxic chemical may have immediate short-term safety consequences, more protracted health impacts, and much longer-term environmental impacts. Events such as Chernobyl, for example, caused immediate deaths, deaths from cancers in the longer term, and left a lasting environmental impact leading to birth defects, impacts on wildlife, etc.
Information technology (IT) is the use of computers to store, retrieve, transmit, and manipulate data. IT risk (or cyber risk) arises from the potential that a threat may exploit a vulnerability to breach security and cause harm. IT risk management applies risk management methods to IT to manage IT risks. Computer security is the protection of IT systems by managing IT risks.
Information security is the practice of protecting information by mitigating information risks. While IT risk is narrowly focused on computer security, information risks extend to other forms of information (paper, microfilm).
Insurance is a risk treatment option which involves risk sharing. It can be considered a form of contingent capital and is akin to purchasing an option in which the buyer pays a small premium to be protected from a potential large loss.
Insurance risk is often taken by insurance companies, who then bear a pool of risks including market risk, credit risk, operational risk, interest rate risk, mortality risk, longevity risks, etc.[37]
The term "risk" has a long history in insurance and has acquired several specialised definitions, including "the subject-matter of an insurance contract", "an insured peril" as well as the more common "possibility of an event occurring which causes injury or loss".[38]
Occupational health and safety is concerned with occupational hazards experienced in the workplace.
The Occupational Health and Safety Assessment Series (OHSAS) standard OHSAS 18001 in 1999 defined risk as the "combination of the likelihood and consequence(s) of a specified hazardous event occurring". In 2018 this was replaced by ISO 45001 "Occupational health and safety management systems", which uses the ISO Guide 73 definition.
A project is an individual or collaborative undertaking planned to achieve a specific aim. Project risk is defined as, "an uncertain event or condition that, if it occurs, has a positive or negative effect on a project's objectives". Project risk management aims to increase the likelihood and impact of positive events and decrease the likelihood and impact of negative events in the project.[39][40]
Safety is concerned with a variety of hazards that may result in accidents causing harm to people, property and the environment. In the safety field, risk is typically defined as the "likelihood and severity of hazardous events". Safety risks are controlled using techniques of risk management.
A high reliability organisation (HRO) involves complex operations in environments where catastrophic accidents could occur. Examples include aircraft carriers, air traffic control, aerospace and nuclear power stations. Some HROs manage risk in a highly quantified way. The technique is usually referred to as probabilistic risk assessment (PRA). See WASH-1400 for an example of this approach. Incident rates can also be reduced by providing better occupational health and safety programmes.[41]
Security is freedom from, or resilience against, potential harm caused by others.
A security risk is "any event that could result in the compromise of organizational assets i.e. the unauthorized use, loss, damage, disclosure or modification of organizational assets for the profit, personal interest or political interests of individuals, groups or other entities."[42]
Security risk management involves protection of assets from harm caused by deliberate acts.
Risk is ubiquitous in all areas of life, and everyone manages risks, consciously or intuitively, whether running a large organization or simply crossing the road. Intuitive risk management is addressed under the psychology of risk below.
Risk management refers to a systematic approach to managing risks, and sometimes to the profession that does this. A general definition is that risk management consists of "coordinated activities to direct and control an organization with regard to risk".[3]
ISO 31000, the international standard for risk management,[4] describes a risk management process consisting of the following elements: communication and consultation; establishing the scope, context and criteria; risk assessment (comprising risk identification, risk analysis and risk evaluation); risk treatment; and monitoring, review, recording and reporting.
In general, the aim of risk management is to assist organizations in "setting strategy, achieving objectives and making informed decisions".[4] The outcomes should be "scientifically sound, cost-effective, integrated actions that [treat] risks while taking into account social, cultural, ethical, political, and legal considerations".[43]
In contexts where risks are always harmful, risk management aims to "reduce or prevent risks".[43] In the safety field it aims "to protect employees, the general public, the environment, and company assets, while avoiding business interruptions".[44]
For organizations whose definition of risk includes "upside" as well as "downside" risks, risk management is "as much about identifying opportunities as avoiding or mitigating losses".[45] It then involves "getting the right balance between innovation and change on the one hand, and avoidance of shocks and crises on the other".[46]
Risk assessment is a systematic approach to recognising and characterising risks, and evaluating their significance, in order to support decisions about how to manage them. ISO 31000 defines it in terms of its components as "the overall process of risk identification, risk analysis and risk evaluation".[4]
Risk assessment can be qualitative, semi-quantitative or quantitative.[4]
The specific steps vary widely in different practice areas.
Risk identification is "the process of finding, recognizing and recording risks". It "involves the identification of risk sources, events, their causes and their potential consequences."[3]
ISO 31000 describes it as the first step in a risk assessment process, preceding risk analysis and risk evaluation.[4] In safety contexts, where risk sources are known as hazards, this step is known as "hazard identification".[47]
There are many different methods for identifying risks.[48]
Sometimes, risk identification methods are limited to finding and documenting risks that are to be analysed and evaluated elsewhere. However, many risk identification methods also consider whether control measures are sufficient and recommend improvements. Hence they function as stand-alone qualitative risk assessment techniques.
Risk analysis is about developing an understanding of the risk. ISO defines it as "the process to comprehend the nature of risk and to determine the level of risk".[3] In the ISO 31000 risk assessment process, risk analysis follows risk identification and precedes risk evaluation. However, these distinctions are not always followed.
Risk analysis may include a range of techniques.[48]
Risk analysis often uses data on the probabilities and consequences of previous events. Where there have been few such events, or in the context of systems that are not yet operational and therefore have no previous experience, various analytical methods may be used to estimate the probabilities and consequences.
Risk evaluation involves comparing estimated levels of risk against risk criteria to determine the significance of the risk and make decisions about risk treatment actions.[48]
In most activities, risks can be reduced by adding further controls or other treatment options, but typically this increases cost or inconvenience. It is rarely possible to eliminate risks altogether without discontinuing the activity. Sometimes it is desirable to increase risks to secure valued benefits. Risk criteria are intended to guide decisions on these issues.[49]
Various types of criteria are used.[48]
The simplest framework for risk criteria is a single level which divides acceptable risks from those that need treatment. This gives attractively simple results but does not reflect the uncertainties involved both in estimating risks and in defining the criteria.
The tolerability of risk framework, developed by the UK Health and Safety Executive, divides risks into three bands:[50] an unacceptable band, where risks are intolerable except in extraordinary circumstances and must be reduced whatever the cost; a middle band, where risks are tolerable only if they are kept as low as reasonably practicable (ALARP); and a broadly acceptable band, where risks are insignificant and no further reduction is normally required.
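Such a three-band scheme (unacceptable; tolerable if reduced as low as reasonably practicable; broadly acceptable) can be sketched as a simple classifier over annual individual risk of death. The numeric thresholds below follow commonly cited HSE guidance figures for workers, but should be treated as illustrative assumptions rather than normative values:

```python
def tolerability_band(annual_risk_of_death: float) -> str:
    """Classify an individual annual risk of death into three HSE-style bands.
    Thresholds (1e-3 upper, 1e-6 lower) are illustrative assumptions."""
    if annual_risk_of_death > 1e-3:
        return "unacceptable"
    if annual_risk_of_death > 1e-6:
        return "tolerable if ALARP"
    return "broadly acceptable"
```

For example, an annual risk of 1 in 10,000 (1e-4) falls into the middle band, where risk reduction is required unless the cost is grossly disproportionate to the benefit.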
There are many different risk metrics that can be used to describe or "measure" risk.
Risk is often considered to be a set of triplets[21][17]
R = {⟨si, pi, ci⟩}, i = 1, 2, ..., N
where si is a hypothetical scenario, pi is the probability of that scenario, and ci is the consequence of the scenario. These are the answers to the three fundamental questions asked by a risk analysis: What can happen? How likely is it to happen? If it does happen, what are the consequences?
Risks expressed in this way can be shown in a table or risk register. They may be quantitative or qualitative, and can include positive as well as negative consequences.
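Expressed in code, such a register of triplets might look like the following sketch; the scenarios and figures are invented for illustration:

```python
# A minimal risk register: (scenario, probability, consequence as $ loss).
# All entries are hypothetical examples.
register = [
    ("server outage", 0.10, 50_000),
    ("data breach", 0.02, 500_000),
    ("key staff departure", 0.20, 20_000),
]

# Rank scenarios by expected loss (probability x consequence).
ranked = sorted(register, key=lambda r: r[1] * r[2], reverse=True)
for scenario, p, c in ranked:
    print(f"{scenario}: expected loss ${p * c:,.0f}")
```

Here the low-probability, high-consequence scenario ("data breach") ranks first, illustrating why registers often record probability and consequence separately rather than only their product.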
The scenarios can be plotted in a consequence/likelihood matrix (or risk matrix). These typically divide consequences and likelihoods into 3 to 5 bands. Different scales can be used for different types of consequences (e.g. finance, safety, environment etc.), and can include positive as well as negative consequences.[48]
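A consequence/likelihood matrix can be sketched as a lookup over bands; the 3x3 banding and ratings below are illustrative, not taken from any particular standard:

```python
LIKELIHOOD = ["rare", "possible", "likely"]
CONSEQUENCE = ["minor", "moderate", "major"]

# Illustrative 3x3 risk matrix: rows = likelihood bands, columns = consequence bands.
MATRIX = [
    ["low",    "low",    "medium"],   # rare
    ["low",    "medium", "high"],     # possible
    ["medium", "high",   "high"],     # likely
]

def risk_rating(likelihood: str, consequence: str) -> str:
    """Look up the rating for a (likelihood, consequence) pair of bands."""
    return MATRIX[LIKELIHOOD.index(likelihood)][CONSEQUENCE.index(consequence)]
```

For example, `risk_rating("likely", "major")` returns "high", while `risk_rating("rare", "minor")` returns "low".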
An updated version recommends the following general description of risk: the triplet (C, Q, K),[22] where C is a set of specified consequences, Q is a measure of uncertainty associated with those consequences (typically probability), and K is the background knowledge that supports C and Q.
If all the consequences are expressed in the same units (or can be converted into a consistent loss function), the risk can be expressed as a probability density function describing the "uncertainty about outcome".
This can also be expressed as a cumulative distribution function (CDF) (or S curve).[48]
One way of highlighting the tail of this distribution is by showing the probability of exceeding given losses, known as a complementary cumulative distribution function, plotted on logarithmic scales. Examples include frequency-number (FN) diagrams, showing the annual frequency of exceeding given numbers of fatalities.[48]
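A point on such an FN curve can be computed directly from a set of scenarios; the (annual frequency, fatalities) pairs below are invented for illustration:

```python
# Hypothetical scenarios: (annual frequency of the event, fatalities if it occurs).
scenarios = [(0.1, 1), (0.01, 10), (0.001, 100)]

def exceedance_frequency(n: int) -> float:
    """Annual frequency of events causing more than n fatalities (one FN-curve point)."""
    return sum(freq for freq, deaths in scenarios if deaths > n)
```

Plotting `exceedance_frequency(n)` against n on logarithmic axes gives the FN diagram described above; for these data, the frequency of exceeding 5 fatalities is 0.011 per year.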
A simple way of summarizing the size of the distribution's tail is the loss with a certain probability of exceedance, such as the Value at Risk.
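A historical-simulation Value at Risk can be sketched as a percentile of sampled losses; the function and sample data below are illustrative:

```python
def value_at_risk(losses, confidence=0.95):
    """Loss not exceeded with the given confidence (historical-simulation VaR)."""
    ordered = sorted(losses)
    index = int(confidence * len(ordered))      # simple percentile index
    return ordered[min(index, len(ordered) - 1)]

# 100 hypothetical loss samples: 0, 1, ..., 99
samples = list(range(100))
print(value_at_risk(samples, 0.95))
```

This toy version simply sorts historical losses and reads off a percentile; production VaR calculations typically interpolate between samples or fit a parametric distribution to the tail.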
Risk is often measured as the expected value of the loss. This combines the probabilities and consequences into a single value. See also expected utility. The simplest case is a binary possibility of Accident or No accident. The associated formula for calculating risk is then: Risk = (probability of the accident occurring) × (expected loss in case of the accident).
For example, if there is a probability of 0.01 of suffering an accident with a loss of $1000, then total risk is a loss of $10, the product of 0.01 and $1000.
In a situation with several possible accident scenarios, total risk is the sum of the risks for each scenario, provided that the outcomes are comparable: Risk = Σi (probability of scenario i) × (loss in scenario i).
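The expected-loss calculation can be written directly; the single-accident example from the text (probability 0.01, loss $1000) gives a risk of $10:

```python
def expected_loss(scenarios):
    """Total risk as the sum of probability x loss over all scenarios."""
    return sum(p * loss for p, loss in scenarios)

print(expected_loss([(0.01, 1000)]))                  # single accident from the text
print(expected_loss([(0.01, 1000), (0.005, 4000)]))   # several hypothetical scenarios
```

The second call illustrates the summation over multiple scenarios; the additional (0.005, $4000) scenario is invented for illustration.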
In statistical decision theory, the risk function is defined as the expected value of a given loss function as a function of the decision rule used to make decisions in the face of uncertainty.
A disadvantage of defining risk as the product of impact and probability is that it presumes, unrealistically, that decision-makers are risk-neutral. A risk-neutral person's utility is proportional to the expected value of the payoff. For example, a risk-neutral person would consider a 20% chance of winning $1 million exactly as desirable as getting a certain $200,000. However, most decision-makers are not actually risk-neutral and would not consider these equivalent choices.[17] Pascal's mugging is a philosophical thought experiment that demonstrates issues in assessing risk solely by the expected value of loss or return.
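The contrast can be made concrete with a small utility calculation; the square-root utility used here is an arbitrary example of a risk-averse preference, chosen for illustration rather than taken from any standard model:

```python
import math

p, prize, certain = 0.2, 1_000_000, 200_000

# A risk-neutral decision-maker compares expected values:
expected_value = p * prize                    # 200,000: equal to the certain amount

# A risk-averse decision-maker (here, illustrative sqrt utility) compares
# expected utilities, then converts back to a certainty equivalent.
expected_utility = p * math.sqrt(prize)       # 0.2 * 1000 = 200
certainty_equivalent = expected_utility ** 2  # 40,000: far below 200,000
```

Under this utility function the gamble is worth only $40,000 for certain, so the decision-maker prefers the guaranteed $200,000 even though the two options have equal expected value.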
In finance, volatility is the degree of variation of a trading price over time, usually measured by the standard deviation of logarithmic returns. Modern portfolio theory measures risk using the variance (or standard deviation) of asset prices. The risk is then: Risk = var(r) = E[(r − E[r])²], the variance of the return r.
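Volatility as the standard deviation of logarithmic returns can be sketched as follows; the price series is invented for illustration:

```python
import math
import statistics

prices = [100.0, 102.0, 101.0, 105.0, 104.0]  # hypothetical daily closing prices

# Logarithmic returns between consecutive prices.
log_returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]

# Sample standard deviation of the log returns = per-period volatility.
volatility = statistics.stdev(log_returns)
```

To annualize a daily volatility, practitioners conventionally multiply by the square root of the number of trading periods per year (about 252 trading days).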
The beta coefficient measures the volatility of an individual asset relative to overall market changes. This is the asset's contribution to systematic risk, which cannot be eliminated by portfolio diversification. It is the covariance between the asset's return ri and the market return rm, expressed as a fraction of the market variance:[51] βi = cov(ri, rm) / var(rm).
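The beta coefficient can be computed directly from paired return series; the returns below are hypothetical:

```python
def beta(asset_returns, market_returns):
    """Covariance of asset and market returns divided by the market variance."""
    n = len(asset_returns)
    mean_a = sum(asset_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((a - mean_a) * (m - mean_m)
              for a, m in zip(asset_returns, market_returns)) / n
    var_m = sum((m - mean_m) ** 2 for m in market_returns) / n
    return cov / var_m
```

An asset whose returns exactly track the market has beta 1, and one whose returns move twice as far as the market has beta 2; a beta below 1 indicates a smaller contribution to systematic risk.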
Risks of discrete events such as accidents are often measured as outcome frequencies, or expected rates of specific loss events per unit time. When small, frequencies are numerically similar to probabilities, but have dimensions of [1/time] and can sum to more than 1. Typical outcomes expressed this way include individual risk (the frequency of a given level of harm to an individual), group or societal risk (the frequency of accidents affecting given numbers of people), and frequencies of property damage or environmental harm.[52]
Many risks to people are expressed as probabilities of death. Since mortality risks are very small, they are sometimes converted to micromorts, defined as a one-in-a-million chance of death; the number of micromorts is thus one million times the probability of death. In many cases, the risk depends on the time of exposure, and so is expressed as a mortality rate. Health risks, which vary widely with age, may be expressed as a loss of life expectancy.
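The micromort conversion is a simple scaling; the probabilities below are arbitrary examples, not measured activity risks:

```python
def to_micromorts(probability_of_death: float) -> float:
    """Convert a probability of death into micromorts (1 micromort = 1-in-a-million)."""
    return probability_of_death * 1_000_000

print(to_micromorts(0.000001))  # a one-in-a-million risk is 1 micromort
print(to_micromorts(0.00005))   # a 1-in-20,000 risk is 50 micromorts
```

The unit's appeal is readability: very small probabilities such as 0.00005 become whole numbers that are easy to compare across activities.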
In health, the relative risk is the ratio of the probability of an outcome in an exposed group to the probability of an outcome in an unexposed group.
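Relative risk is the ratio of two incidence proportions; the cohort counts below are hypothetical:

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk of the outcome in the exposed group divided by risk in the unexposed group."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Hypothetical cohort: 50/100 exposed develop the outcome vs 25/100 unexposed.
print(relative_risk(50, 100, 25, 100))  # relative risk of 2.0
```

A relative risk above 1 indicates the outcome is more common in the exposed group; a value of 1 indicates no association between exposure and outcome.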
An understanding that future events are uncertain and a particular concern about harmful ones may arise in anyone living in a community, experiencing seasons, hunting animals or growing crops. Most adults therefore have an intuitive understanding of risk. This may not be exclusive to humans.[54]
In ancient times, the dominant belief was in divinely determined fates, and attempts to influence the gods may be seen as early forms of risk management. Early uses of the word 'risk' coincided with an erosion of belief in divinely ordained fate.[55]
Risk perception is the subjective judgement that people make about the characteristics and severity of a risk. At its most basic, the perception of risk is an intuitive form of risk analysis.[56]
Intuitive understanding of risk differs in systematic ways from accident statistics. When making judgements about uncertain events, people rely on a few heuristic principles, which convert the task of estimating probabilities to simpler judgements. These heuristics are useful but suffer from systematic biases.[57]
The "availability heuristic" is the process of judging the probability of an event by the ease with which instances come to mind. In general, rare but dramatic causes of death are over-estimated while common unspectacular causes are under-estimated.[58]
An "availability cascade" is a self-reinforcing cycle in which public concern about relatively minor events is amplified by media coverage until the issue becomes politically important.[59]
Despite the difficulty of thinking statistically, people are typically over-confident in their judgements. They over-estimate their understanding of the world and under-estimate the role of chance.[60] Even experts are over-confident in their judgements.[61]
The "psychometric paradigm" assumes that risk is subjectively defined by individuals, influenced by factors that can be elicited by surveys.[62] People's perception of the risk from different hazards depends on three groups of factors: the degree to which the risk is understood, the degree to which it evokes a feeling of dread, and the number of people exposed to the risk.
Hazards with high perceived risk are in general seen as less acceptable and more in need of reduction.[63]
Cultural Theory views risk perception as a collective phenomenon by which different cultures select some risks for attention and ignore others, with the aim of maintaining their particular way of life.[64] Hence risk perception varies according to the preoccupations of the culture. The theory distinguishes variations known as "group" (the degree of binding to social groups) and "grid" (the degree of social regulation), leading to four world-views:[65] hierarchists (high group, high grid), individualists (low group, low grid), egalitarians (high group, low grid) and fatalists (low group, high grid).
Cultural Theory helps explain why it can be difficult for people with different world-views to agree about whether a hazard is acceptable, and why risk assessments may be more persuasive for some people (e.g. hierarchists) than others. However, there is little quantitative evidence that shows cultural biases are strongly predictive of risk perception.[66]
While risk assessment is often described as a logical, cognitive process, emotion also has a significant role in determining how people react to risks and make decisions about them.[67] Some argue that intuitive emotional reactions are the predominant method by which humans evaluate risk. A purely statistical approach to disasters lacks emotion and thus fails to convey the true meaning of disasters and fails to motivate proper action to prevent them.[68] This is consistent with psychometric research showing the importance of "dread" (an emotion) alongside more logical factors such as the number of people exposed.
The field of behavioural economics studies human risk-aversion, asymmetric regret, and other ways that human financial behaviour varies from what analysts call "rational". Recognizing and respecting the irrational influences on human decision making may improve naive risk assessments that presume rationality but in fact merely fuse many shared biases.
The "affect heuristic" proposes that judgements and decision-making about risks are guided, either consciously or unconsciously, by the positive and negative feelings associated with them.[69] This can explain why judgements about risks are often inversely correlated with judgements about benefits. Logically, risk and benefit are distinct entities, but it seems that both are linked to an individual's feeling about a hazard.[70]
Worry or anxiety is an emotional state that is stimulated by anticipation of a future negative outcome, or by uncertainty about future outcomes. It is therefore an obvious accompaniment to risk, and is initiated by many hazards and linked to increases in perceived risk. It may be a natural incentive for risk reduction. However, worry sometimes triggers behaviour that is irrelevant or even increases objective measurements of risk.[71]
Fear is a more intense emotional response to danger, which increases the perceived risk. Unlike anxiety, it appears to dampen efforts at risk minimisation, possibly because it provokes a feeling of helplessness.[72]
It is common for people to dread some risks but not others: They tend to be very afraid of epidemic diseases, nuclear power plant failures, and plane accidents but are relatively unconcerned about some highly frequent and deadly events, such as traffic crashes, household accidents, and medical errors. One key distinction of dreadful risks seems to be their potential for catastrophic consequences,[73] threatening to kill a large number of people within a short period of time.[74] For example, immediately after the 11 September attacks, many Americans were afraid to fly and took their car instead, a decision that led to a significant increase in the number of fatal crashes in the time period following the 9/11 event compared with the same time period before the attacks.[75][76]
Different hypotheses have been proposed to explain why people fear dread risks. First, the psychometric paradigm suggests that high lack of control, high catastrophic potential, and severe consequences account for the increased risk perception and anxiety associated with dread risks. Second, because people estimate the frequency of a risk by recalling instances of its occurrence from their social circle or the media, they may overvalue relatively rare but dramatic risks because of their overpresence and undervalue frequent, less dramatic risks.[76] Third, according to the preparedness hypothesis, people are prone to fear events that have been particularly threatening to survival in human evolutionary history.[77] Given that in most of human evolutionary history people lived in relatively small groups, rarely exceeding 100 people,[78] a dread risk, which kills many people at once, could potentially wipe out one's whole group. Indeed, research found[79] that people's fear peaks for risks killing around 100 people but does not increase if larger groups are killed. Fourth, fearing dread risks can be an ecologically rational strategy.[80] Besides killing a large number of people at a single point in time, dread risks reduce the number of children and young adults who would have potentially produced offspring. Accordingly, people are more concerned about risks killing younger, and hence more fertile, groups.[81]
Outrage is a strong moral emotion, involving anger over an adverse event coupled with an attribution of blame towards someone perceived to have failed to do what they should have done to prevent it. Outrage is the consequence of an event, involving a strong belief that risk management has been inadequate. Looking forward, it may greatly increase the perceived risk from a hazard.[82]
One of the growing areas of focus in risk management is the field of decision theory, where behavioural and organizational psychology underpin our understanding of risk-based decision making. This field considers questions such as "how do we make risk-based decisions?" and "why are we irrationally more scared of sharks and terrorists than of motor vehicles and medications?"
In decision theory, regret (and anticipation of regret) can play a significant part in decision-making, distinct from risk aversion[83][84] (preferring the status quo in case one becomes worse off).
Framing[85] is a fundamental problem with all forms of risk assessment. In particular, because of bounded rationality (our brains get overloaded, so we take mental shortcuts), the risk of extreme events is discounted because the probability is too low to evaluate intuitively. As an example, one of the leading causes of death is road accidents caused by drunk driving – partly because any given driver frames the problem by largely or totally ignoring the risk of a serious or fatal accident.
For instance, an extremely disturbing event (an attack by hijacking, or moral hazards) may be ignored in analysis despite the fact it has occurred and has a nonzero probability. Or, an event that everyone agrees is inevitable may be ruled out of analysis due to greed or an unwillingness to admit that it is believed to be inevitable. These human tendencies for error and wishful thinking often affect even the most rigorous applications of the scientific method and are a major concern of the philosophy of science.
All decision-making under uncertainty must consider cognitive bias, cultural bias, and notational bias: no group of people assessing risk is immune to "groupthink", the acceptance of obviously wrong answers simply because it is socially painful to disagree or because conflicts of interest are in play.
Framing involves other information that affects the outcome of a risky decision. The right prefrontal cortex has been shown to take a more global perspective[86] while greater left prefrontal activity relates to local or focal processing.[87]
Building on the Theory of Leaky Modules,[88] McElroy and Seta proposed that they could predictably alter the framing effect by selectively manipulating regional prefrontal activity with finger tapping or monaural listening.[89] The results were as predicted: rightward tapping or listening narrowed attention such that the frame was ignored. This is a practical way of manipulating regional cortical activation to affect risky decisions, especially because directed tapping or listening is easily done.
A growing area of research has been to examine various psychological aspects of risk taking. Researchers typically run randomised experiments with a treatment and control group to ascertain the effect of different psychological factors that may be associated with risk taking.[90] For example, positive and negative feedback about past risk taking can affect future risk taking. In one experiment, people who were led to believe they were very competent at decision making saw more opportunities in a risky choice and took more risks, while those led to believe they were not very competent saw more threats and took fewer risks.[91] People show risk aversion, rejecting fair risky offers such as a coin toss with an equal chance of winning and losing the same amount.[92] The expected premium for taking risks increases as the gambled amount increases.[93] Critically, people's intuitive response is often less risk-averse than their subsequent reflective response.[94]
In his seminal 1921 work Risk, Uncertainty, and Profit, Frank Knight established the distinction between risk and uncertainty.
... Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated. The term "risk," as loosely used in everyday speech and in economic discussion, really covers two things which, functionally at least, in their causal relations to the phenomena of economic organization, are categorically different. ... The essential fact is that "risk" means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomenon depending on which of the two is really present and operating. ... It will appear that a measurable uncertainty, or "risk" proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all. We ... accordingly restrict the term "uncertainty" to cases of the non-quantitive type.[100]
Thus, Knightian uncertainty is immeasurable and cannot be calculated, whereas risk in the Knightian sense is measurable.
Another distinction between risk and uncertainty is proposed by Douglas Hubbard:[101][17]
In this sense, one may have uncertainty without risk but not risk without uncertainty. We can be uncertain about the winner of a contest, but unless we have some personal stake in it, we have no risk. If we bet money on the outcome of the contest, then we have a risk. In both cases there is more than one outcome. The measure of uncertainty refers only to the probabilities assigned to outcomes, while the measure of risk requires both probabilities for outcomes and losses quantified for outcomes.
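Hubbard's distinction can be sketched numerically. In the following illustration (the probabilities, stake, and outcome names are invented for the example, not drawn from any source), uncertainty is described entirely by the probabilities assigned to outcomes, while risk additionally requires a loss attached to each outcome:

```python
# Hypothetical contest with two outcomes and assigned probabilities
# (illustrative values only).
outcomes = {"team_A_wins": 0.6, "team_B_wins": 0.4}

# Uncertainty is described entirely by the probabilities.
assert abs(sum(outcomes.values()) - 1.0) < 1e-9

# With no personal stake there is uncertainty but no risk:
losses_no_bet = {"team_A_wins": 0.0, "team_B_wins": 0.0}

# Betting 100 on team A attaches a loss to one outcome:
losses_with_bet = {"team_A_wins": 0.0, "team_B_wins": 100.0}

def expected_loss(probs, losses):
    """One common scalar risk measure: probability-weighted loss."""
    return sum(probs[o] * losses[o] for o in probs)

print(expected_loss(outcomes, losses_no_bet))    # no stake, so no risk
print(expected_loss(outcomes, losses_with_bet))  # positive expected loss
```

Expected loss is only one of many possible risk measures, but it makes the point concrete: the same probabilities yield zero risk without a stake and nonzero risk with one.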
Benoit Mandelbrot distinguished between "mild" and "wild" risk and argued that risk assessment and analysis must be fundamentally different for the two types.[102] Mild risk follows normal or near-normal probability distributions, is subject to regression to the mean and the law of large numbers, and is therefore relatively predictable. Wild risk follows fat-tailed distributions, e.g. Pareto or power-law distributions, is subject to regression to the tail (infinite mean or variance, rendering the law of large numbers invalid or ineffective), and is therefore difficult or impossible to predict. According to Mandelbrot, a common error in risk assessment and analysis is to underestimate the wildness of risk, assuming it to be mild when in fact it is wild; this error must be avoided if risk assessment and analysis are to be valid and reliable.
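The contrast between mild and wild risk can be seen in a small simulation (a sketch, with arbitrarily chosen sample size and tail index): for normal draws the sample mean settles near the true mean, while for a Pareto distribution with tail index α = 1 the theoretical mean is infinite, so the sample mean is dominated by a few huge draws and never stabilises.

```python
import random

random.seed(0)

def sample_mean(samples):
    return sum(samples) / len(samples)

N = 100_000  # arbitrary sample size for the illustration

# "Mild" risk: standard normal draws. The law of large numbers applies,
# so the sample mean converges towards the true mean of 0.
normal = [random.gauss(0.0, 1.0) for _ in range(N)]

# "Wild" risk: Pareto draws with tail index alpha = 1, generated by
# inverse-transform sampling. The theoretical mean is infinite, so the
# sample mean keeps drifting upward as rare enormous values arrive.
alpha = 1.0
pareto = [1.0 / (1.0 - random.random()) ** (1.0 / alpha) for _ in range(N)]

print(f"normal sample mean: {sample_mean(normal):.3f}")  # close to 0
print(f"pareto sample mean: {sample_mean(pareto):.1f}")  # inflated by a few extreme draws
```

Treating the second series as if it were the first (e.g. estimating its "mean risk" from a modest sample) is exactly the error Mandelbrot warns against.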
The terms risk attitude, appetite, and tolerance are often used similarly to describe an organisation's or individual's attitude towards risk-taking. One's attitude may be described as risk-averse, risk-neutral, or risk-seeking. Risk tolerance refers to the degree of deviation from expected outcomes that is considered acceptable or unacceptable. Risk appetite refers to how much risk one is willing to accept; deviations can still occur within a given risk appetite. For example, recent research finds that insured individuals are significantly more likely to divest from risky asset holdings in response to a decline in health, controlling for variables such as income, age, and out-of-pocket medical expenses.[103]
Gambling is a risk-increasing investment, in which money on hand is risked for a possible large return, but with the possibility of losing it all. Purchasing a lottery ticket is a very risky investment with a high chance of no return and a small chance of a very high return. In contrast, putting money in a bank at a defined rate of interest is a risk-averse action that gives a guaranteed small return and precludes other investments with possibly higher gains. The possibility of getting no return on an investment is also known as the rate of ruin.
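A short worked example makes the comparison concrete (the ticket price, jackpot, odds, and interest rate below are invented for illustration, not real lottery or bank figures):

```python
# Hypothetical lottery: one jackpot outcome, otherwise nothing.
ticket_price = 2.0
jackpot = 1_000_000.0
p_win = 1 / 1_000_000

# Expected gain per ticket: small chance of a very high return,
# high chance of losing the stake.
ev_lottery = p_win * jackpot - ticket_price  # roughly -1.0 per ticket

# Hypothetical bank deposit: the same stake at a defined interest rate
# gives a guaranteed small gain instead.
deposit = 2.0
rate = 0.03
ev_bank = deposit * rate

print(f"expected gain, lottery ticket: {ev_lottery:+.2f}")
print(f"expected gain, bank deposit:   {ev_bank:+.2f}")
```

The lottery has the higher possible payoff but a negative expected value, while the deposit's expected value is small, positive, and certain, which is the trade-off the paragraph above describes.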
Risk compensation is a theory which suggests that people typically adjust their behavior in response to the perceived level of risk, becoming more careful where they sense greater risk and less careful if they feel more protected.[104] By way of example, it has been observed that motorists drove faster when wearing seatbelts and closer to the vehicle in front when the vehicles were fitted with anti-lock brakes.
The experience of many people who rely on human services for support is that 'risk' is often used as a reason to prevent them from gaining further independence or fully accessing the community, and that these services are often unnecessarily risk averse.[105] "People's autonomy used to be compromised by institution walls, now it's too often our risk management practices", according to John O'Brien.[106] Michael Fischer and Ewan Ferlie (2013) find that contradictions between formal risk controls and the role of subjective factors in human services (such as the role of emotions and ideology) can undermine service values, so producing tensions and even intractable and 'heated' conflict.[107]
Anthony Giddens and Ulrich Beck argued that whilst humans have always been subjected to a level of risk – such as natural disasters – these have usually been perceived as produced by non-human forces. Modern societies, however, are exposed to risks, such as pollution, that are the result of the modernization process itself. Giddens defines these two types of risks as external risks and manufactured risks. The term risk society was coined in the 1980s; its popularity during the 1990s arose both from its links to trends in thinking about wider modernity and from its links to popular discourse, in particular the growing environmental concerns of the period.
This is a list of books about risk issues:
Title | Author(s) | Year |
---|---|---|
Acceptable Risk | Baruch Fischhoff, Sarah Lichtenstein, Paul Slovic, Steven L. Derby, and Ralph Keeney | 1984 |
Against the Gods: The Remarkable Story of Risk | Peter L. Bernstein | 1996 |
At risk: Natural hazards, people's vulnerability and disasters | Piers Blaikie, Terry Cannon, Ian Davis, and Ben Wisner | 1994 |
Building Safer Communities. Risk Governance, Spatial Planning and Responses to Natural Hazards | Urbano Fra Paleo | 2009 |
Dangerous Earth: An introduction to geologic hazards | Barbara W. Murck, Brian J. Skinner, Stephen C. Porter | 1998 |
Disasters and Democracy | Rutherford H. Platt | 1999 |
Earth Shock: Hurricanes, volcanoes, earthquakes, tornadoes and other forces of nature | W. Andrew Robinson | 1993 |
Human System Response to Disaster: An Inventory of Sociological Findings | Thomas E. Drabek | 1986 |
Judgment Under Uncertainty: heuristics and biases | Daniel Kahneman, Paul Slovic, and Amos Tversky | 1982 |
Mapping Vulnerability: disasters, development, and people | Greg Bankoff, Georg Frerks, and Dorothea Hilhorst | 2004 |
Man and Society in Calamity: The Effects of War, Revolution, Famine, Pestilence upon Human Mind, Behavior, Social Organization and Cultural Life | Pitirim Sorokin | 1942 |
Mitigation of Hazardous Comets and Asteroids | Michael J.S. Belton, Thomas H. Morgan, Nalin H. Samarasinha, Donald K. Yeomans | 2005 |
Natural Disaster Hotspots: a global risk analysis | Maxx Dilley | 2005 |
Natural Hazard Mitigation: Recasting disaster policy and planning | David Godschalk, Timothy Beatley, Philip Berke, David Brower, and Edward J. Kaiser | 1999 |
Natural Hazards: Earth's processes as hazards, disasters, and catastrophes | Edward A. Keller, and Robert H. Blodgett | 2006 |
Normal Accidents. Living with high-risk technologies | Charles Perrow | 1984 |
Paying the Price: The status and role of insurance against natural disasters in the United States | Howard Kunreuther, and Richard J. Roth | 1998 |
Planning for Earthquakes: Risks, politics, and policy | Philip R. Berke, and Timothy Beatley | 1992 |
Practical Project Risk Management: The ATOM Methodology | David Hillson and Peter Simon | 2012 |
Reduction and Predictability of Natural Disasters | John B. Rundle, William Klein, Don L. Turcotte | 1996 |
Regions of Risk: A geographical introduction to disasters | Kenneth Hewitt | 1997 |
Risk Analysis: a quantitative guide | David Vose | 2008 |
Risk: An introduction (ISBN 978-0-415-49089-4) | Bernardus Ale | 2009 |
Risk and Culture: An essay on the selection of technical and environmental dangers | Mary Douglas, and Aaron Wildavsky | 1982 |
Socially Responsible Engineering: Justice in Risk Management (ISBN 978-0-471-78707-5) | Daniel A. Vallero, and P. Aarne Vesilind | 2006 |
Swimming with Crocodiles: The Culture of Extreme Drinking | Marjana Martinic and Fiona Measham (eds.) | 2008 |
The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA | Diane Vaughan | 1997 |
The Environment as Hazard | Ian Burton, Robert Kates, and Gilbert F. White | 1978 |
The Social Amplification of Risk | Nick Pidgeon, Roger E. Kasperson, and Paul Slovic | 2003 |
What is a Disaster? New answers to old questions | Ronald W. Perry, and Enrico Quarantelli | 2005 |
Floods: From Risk to Opportunity (IAHS Red Book Series) | Ali Chavoshian, and Kuniyoshi Takeuchi | 2013 |
The Risk Factor: Why Every Organization Needs Big Bets, Bold Characters, and the Occasional Spectacular Failure | Deborah Perry Piscione | 2014 |
ISO 31073:2022 — Risk management — Vocabulary — uncertainty: state, even partial, of deficiency of information related to understanding or knowledge
Note 1: In some cases, uncertainty can be related to the organization’s context as well as to its objectives.
Note 2: Uncertainty is the root source of risk, namely any kind of “deficiency of information” that matters in relation to objectives (and objectives, in turn, relate to all relevant interested parties’ needs and expectations).
ISO 31073:2022 — Risk management — Vocabulary — objective: result to be achieved
Note 1: An objective can be strategic, tactical or operational.
Note 2: Objectives can relate to different disciplines (such as financial, health and safety, and environmental goals) and can apply at different levels (such as strategic, organization-wide, project, product and process).
Note 3: An objective can be expressed in other ways, e.g. as an intended outcome, a purpose, an operational criterion, as a management system objective, or by the use of other words with similar meaning (e.g. aim, goal, target).
ISO 31073:2022 — Risk management — Vocabulary — threat: potential source of danger, harm, or other undesirable outcome
Note 1: A threat is a negative situation in which loss is likely and over which one has relatively little control.
Note 2: A threat to one party may pose an opportunity to another.