Friday, September 7, 2007

Does Quantitative Risk Analysis Suck?

To clarify the issue right off: I am considering information security risk analysis here. Although some aspects may apply to other areas of risk analysis and management, it is not the purpose of this article to cover them.

What are the practical benefits of risk analysis?

Most of us think that risk analysis is useful, but what do we mean by that? What are the practical benefits of spending your time on the task of risk analysis? I would outline these two major benefits:

  • Risk analysis results help justify spending on the information security programme.
  • Risk analysis results help prioritize the information security efforts.
To me, assessing whether a risk analysis method is adequate means determining whether it can accomplish these goals. Note also that a good risk analysis method should not only serve these goals, but also yield correct and reproducible results, which leads to the next section:

Can we trust the results?
Quantitative risk analysis results depend on the method used and on the correctness of the source data. Most risk analysis methodologies are based on the formula:
ALE = SLE * ARO
(ALE - Annualized Loss Expectancy, SLE - Single Loss Expectancy, ARO - Annualized Rate of Occurrence). Therefore, to get correct (and reproducible) results we need to:
  • Correctly identify the list of threats to the system (since the overall risk is the sum of the individual risks). This step is required for both quantitative and qualitative risk analysis.
  • Correctly assess the occurrence rate (ARO) and impact (SLE) for each threat.
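To make the arithmetic concrete, here is a minimal sketch of the ALE calculation over a hypothetical threat list; every threat name and figure below is invented for illustration, not real statistics.

```python
# Minimal sketch of ALE = SLE * ARO over a hypothetical threat list.
# All threat names and figures are invented for illustration.

threats = {
    # threat name: (SLE in dollars, ARO in incidents per year)
    "laptop theft": (5_000, 2.0),          # about two incidents a year
    "phishing incident": (20_000, 0.5),    # one incident every two years
    "major data breach": (500_000, 0.05),  # one incident every twenty years
}

def ale(sle: float, aro: float) -> float:
    """Annualized Loss Expectancy for a single threat."""
    return sle * aro

total_risk = sum(ale(sle, aro) for sle, aro in threats.values())
for name, (sle, aro) in threats.items():
    print(f"{name}: ALE = ${ale(sle, aro):,.0f}")
print(f"Total annualized risk: ${total_risk:,.0f}")
```

The arithmetic is trivial; the entire difficulty, as argued above, is in getting trustworthy SLE and ARO inputs in the first place.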
It does not help to try to calculate SLE from other formulas, like:
SLE = AV * EF
(AV - Asset Value, EF - Exposure Factor). Now you simply have to deal with another value that is hard to quantify - the Exposure Factor.
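A short sketch (with invented AV, ARO and EF values) shows how directly any guesswork in EF flows through to the final ALE figure:

```python
# Sketch of how guesswork in EF propagates into ALE.
# SLE = AV * EF and ALE = SLE * ARO; all numbers are invented.

AV = 1_000_000   # hypothetical asset value, dollars
ARO = 0.5        # assumed rate: one incident every two years

for ef in (0.05, 0.10, 0.20):   # three equally plausible expert guesses
    sle = AV * ef
    print(f"EF = {ef:.2f} -> SLE = ${sle:,.0f}, ALE = ${sle * ARO:,.0f}")
```

Because the formula is a plain product, a factor-of-four disagreement between experts on EF produces exactly a factor-of-four spread in the resulting ALE.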

We know several industries that are built upon risk assessment - insurance and financial institutions among them. Most of them assess risks based on statistical data (several hundred years of statistical data exist in the insurance industry) and extensive theories. Do we have these in IT security? I am not aware of any, and can only guess that most of the time these numbers are expert opinions. With this in mind, can you get results that can be acted upon?

Various methodologies
Given unreliable source data, it is clear that it does not matter which risk assessment methodology you use (here I mean the formulas, not the process). What a methodology can really give you is guidance on who should be involved in the process, how it should be organized, what documents to produce, how to structure and simplify the work, and so on.
Given the amount of effort required to perform a comprehensive risk analysis, one should decide before starting whether the effort is worth it.

To sum up: is quantitative risk analysis worth a close look? Definitely. Is there a mature methodology that can be used right now and produces correct and reproducible results? I have not seen one yet.

Related
"Get 50 practitioners in a room and you will have 50 different methodologies for assessing IT risk. The trouble is that nearly all of them will be subjective – the outcome of any risk assessment exercise is most likely to be 'high', 'medium' or 'low'. Even when it's an apparently objective number – 54,821, for example – you don't learn all that much. Try going to your board and telling them that their IT risk is 54,821 and their eyes are likely to glaze over very quickly! Any attempt to calculate 'annual loss expectancy', although valiant, only results in trouble when the degree of variability is larger than the sum itself!"
IT Risk Assessment – Fact or Fiction? (Symantec)
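The quoted point about variability can be illustrated with a small Monte Carlo sketch. If we model expert uncertainty about SLE and ARO with wide lognormal distributions (the distributions and parameters here are invented purely for illustration), the standard deviation of the resulting ALE estimate exceeds its mean:

```python
# Monte Carlo sketch: with wide input uncertainty, the spread of the
# ALE estimate exceeds the estimate itself. All distributions and
# parameters are invented for illustration.
import random
import statistics

random.seed(42)

def sample_ale() -> float:
    # Wide lognormals stand in for expert disagreement about the inputs.
    sle = random.lognormvariate(11, 1.5)   # median SLE around $60k
    aro = random.lognormvariate(-2, 1.0)   # median ARO around 0.14/year
    return sle * aro

samples = [sample_ale() for _ in range(100_000)]
mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
print(f"mean ALE = ${mean:,.0f}, stdev = ${stdev:,.0f}")
print("stdev exceeds the mean:", stdev > mean)
```

In other words, reporting the single number hides the fact that the plausible range around it is wider than the number itself.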

2 comments:

Igor Gots said...

You are talking only about the absolute value of the risk, but it also makes sense as a relative value, for example when you assume a constant error in EF.

Sergey Soldatov said...
This comment has been removed by the author.