UNDERSTANDING THE TYPE AND IMPACT OF THE DECISION TO BE MADE. Understanding the scope, nature and impact of the decision to be made. To be strategic, a decision must affect the organization in a way that contributes to the achievement of strategy.
DEVELOPING DECISION OPTIONS. Developing and narrowing options for addressing, resolving or mitigating the situation.
ANALYSING THE DECISION OPTIONS. Using analytical tools in two ways. First, to gain insight into the most likely cause(s) of the situation being addressed by the decision to be made. Second, to evaluate and rank the options based on many factors including risk, time, resources, interdependence, complexity, consequences, opportunity cost and more.
MAKING THE DECISION. From the results of the analysis of the decision options, select the best option.
EXECUTING THE DECISION. Implement the decision, which typically requires creating a plan that assigns responsibility and resources and sets a timeline with metrics to track execution.
These powerful decision making principles and approaches can be used by organizational leaders to improve the effectiveness of their strategic decision making.
"Occam's razor (also written as Ockham's razor from William of Ockham (c. 1287 – 1347), and in Latin lex parsimoniae) is a principle of parsimony, economy, or succinctness used in problem-solving. It states that among competing hypotheses, the hypothesis with the fewest assumptions should be selected. Translated to common English: if there are multiple possible explanations for an event or result, the simplest is almost always correct.
The application of the principle often shifts the burden of proof in a discussion. The razor states that one should proceed to simpler theories until simplicity can be traded for greater explanatory power. The simplest available theory need not be most accurate."
Occam's Razor challenges us to stay with the simplest explanation of what happened to Flight 370 - but to be ready to incorporate new evidence that supports a less simple but more comprehensive explanation.
In the case of an organizational situation or SWOT analysis, when dissecting what the evidence suggests, stay with the simplest explanation - but be ready to change that explanation based on new information.
Degrees of freedom is the number of values involved in the final calculation of a statistic that are free to vary...the independent pieces of data used in the calculation. It indicates how accurately the sample represents the population: the larger the degrees of freedom, the greater the likelihood that the entire population has been sampled accurately.
Relationship of degrees of freedom to parsimony, from Talkstats.com:
The principle is that we want our model to provide an accurate, but parsimonious description of the data. As the number of parameters in the model approaches the number of data points, the model will be better able to accurately fit any arbitrarily complex dataset, but the tradeoff is that the model is also less and less parsimonious. In the limit where there are just as many parameters as data points, all the model has really done is provide a verbatim redescription of the dataset, so that it's really not clear if we've learned anything. In practical terms, when the ratio of parameters to data points becomes too high, the generalization error of the model (i.e., the ability of the model to predict data points not found in the original data set from which parameters were estimated) suffers.
The concept of degrees of freedom further supports simplicity, lest we "overfit" the data we have and produce a predicted outcome that in reality is not valid given the evidence at hand. With the limited data set available on what happened to the plane while in the air, whatever "model" is created to explain its fate needs to be simple.
For understanding plane disappearances and projecting an enterprise into the future for the purpose of developing strategies for the best outcome, complex interpretations built on limited information - that is, non-parsimonious models with few degrees of freedom using limited data - are more likely to fall apart as more information is introduced.
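To make the trade-off concrete, here is a minimal sketch in Python (the data points, noise level and polynomial degrees are invented purely for illustration, not drawn from any Flight 370 data): a model with as many parameters as data points reproduces the observed data almost exactly, yet typically predicts new points worse than a simpler, more parsimonious model.

```python
# Illustrative only: invented data demonstrating the overfitting risk described above.
import numpy as np

rng = np.random.default_rng(0)

# Six "observed" points generated from a simple linear trend plus noise
x_train = np.linspace(0, 5, 6)
y_train = 2.0 * x_train + 1.0 + rng.normal(0, 0.5, size=6)

# Six new points from the same underlying process, not used for fitting
x_test = np.linspace(0.5, 4.5, 6)
y_test = 2.0 * x_test + 1.0 + rng.normal(0, 0.5, size=6)

# Compare a parsimonious model (2 parameters) with one that uses as many
# parameters as there are data points (6 parameters).
for degree in (1, 5):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.3f}, test error {test_err:.3f}")
```

The high-degree fit drives the training error to essentially zero while its error on the held-out points generally grows - the verbatim "redescription of the dataset" the Talkstats passage warns about.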
Philosophers and scientists who follow the Bayesian framework for inference use the mathematical rules of probability to find [the] best explanation. Bayesians identify probabilities with degrees of belief, with certainly true propositions having probability 1, and certainly false propositions having probability 0. To say that "it's going to rain tomorrow" has a 0.9 probability is to say that you consider the possibility of rain tomorrow as extremely likely. Through the rules of probability, the probability of a conclusion and of alternatives can be calculated. The best explanation is most often identified with the most probable.
Bayesian probability...views likelihoods as probabilities rather than frequencies. Bayesians emphasize the importance of Bayes' theorem, a formal theorem that proves a rigid probabilistic relationship between the conditional and marginal probabilities of two random events. Bayes' theorem puts great emphasis on the prior probability of a given event -- for instance, in evaluating the probability that one patient has cancer based on a positive test result, one must be sure to take into account the background probability that any random person has cancer at all.
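The cancer-test example in that passage can be worked through in a few lines. This is only a sketch with invented numbers - a 1% background rate, a 90% detection rate and a 5% false-positive rate are assumptions chosen for illustration - but it shows how strongly the prior probability drives the conclusion.

```python
# Illustrative only: Bayes' theorem applied to a positive cancer test,
# using invented rates to show the pull of the prior (background) probability.

prevalence = 0.01           # P(cancer): background rate in the population (assumed)
sensitivity = 0.90          # P(positive test | cancer) (assumed)
false_positive_rate = 0.05  # P(positive test | no cancer) (assumed)

# Total probability of seeing a positive test result
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# Bayes' theorem: P(cancer | positive test)
p_cancer_given_positive = sensitivity * prevalence / p_positive

print(f"P(cancer | positive test) = {p_cancer_given_positive:.3f}")  # about 0.154
```

Even with a test that catches 90% of true cases, a positive result here implies only about a 15% chance of cancer, because the condition itself is rare - exactly the point about not ignoring the background probability.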
Bayesian inference instructs us to start with a base rate - the single most salient piece of known information that points to a probable answer. In the case of Flight 370, this table from PlaneCrashInfo.com summarizing 60 years of world-wide commercial airline accident data offers a great starting point for a base rate:
In the case of Flight 370, the applicable base rate might be that 50% of plane "events" in which passengers suffer fatalities - 542 or so of the 1,085 accidents summarized in the table - are the result of pilot error of one sort or the other. We would thus hypothesize a 0.5 probability that pilot error was the cause of the loss of Flight 370.
Note that we do not start with hijacking as the premise. The data show that "sabotage" caused only 9% of plane accidents with fatalities - about 98 incidents. In support of the infrequency of hijackings, whether resulting in fatalities or not, my count of Wikipedia's list shows only 122 hijackings of commercial airplanes since 1932, with the most occurring in the 1970s-1990s. After 2001, the rate of hijacked airliners dropped dramatically, presumably due to increased airport and airplane security.
Then, using Bayesian inference, we would look for new evidence with which to modify the base rate. As Robert Hagstrom writes in The Warren Buffett Portfolio:
Bayesian analysis gives us a logical way to consider a set of outcomes of which all are possible but only one will actually occur. It is conceptually a simple procedure. We begin by assigning a probability to each of the outcomes on the basis of whatever evidence is then available. If additional evidence becomes available, the initial probability is revised to reflect the new information.
Bayes’s theorem thus gives us a mathematical procedure for updating our original beliefs (which had resulted from what he called a prior distribution of information) to produce a posterior distribution of information. In other words, prior probabilities combined with new information yield posterior probabilities and thus change our relevant odds.
One factor that appears not to have played a role in the fate of Flight 370 is weather. Bayesian analysis would have us revise our base rate 0.5 probability of pilot error upward to reflect the elimination of weather as a cause. The table on accidents with fatalities shows 12% attributed to weather. If we remove weather as a cause for Flight 370, our probability estimate that pilot error was the cause grows to 0.57. (The math, while unimportant in understanding the effect, is: 0.5 pilot error base rate / (1.00 all factors - 0.12 weather factor) = 0.5681818 revised probability of pilot error.)
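The same update can be written out as a short sketch; the 50% and 12% figures are the ones cited above from the PlaneCrashInfo.com table, and the only step is renormalizing the remaining probability once weather is ruled out.

```python
# Sketch of the revision described above: eliminate weather as a cause
# and renormalize the remaining probability mass.

p_pilot_error = 0.50  # base rate: pilot error share of fatal accidents
p_weather = 0.12      # weather share of fatal accidents

# Conditioning on "weather was not the cause"
p_pilot_error_revised = p_pilot_error / (1.0 - p_weather)

print(f"Revised probability of pilot error: {p_pilot_error_revised:.4f}")  # 0.5682
```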
Similarly, other evidence can be incorporated to modify the base rate prediction - perhaps even to justify moving to another cause as more probable.
A Black Swan is an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.
. . .
How is it that some Black Swans are overblown in our minds when...we mainly neglect Black Swans? The answer is that there are two varieties of rare events: a) the narrated Black Swans, those that are present in the current discourse and that you are likely to hear about on television, and b) those nobody talks about since they escape models... I can safely say that it is entirely compatible with human nature that the incidences of Black Swans would be overestimated in the first case, but severely underestimated in the second one.
How do convoluted hijacking theories strike you? As seemingly "likely" Black Swans because the media have put them forward? Here are two, combined: The plane was hijacked by a passenger who figured out how to use a mobile phone to hack into the plane’s flight management system to change the plane’s speed, altitude and direction, and who then flew it in the shadow of another airplane to escape radar detection.
While a Black Swan event indeed could have caused the loss of Flight 370, the likelihood and especially the predictability of a true Black Swan occurrence in conjunction with Flight 370's fate is extremely low - that's why such events are labeled "Black Swans."
SYSTEM 1 AND SYSTEM 2, AGAIN
A reason we don't typically think like Bayesians - starting with a base rate and reweighting it as new data emerges - is also a reason why, however remote the odds, we miss seeing the true possibility of Black Swan events. When presented with a situation, we are programmed to immediately "fit" the information at hand to our experience - magnified by what is most current to us and what instinctively seems "like" or relevant to the situation - whether it is really applicable or not.
As described in our last post, Often wrong but never in doubt, in his book Thinking, Fast and Slow, Nobel Prize-winning psychologist Daniel Kahneman describes research that he and his late partner, Amos Tversky, conducted on judgment and decision-making. His premise is that our minds operate through two systems: System 1, which is automatic and intuitive and, through associative memory, instantly fits the sensory input we receive into the best-fitting framework from our previous experience; and System 2, which is controlled and deliberate. Using System 2 to identify a base rate and modify it with new evidence demands attention, choice, concentration and effort to overcome System 1's intuitions and impulses.
What was the first thing you thought of when you heard Flight 370 was missing? My mind (System 1) jumped to terrorism. Odds are yours did, as well, given the years-long focus we have in the U.S. and elsewhere on terrorists commandeering airplanes. Yet, we have already demonstrated that the way we are programmed to think led us astray, at least from what Bayesian inference would have us consider as the most likely initial hypothesis.