100% effect |
Our preference for reducing a small risk to zero over a greater reduction in a larger risk. (Also called "certainty bias", "certainty effect" and "100% effect.") |
a priori problem |
"The more remote the event, the less we can get empirical data (assuming generously that the future will resemble the past) and the more we need to rely on theory. Consider that the frequency of rare events cannot be estimated from empirical observation for the very reason that they are rare. We thus need a prior model representation for that; the rarer the event, the higher the error in estimation from standard inductive methods (say, frequency sampling from counting past occurences), hence the higher dependence on an a priori representation that extrapolates into the space of low probability events (which necessarily are not seen often)." - Nassim Nicholas Taleb |
Above average effect |
Those receiving performance reviews invariably believe they are above average—and defensively resist being told that they aren't. This "above average" effect has been widely replicated in numerous studies considering everything from sense of humor to appearance. (Also called "illusory superiority," "above average effect," "superiority bias," "leniency error," "sense of relative superiority," "primus inter pares effect," "Dunning–Kruger effect," "inability to self assess" and "Lake Wobegon effect") |
Accident fallacy |
The informal fallacy of accident (also called destroying the exception or a dicto simpliciter ad dictum secundum quid) is a deductively valid but unsound argument occurring in statistical syllogisms (an argument based on a generalization) when an exception to a rule of thumb is ignored. |
Action-oriented bias |
This bias drives us to take action without considering all the potential ramifications of those actions. It causes us to overestimate the odds of positive outcomes while underestimating the chances of negative ones. With this bias, we put too much faith in our ability to produce the desired outcomes, while taking too much credit for past successes. And we discount or ignore possible competitive responses during the planning process. At the heart of this bias is the feeling that “if I do this, I will be able to control events and reach my goal”. Action-oriented bias appears when we feel pressure to take action and are optimistic about the future, so we dismiss the possibility of negative events. |
Actor–observer bias |
The tendency for explanations of other individuals' behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation (see also fundamental attribution error), and for explanations of one's own behaviors to do the opposite (that is, to overemphasize the influence of our situation and underemphasize the influence of our own personality). |
Ad hoc rescue |
Psychologically, it is understandable that you would try to rescue a cherished belief from trouble. When faced with conflicting data, you are likely to mention how the conflict will disappear if some new assumption is taken into account. However, if there is no good reason to accept this saving assumption other than that it works to save your cherished belief, your rescue is an ad hoc rescue. |
Adaptation level |
A mental reference point used by System 1 for evaluation: the status quo, the outcome you expect, or the outcome to which you feel entitled. Outcomes that are better than the reference point are gains; outcomes below the reference point are losses. |
Affirming the consequent |
If you have enough evidence to affirm the consequent of a conditional and then suppose that as a result you have sufficient reason for affirming the antecedent, your reasoning contains the fallacy of affirming the consequent. |
Ambiguity effect |
The tendency to avoid options for which missing information makes the probability seem "unknown." |
Anchoring effect |
This is the tendency we have to compare and contrast only a limited set of items. It's called the anchoring effect because we tend to fixate on a value or number that in turn gets compared to everything else. The classic example is an item at the store that's on sale; we tend to see (and value) the difference in price, but not the overall price itself. |
Anecdotal evidence |
This is fallacious generalizing on the basis of a story that provides an inadequate sample. If you discount evidence arrived at by systematic search or by testing in favor of a few firsthand stories, then your reasoning contains the fallacy of overemphasizing anecdotal evidence. |
Anecdotes before data |
Why anecdotes speak louder than data: Anecdotes are stories. They generate empathy and thus trigger emotional reactions. Emotions cause us to process the data and the feelings and trigger the memory centers in our brain. Processing emotions is more important for decision-making than processing data, according to Susan Weinschenk in her book Neuro Web Design: What makes them click? She explains, "Most mental processing occurs unconsciously. It’s easy to forget that information is coming in and being processed from many sources. It’s easy to forget that people are processing emotions too. If you want people to act on the data, then you need to couple it with emotional data." |
Appeal to ignorance |
The fallacy of appeal to ignorance comes in two forms: (1) Not knowing that a certain statement is true is taken to be a proof that it is false. (2) Not knowing that a statement is false is taken to be a proof that it is true. The fallacy occurs in cases where absence of evidence is not good enough evidence of absence. The fallacy uses an unjustified attempt to shift the burden of proof. The fallacy is also called "Argument from Ignorance." |
Appeal to money |
The fallacy of appeal to money uses the error of supposing that, if something costs a great deal of money, then it must be better, or supposing that if someone has a great deal of money, then they're a better person in some way unrelated to having a great deal of money. Similarly it's a mistake to suppose that if something is cheap it must be of inferior quality, or to suppose that if someone is poor financially then they're poor at something unrelated to having money. |
Authority |
Using famous experiments such as the Milgram obedience studies, Cialdini discusses the use of ‘appeals to authority’ as a method of persuasion, in contexts ranging from celebrity endorsement of products (e.g. the “I’m not a doctor, but I play one on TV” line (TV Tropes, 2011)) to the use of clothes (e.g. a security guard’s uniform) to trigger "compliance" with requests. |
Availability cascade |
A self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse (or "repeat something long enough and it will become true"). |
Availability heuristic |
When we are trying to determine how likely something is, we often base such estimates on how easily we can remember similar events happening in the past. Belle Beth Smith: "We believe our memories more than facts. Our memories are highly fallible and plastic. And yet, we tend to subconsciously favor them over objective facts." |
Backfire effect |
People can react to evidence that disconfirms what they believe by strengthening their beliefs. "It seems that unless we are extremely careful, by trying to convince the most hardened cynics of the evidence we may end up doing more harm than good." - Simon Oxenham |
Bad news avoidance |
We cannot assume that individuals will absorb, or even pay attention to, information that might prove valuable in present or future decision making. Instead, we appear to treat bad news as something to be avoided. Apparently, compared to “bad news,” “no news is good news”—a phenomenon that also may clarify the historical success of “Yes Men!” People are not always seekers of information. |
Bald Man Fallacy |
If we improperly reject a vague claim because it is not as precise as we would like, then we are using the line-drawing fallacy. Being vague is not being hopelessly vague. Also called the bald man fallacy, the fallacy of the heap and the sorites fallacy. |
Bandwagon Effect |
People, often unconsciously, do something or support an action primarily because other people are doing it or support it, regardless of their own beliefs or the underlying evidence. |
Base-rate neglect |
Not assessing evidence by looking for the base rate for the seeming opportunity or threat: the best available information on its likelihood. Daniel Kahneman: "When you make explicit predictions, like will somebody who's a young professor eventually get tenure or not, remind yourself that the base rate of tenure is very important in that story." |
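A minimal sketch of how a base rate should enter such a prediction via Bayes' rule; the tenure base rate and the two likelihoods below are illustrative assumptions, not figures from the source.

```python
# Hedged sketch of Bayes' rule: how a base rate combines with case-specific
# evidence. All numbers are illustrative assumptions.

def posterior(base_rate, p_evidence_if_true, p_evidence_if_false):
    """P(hypothesis | evidence) given the base rate and the two likelihoods."""
    numerator = base_rate * p_evidence_if_true
    denominator = numerator + (1 - base_rate) * p_evidence_if_false
    return numerator / denominator

# Hypothetical: 20% of young professors eventually get tenure (base rate);
# a very impressive record is seen in 60% of eventual-tenure cases, 30% of others.
print(round(posterior(0.20, 0.60, 0.30), 2))  # 0.33 - much lower than the record alone suggests
```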
Bayesian conservatism |
The tendency to revise one's belief insufficiently when presented with new evidence. |
Be fair….in the middle heuristic |
Making decisions that "split the baby" and are suboptimal in the effort to be fair to both sides. |
Begging the Question |
A form of circular reasoning in which a conclusion is derived from premises that presuppose the conclusion. Normally, the point of good reasoning is to start out at one place and end up somewhere new, namely having reached the goal of increasing the degree of reasonable belief in the conclusion. The point is to make progress, but in cases of begging the question there is no progress. |
Belief bias |
people’s lay theories of what makes them happy or unhappy, including lay theories about contrast effects, adaptation and certainty. These lay theories are usually learned in situations where they are valid, but are then over-generalized to situations where they do not hold. |
Belief bias |
An effect where someone's evaluation of the logical strength of an argument is biased by the believability of the conclusion. |
Bias blind spot |
The tendency to see oneself as less biased than other people, or to be able to identify more cognitive biases in others than in oneself. |
Biased Generalizing |
Generalizing from a biased sample. Using an unrepresentative sample and overestimating the strength of an argument based on that sample. |
Biased statistics |
A hasty generalization is a fallacy of jumping to conclusions in which the conclusion is a generalization. See also Biased Statistics, unrepresentative generalization, faulty generalization. |
Bizarreness effect |
Bizarre material is better remembered than common material. |
Black Swan blindness |
The underestimation of the role of the Black Swan - an outlier event that is outside the realm of regular expectations because nothing in the past can convincingly point to its possibility...that carries an extreme impact...[and for which] human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable - and occasional overestimation of a specific one. |
Butterfly effect |
See explosive forecasting difficulty |
Buyer's Stockholm Syndrome |
See cognitive dissonance avoidance and choice-supportive bias (also called "post purchase rationalization," "rosy retrospection", and "Buyer's Stockholm syndrome") |
Bystander apathy |
A series of experiments by Latane and Darley (1969) uncovered the bystander effect, also known as bystander apathy, in which larger numbers of people are less likely to act in emergencies - not only individually, but collectively. 75% of subjects alone in a room, noticing smoke entering from under a door, left to report it. When three naive subjects were present, the smoke was reported only 38% of the time. A naive subject in the presence of two confederates who purposely ignored the smoke, even when the room became hazy, left to report the smoke only 10% of the time. A college student apparently having an epileptic seizure was helped 85% of the time by a single bystander and 31% of the time by five bystanders. The bystander effect is usually explained as resulting from diffusion of responsibility and pluralistic ignorance. Being part of a group reduces individual responsibility. Everyone hopes that someone else will handle the problem instead, and this reduces the individual pressure to the point that no one does anything. Support for this hypothesis is adduced from manipulations in which subjects believe that the victim is especially dependent on them; this reduces the bystander effect or negates it entirely. |
Categorization |
As you walk around the world you gather examples, and every natural category, as you learn, you acquire examples. The act of conversation or the act of diagnosis ultimately amounts to matching the incoming information with some prior example in memory that has many of the characteristics, uniquely, of the new stimulus. |
Certainty bias |
Preference for reducing a small risk to zero over a greater reduction in a larger risk. (also called "certainty bias", "certainty effect" and "100% effect") |
Change bias |
After an investment of effort in producing change, remembering one's past performance as more difficult than it actually was. |
Change blindness |
a surprising perceptual phenomenon that occurs when a change in a visual stimulus is introduced and the observer does not notice it. For example, observers often fail to notice major differences introduced into an image while it flickers off and on again. People's poor ability to detect changes has been argued to reflect fundamental limitations of human attention. |
Cheerleader effect |
The tendency for people to appear more attractive in a group than in isolation. |
Chemical arousal |
leads to a higher rate of perception errors and bad decisions. |
Choice blindness |
people "...often fail to notice glaring mismatches between their intentions and outcomes, while nevertheless being prepared to offer introspectively derived reasons for why they chose the way they did." In other words, not only do we frequently fail to notice when we are presented with something different from what we really want; we will also come up with reasons to defend this "choice." Choice blindness is a form of inattentional blindness, a phenomenon in which people fail to notice unexpected stimuli in the world around them. |
Choice overload |
See paradox of choice |
Choice-supportive bias |
The tendency to remember one's choices as better than they actually were. (also called "post purchase rationalization," "rosy retrospection", and "Buyer's Stockholm syndrome") |
Circular reasoning |
Circular reasoning occurs when the reasoner begins with what he or she is trying to end up with. The most well known examples are cases of the fallacy of begging the question. However, if the circle is very much larger, including a wide variety of claims and a large set of related concepts, then the circular reasoning can be informative and so is not considered to be fallacious. For example, a dictionary contains a large circle of definitions that use words which are defined in terms of other words that are also defined in the dictionary. Because the dictionary is so informative, it is not considered as a whole to be fallacious. However, a small circle of definitions is considered to be fallacious. |
Clustering illusion |
The tendency to overestimate the importance of small runs, streaks, or clusters in large samples of random data (that is, seeing phantom patterns). |
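A small simulation (illustrative, not from the source) showing why such streaks should be expected: even a fair coin produces surprisingly long runs.

```python
import random

def longest_run(flips):
    """Length of the longest run of identical outcomes in a sequence."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(0)
runs = [longest_run([random.choice("HT") for _ in range(100)]) for _ in range(10_000)]
print(sum(runs) / len(runs))  # typically close to 7: long "streaks" arise by chance alone
```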
Cognitive dissonance avoidance |
Cognitive dissonance is the discomfort we get when we’re trying to hold onto two competing ideas or theories. For instance, if we think of ourselves as being nice to strangers, but then we see someone fall over and don’t stop to help them, we would then have conflicting views about ourselves: we are nice to strangers, but we weren’t nice to the stranger who fell over. This creates so much discomfort that we have to change our thinking to match our actions—i.e. we start thinking of ourselves as someone who is not nice to strangers, since that’s what our actions proved. So in the case of our impulse shopping trip, we would need to rationalize the purchases until we truly believe we needed to buy those things, so that our thoughts about ourselves line up with our actions (making the purchases). |
Commitment heuristic |
The commitment heuristic is the tendency to believe that a behavior is correct to the extent that it is consistent with a prior commitment we have made. This heuristic is deeply rooted in our desire to be and appear consistent with our words, beliefs, attitudes and deeds (Aronson, 1999). Public image aside, the heuristic works because it provides us a shortcut through complexity. Rather than sift through all the relevant information with each new development, we merely make a decision that is consistent with an earlier one. Given the ubiquity (many say the necessity) of the commitment heuristic in modern life, it’s no surprise that our unconscious reliance on it frequently makes us unwitting shills in countless retail, charity and political campaigns (Cialdini, 2001). |
Common Cause |
This fallacy occurs during causal reasoning when a causal connection between two kinds of events is claimed when evidence is available indicating that both are the effect of a common cause. |
Concorde fallacy |
(see, e.g., Dawkins & Carlisle, 1976), aptly named for the supersonic airplane. The plane's dim financial prospects were known long before the plane was completed, but the two governments financing the project decided to continue anyway on the grounds that they had already invested a lot of money. In short, they had "too much invested to quit" (Teger, 1980). |
Confirmation bias |
we subconsciously begin to ignore or dismiss anything that threatens our world views, since we surround ourselves with people and information that confirm what we already think |
Conflicts create productive change trap |
A direct implication of dialectical thought is the idea that you can create change by creating conflict and that conflict will produce beneficial results. |
Confusing an explanation with an excuse |
Treating someone's explanation of a fact as if it were a justification of the fact. Explaining a crime should not be confused with excusing the crime, but it too often is. |
Congruence bias |
The tendency to test hypotheses exclusively through direct testing, instead of testing possible alternative hypotheses. |
Conjunction fallacy |
The probability of a conjunction is never greater than the probability of its conjuncts. In other words, the probability of two things being true can never be greater than the probability of one of them being true, since in order for both to be true, each must be true. However, when people are asked to compare the probabilities of a conjunction and one of its conjuncts, they sometimes judge that the conjunction is more likely than one of its conjuncts. This seems to happen when the conjunction suggests a scenario that is more easily imagined than the conjunct alone. |
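A worked illustration of the inequality; the Linda-style probabilities below are assumptions chosen only to show that the conjunction can never be the more probable option.

```python
# Illustrative probabilities only: whatever values are assumed, the conjunction
# "bank teller AND feminist" cannot exceed "bank teller" alone.
p_teller = 0.05
p_feminist_given_teller = 0.30
p_teller_and_feminist = p_teller * p_feminist_given_teller

assert p_teller_and_feminist <= p_teller
print(p_teller, p_teller_and_feminist)  # 0.05 vs 0.015: the conjunction is necessarily rarer
```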
Conservatism (Bayesian) |
The tendency to revise one's belief insufficiently when presented with new evidence. |
Conservatism or regressive bias |
Tendency to remember high values and high likelihoods/probabilities/frequencies lower than they actually were and low ones higher than they actually were. Based on the evidence, memories are not extreme enough. |
Consistency bias |
Incorrectly remembering one's past attitudes and behaviour as resembling present attitudes and behaviour. |
Context effect |
That cognition and memory are dependent on context, such that out-of-context memories are more difficult to retrieve than in-context memories (e.g., recall time and accuracy for a work-related memory will be lower at home, and vice versa) |
Contrast effect |
The enhancement or reduction of a certain perception's stimuli when compared with a recently observed, contrasting object. |
Converse accident |
If we reason by paying too much attention to exceptions to the rule, and generalize on the exceptions, our reasoning contains this fallacy. This fallacy is the converse of the accident fallacy. It is a kind of Hasty Generalization, by generalizing too quickly from a peculiar case. |
Cryptomnesia |
A form of misattribution where a memory is mistaken for imagination, because there is no subjective experience of it being a memory. |
Cumulative advantage |
See the Matthew effect |
Current Moment Bias |
We humans have a really hard time imagining ourselves in the future and altering our behaviors and expectations accordingly. Most of us would rather experience pleasure in the current moment, while leaving the pain for later. This is a bias that is of particular concern to economists (i.e. our unwillingness to not overspend and save money) and health practitioners. Indeed, a 1998 study showed that, when making food choices for the coming week, 74% of participants chose fruit. But when the food choice was for the current day, 70% chose chocolate. (also called "present bias" and "hyperbolic discounting") |
Curse of knowledge |
a cognitive bias that leads better-informed parties to find it extremely difficult to think about problems from the perspective of lesser-informed parties. The effect was first described in print by the economists Colin Camerer, George Loewenstein and Martin Weber, though they give original credit for suggesting the term to Robin Hogarth. |
Decision fatigue |
Our brain gets tired just like a muscle. When our brain is exhausted, we tend to make worse decisions. |
Decision paralysis |
See paradox of choice |
Decoy effects |
Certain choices can also be deliberate decoys, not intended to be chosen, but solely included to make the other choices look more attractive in comparison. |
Deese–Roediger–McDermott paradigm |
A participant listening to an experimenter read lists of thematically related words (e.g. table, couch, lamp, desk); then after some period of time the experimenter will ask if a word was presented in the list. Participants often report that related but non-presented words (e.g. chair) were included in the encoding series, essentially suggesting that they ‘heard’ the experimenter say these non-presented words (or critical lures). Incorrect ‘yes’ responses to critical lures, often referred to as false memories, are remarkably high under standard DRM conditions. |
Default option |
“[I]f, for a given choice, there is a default option—an option that will obtain if the chooser does nothing—then we can expect a large number of people to end up with that option, whether or not it is good for them.” Richard Thaler and Cass Sunstein, Nudge, 2008, p.84. There is an asymmetry between the ‘present state’ in any situation and a change in behaviour. |
Defensive attribution hypothesis |
Attributing more blame to a harm-doer as the outcome becomes more severe or as personal or situational similarity to the victim increases. |
Denomination effect |
The tendency to spend more money when it is denominated in small amounts (e.g. coins) rather than large amounts (e.g. bills). |
Denominator neglect |
Kahneman/Paul Slovic: "…explains why different ways of communicating risks vary so much in their effects." The superficial processing characteristic of System 1. People make remarkably foolish choices when they choose between drawing from urn A, which contains 10 marbles of which 1 is red, or from urn B, which contains 100 marbles of which 8 are red. A is a 10% probability and B an 8% probability, yet 30-40% of students choose B. "…fluency, vividness and the ease of imagining contribute to decision weights…" "Vivid imagery contributes to denominator neglect." |
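The urn comparison above made explicit as a sketch: only the ratio matters, not the absolute number of red marbles.

```python
# Urn A: 1 red of 10; urn B: 8 red of 100. The larger count of red marbles in B
# is irrelevant - only the proportion matters.
urn_a_red, urn_a_total = 1, 10
urn_b_red, urn_b_total = 8, 100

p_a = urn_a_red / urn_a_total  # 0.10
p_b = urn_b_red / urn_b_total  # 0.08
print(p_a, p_b, "A is the better draw" if p_a > p_b else "B is the better draw")
```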
Denying the antecedent |
You are using this fallacy if you deny the antecedent of a conditional and then suppose that doing so is a sufficient reason for denying the consequent. This formal fallacy is often mistaken for modus tollens, a valid form of argument using the conditional. A conditional is an if-then statement; the if-part is the antecedent, and the then-part is the consequent. |
Destroying the exception |
The informal fallacy of accident (also called destroying the exception or a dicto simpliciter ad dictum secundum quid) is a deductively valid but unsound argument occurring in statistical syllogisms (an argument based on a generalization) when an exception to a rule of thumb is ignored. |
Dicto simpliciter ad dictum |
The informal fallacy of accident (also called destroying the exception or a dicto simpliciter ad dictum secundum quid) is a deductively valid but unsound argument occurring in statistical syllogisms (an argument based on a generalization) when an exception to a rule of thumb is ignored. |
Diminishing sensitivity |
Applies to both sensory dimensions and the evaluation of changes in wealth. A weak light turned on has a large effect in a dark room; the same increment of light may be undetectable in a brightly illuminated room. The subjective difference between $900 and $1000 is much smaller than between $100 and $200. A System 1 characteristic. |
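A minimal sketch of the $100-$200 versus $900-$1000 comparison using a concave value function; the square root is chosen purely for illustration, not as the actual prospect-theory function.

```python
import math

def value(amount):
    """A concave 'felt value' function - square root, purely for illustration."""
    return math.sqrt(amount)

step_low = value(200) - value(100)    # felt gain from $100 -> $200
step_high = value(1000) - value(900)  # felt gain from $900 -> $1000
print(round(step_low, 2), round(step_high, 2))  # ~4.14 vs ~1.62: same $100, much smaller felt change
```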
Disconfirmation bias |
When people subject disagreeable evidence to more scrutiny than agreeable evidence, this is known as motivated skepticism or disconfirmation bias. Disconfirmation bias is especially destructive for two reasons: First, two biased reasoners considering the same stream of evidence can shift their beliefs in opposite directions - both sides selectively accepting only favorable evidence. Gathering more evidence may not bring biased reasoners to agreement. Second, people who are more skilled skeptics - who know a larger litany of logical flaws - but apply that skill selectively, may change their minds more slowly than unskilled reasoners. |
Distinction bias |
occurs because predictors and experiencers are in different evaluation modes |
Distinction bias |
The tendency to view two options as more dissimilar when evaluating them simultaneously than when evaluating them separately. |
Dunning–Kruger Effect |
An effect in which incompetent people fail to realise they are incompetent because they lack the skill to distinguish between competence and incompetence. Actual competence may weaken self-confidence, as competent individuals may falsely assume that others have an equivalent understanding. (Also called "inability to self assess," "illusory superiority," "above average effect," "superiority bias," "leniency error," "sense of relative superiority," "primus inter pares effect" and "Lake Wobegon effect") |
Duration neglect |
The neglect of the duration of an episode in determining its value |
Egocentric bias |
Occurs when people claim more responsibility for themselves for the results of a joint action than an outside observer would credit them. |
Egocentric bias |
Recalling the past in a self-serving manner, e.g., remembering one's exam grades as being better than they were, or remembering a caught fish as bigger than it really was. |
Ellsberg paradox |
The Ellsberg paradox is a paradox in decision theory in which people's choices violate the postulates of subjective expected utility. It is generally taken to be evidence for ambiguity aversion. The paradox was popularized by Daniel Ellsberg, although a version of it was noted considerably earlier by John Maynard Keynes. The basic idea is that people overwhelmingly prefer taking on risk in situations where they know specific odds rather than an alternative risk scenario in which the odds are completely ambiguous—they will always choose a known probability of winning over an unknown probability of winning even if the known probability is low and the unknown probability could be a guarantee of winning. That is, given a choice of risks to take (such as bets), people "prefer the devil they know" rather than assuming a risk where odds are difficult or impossible to calculate. |
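A sketch of the classic one-urn version (30 red balls plus 60 black or yellow balls in unknown proportion): no single belief about the urn's composition makes the typical pair of choices consistent with expected value.

```python
def choices_consistent(black_balls):
    """Check whether believing the urn holds `black_balls` black balls (of 60 non-red)
    rationalizes both typical Ellsberg choices by expected value."""
    yellow_balls = 60 - black_balls
    p_red, p_black, p_yellow = 30 / 90, black_balls / 90, yellow_balls / 90
    prefers_bet_on_red = p_red > p_black                              # typical choice 1
    prefers_black_or_yellow = p_black + p_yellow > p_red + p_yellow   # typical choice 2
    return prefers_bet_on_red and prefers_black_or_yellow

print(any(choices_consistent(b) for b in range(61)))  # False: no belief fits both choices
```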
Emotion |
Most people believe that the choices they make result from a rational analysis of available alternatives. In reality, however, emotions greatly influence and, in many cases, even determine our decisions. |
Empathy gap |
The tendency to underestimate the influence or strength of feelings, in either oneself or others. |
Endowment effect |
The fact that people often demand much more to give up an object than they would be willing to pay to acquire it. |
Epistemic arrogance |
Our hubris concerning the limits of our knowledge. Our knowledge does grow, but it is threatened by greater increases in confidence, which make our increase in knowledge at the same time an increase in confusion, ignorance and conceit. People cannot guess a statistic (population of western Ukraine, number of lovers that Catherine the Great had, etc.) within a 2% error rate [the range within a 98% certainty] - they average 15% - 30%! We mostly underestimate the error rate required to include the actual result. We overestimate what we know and underestimate uncertainty, by compressing the range of possible uncertain states (i.e., by reducing the space of the unknown). Any prediction about the future is likely to be infected by it. |
Epistemic opacity |
Randomness is the result of incomplete information at some layer. It is functionally indistinguishable from "true" or "physical" randomness. |
Escalation of Commitment |
Describes the phenomenon where people justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting that the cost, starting today, of continuing the decision outweighs the expected benefit. |
Essentialism |
Categorizing people and things according to their essential nature, in spite of variations. |
Exaggerated expectation |
Based on the estimates, real-world evidence turns out to be less extreme than our expectations (conditionally inverse of the conservatism bias). |
Exclusive alternatives trap |
Traditional logic tends to make us think in terms of either-or analysis. Many situations demand that we juggle more than two alternatives. |
Experimenter's or expectation bias |
The tendency for experimenters to believe, certify, and publish data that agree with their expectations for the outcome of an experiment, and to disbelieve, discard, or downgrade the corresponding weightings for data that appear to conflict with those expectations. |
Expert problem |
Not recognizing the difference between "true experts" and those who are not. Those that deal with the future and base their skills on the non-repeatable past (except weather and short-term physical processes) have an expert problem. They are close to a fraud, performing no better than a computer, blinded by intuition. |
Explosive forecasting difficulty |
Not recognizing the limits that nonlinearities place on forecasting. Need for an increasing amount of precision in assumptions as you forecast into the future. The world is a dynamical system.... Multiplicative difficulty leading to the need for greater and greater precision in assumptions. "The butterfly effect." |
Extrinsic incentives bias |
An exception to the fundamental attribution error: the tendency to view others as having (situational) extrinsic motivations while attributing (dispositional) intrinsic motivations to oneself. |
Fading affect bias |
A bias in which the emotion associated with unpleasant memories fades more quickly than the emotion associated with positive events. |
Fallacy of origins |
See genetic fallacy, also known as the fallacy of origins or fallacy of virtue: a fallacy of irrelevance where a conclusion is suggested based solely on something or someone's origin rather than its current meaning or context. |
Fallacy of silent evidence |
Looking at history, we do not see the full story, only the rosier parts of the process. |
Fallacy of the heap |
If we improperly reject a vague claim because it is not as precise as we'd like, then we are using the line-drawing fallacy. Being vague is not being hopelessly vague. Also called the Bald Man Fallacy, the Fallacy of the Heap and the Sorites Fallacy. |
Fallacy of virtues |
See genetic fallacy, also known as the fallacy of origins or fallacy of virtue: a fallacy of irrelevance where a conclusion is suggested based solely on something or someone's origin rather than its current meaning or context. |
False Analogy |
The problem is that the items in the analogy are too dissimilar. When reasoning by analogy, the fallacy occurs when the analogy is irrelevant or very weak or when there is a more relevant disanalogy. See also Faulty Comparison. |
False causality |
One of the most pronounced kinds of biases: if something happens right after something else - they co-occur in time - we tend to think that it happened because of that something. If you are sick and see the doctor when you are feeling worse and are given a treatment, chances are that right afterwards you will be feeling a little bit better, and you are going to think that the treatment is effective. (also called the "false cause fallacy," the "post-hoc fallacy" or "Post hoc, ergo propter hoc," “after this, therefore because of this”) |
False consensus effect |
We see more consensus for our beliefs than is actually the case. Some of it is due to...biased sampling. Some of it is due to how we resolve the ambiguity inherent in the kinds of issues that come up or the questions that we're asked [we may define the words used in the question differently]. Most people recognize that not everyone agrees with them, so they make allowance for that. "Ok, I think this. Not everyone's going to think that." What is harder to make allowance for is that "I'm making a judgment about this thing. You may be making a judgment about a very different thing, even though we call it the same name." |
False dilemma |
A reasoner who unfairly presents too few choices and then implies that a choice must be made among this short menu of choices is using the false dilemma fallacy, as does the person who accepts this faulty reasoning. |
False memory reconstruction |
Confidence is not a good indicator that your memory is accurate, because false memories can be expressed with a lot of confidence, they can be expressed with a lot of detail, and they can be expressed with a lot of emotion, so they have the same characteristics as true memories and can mislead people into thinking that something is real when it's not. |
Familiarity heuristic |
The tendency to believe that our behavior is correct to the extent that we have done it before. In essence, this heuristic amounts to a kind of mental habit where our past actions are proof that a particular behavior is appropriate. For example, when we drive to work each day, we generally don’t review the pros and cons of all possible routes; we simply take the most familiar one. The familiarity heuristic is especially powerful because it is simple and it frees us from having to go through the same time-consuming decision processes again and again, only to arrive at what is usually the same conclusion. People unconsciously use this heuristic dozens of times each day, so it’s no surprise that it is routinely exploited in the advertising and retail industries (Underhill, 1999) |
Faulty Comparison |
If you try to make a point about something by comparison, and if you do so by comparing it with the wrong thing, then your reasoning uses the fallacy of faulty comparison or the fallacy of questionable analogy. |
Faulty generalization |
A fallacy produced by some error in the process of generalizing. See Hasty Generalization or Unrepresentative Generalization for examples. |
Focalism |
predictors pay too much attention to the central event and overlook context events that will moderate the central event’s impact |
Focusing effect |
The tendency to place too much importance on one aspect of an event. |
Fooled by randomness |
Relying on the notion of "average" in the presence of variance. The general confusion between luck and determinism. |
Foot-in-the-door technique |
Freedman and Fraser (1966) demonstrated that a person who first complies with a small request is more likely to comply with a larger request later. When the large and small requests are for related activities that differ in their cost to the complying person, the phenomenon is called the foot-in-the-door technique. An example would be first having people sign a petition to encourage legislators to support safe driving laws. Later, the petition signers are asked to display on their lawn a large sign that reads, “Drive safely.” Most explanations of the foot-in-the-door technique are couched in terms of self-perception theory (Bem, 1967). A person observes himself or herself complying with a request to support good driving or a charitable |
Force can do it trap |
In the grips of this trap we think in terms only of forcing a solution on the situation. |
Forer effect or Barnum effect |
The observation that individuals will give high accuracy ratings to descriptions of their personality that supposedly are tailored specifically for them, but are in fact vague and general enough to apply to a wide range of people. This effect can provide a partial explanation for the widespread acceptance of some beliefs and practices, such as astrology, fortune telling, graphology, and some types of personality tests. |
Forever changeless trap |
In this trap we think of the current condition as being the same forever. |
Framing |
can be seen as covering a huge range of practices and techniques: anything which shifts people’s reference points (including anchoring and decoy effects) or presents a situation or choices differently (Thaler and Sunstein’s choice architecture (2008)). From the design for behaviour change perspective, it is probably worth considering reframing as an approach relevant to any situation where it is possible to elicit a different point of view or behaviour by restructuring the way information is presented, particularly if the reframing can take into account loss aversion. |
Frequency illusion |
See observational selection bias |
Functional fixedness |
Limits a person to using an object only in the way it is traditionally used. |
Fundamental attribution error |
people's tendency to place an undue emphasis on internal characteristics to explain someone else's behavior in a given situation, rather than considering external factors. |
Fundamental cognitive error |
that we don't recognize that we've made an interpretation and that there are a million other ways that it could have been interpreted. |
Future blindness |
Our natural inability to take into account the properties of the future - like autism, which prevents one from taking into account the minds of others. |
Gambler’s fallacy |
You flip a coin, over and over, each time guessing whether it will turn up heads or tails. You have a 50/50 chance of being right each time. Now suppose you’ve flipped the coin five times already and it’s turned up heads every time. Surely, surely, the next one will be tails, right? The chances of it being tails must be higher now, right? Well, no. The chances of tails turning up are 50/50. Every time. Even if you turned up heads the last twenty times. The odds don’t change. The gambler’s fallacy is a glitch in our thinking—once again, we’re proven to be illogical creatures. The problem occurs when we place too much weight on past events and confuse our memory with how the world actually works, believing that they will have an effect on future outcomes (or, in the case of Heads or Tails, any weight, since past events make absolutely no difference to the odds). |
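A small simulation of the coin example; conditioning on a run of five heads leaves the next-flip frequency at roughly 0.5.

```python
import random

random.seed(1)
flips = [random.choice("HT") for _ in range(1_000_000)]

# Collect the flip that follows every run of five heads.
next_flips = [flips[i + 5] for i in range(len(flips) - 5) if flips[i:i + 5] == list("HHHHH")]
print(round(next_flips.count("H") / len(next_flips), 3))  # ~0.5: the streak changes nothing
```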
Generation effect (Self-generation effect) |
That self-generated information is remembered best. For instance, people are better able to recall memories of statements that they have generated than similar statements generated by others. |
Genetic fallacy |
also known as fallacy of origins, fallacy of virtue, is a fallacy of irrelevance where a conclusion is suggested based solely on something or someone's origin rather than its current meaning or context. |
Group attribution error |
The biased belief that the characteristics of an individual group member are reflective of the group as a whole or the tendency to assume that group decision outcomes reflect the preferences of group members, even when information is available that clearly suggests otherwise. |
Group polarization effect |
Some choice dilemmas consistently produce caution shifts. Such effects commonly occur when the risky option involves the possibility of ruining one’s life, severely harming others, or dying. When people are initially conservative, finding the risky option acceptable only if its odds of success are very high, then after discussion they become even more conservative, becoming even less willing to accept a probability that the risky option will fail. The phenomenon has been more aptly named the “group polarization” effect because the shift can be either toward risk or toward caution, generally reflecting the initial biases of the individual discussants. Group discussion has been described by one European researcher as acting like a developer on exposed film: “it brings out the picture, but the outcome is predetermined.” Group polarization is not due to greater powers of leadership in those who advocate the extremes, or due to pressures to conform. Rather, the polarization effect is caused by social comparison and the number of arguments pro or con presented during discussion. First, before hearing the positions of others, people generally believe that they have answered in a more desirable fashion than most others would. Since not everyone can be better than average, group discussion generally provides people with surprising information as to what constitutes a socially ideal level of risk or caution, causing an impetus to polarize in the favored direction. Second, the initial individual leanings toward risk or caution generally reflect the pool of arguments that will arise during discussion. Given that with most problems not everyone will have thought of all of these arguments, the difference between the number of arguments in favor of or against risk is likely to widen during discussion, favoring the initially preferred pole. People are strongly swayed by new arguments in favor of a particular position. As one would expect, little movement is caused by discussion of highly familiar issues. From The Psychology of Risk: A Brief Primer by Paul Andreassen, The Jerome Levy Economics Institute of Bard College Working Paper No. 87, March 1993 |
Group think |
A reasoner uses the group think fallacy if he or she substitutes pride of membership in the group for reasons to support the group's policy. If that's what our group thinks, then that's good enough for me. It's what I think, too. "Blind" patriotism is a rather nasty version of the fallacy. |
Guilt by association |
Guilt by association is a version of the ad hominem fallacy in which a person is said to be guilty of error because of the group he or she associates with. The fallacy occurs when we unfairly try to change the issue to be about the speaker's circumstances rather than about the speaker's actual argument. Also called "Ad Hominem, Circumstantial." |
Halo effect |
The tendency for a person's positive or negative traits to "spill over" from one personality area to another in others' perceptions of them (see also physical attractiveness stereotype). |
Hard–easy effect |
Based on a specific level of task difficulty, the confidence in judgments is too conservative and not extreme enough. |
Hasty generalization |
A hasty generalization is a fallacy of jumping to conclusions in which the conclusion is a generalization. See also Biased Statistics. |
High stress |
leads to a higher rate of perception errors and bad decisions. |
Hindsight bias |
one version of a broader phenomenon known as the curse of knowledge - mental trap that causes us to feel more confident in our knowledge of events after they’ve happened. It prevents us from learning: If you feel like you knew it all along, it means you won’t stop to examine why something really happened. Can also make us overconfident in our own judgment. If you feel like you’re usually right, why would you be wrong next time? |
Hostile media effect |
The tendency to see a media report as being biased, owing to one's own strong partisan views. |
Hot-hand fallacy |
The "hot-hand fallacy" (also known as the "hot hand phenomenon" or "hot hand") is the fallacious belief that a person who has experienced success has a greater chance of further success in additional attempts. |
Humor effect |
That humorous items are more easily remembered than non-humorous ones, which might be explained by the distinctiveness of humor, the increased cognitive processing time to understand the humor, or the emotional arousal caused by the humor. |
Hyperbolic discounting |
Discounting is the tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs. Hyperbolic discounting leads to choices that are inconsistent over time – people make choices today that their future selves would prefer not to have made, despite using the same reasoning. Also known as current moment bias, present bias, and related to dynamic inconsistency. |
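A minimal sketch of the preference reversal this produces, using the simple hyperbolic form value = amount / (1 + k × delay); the discount rate k and the dollar amounts are illustrative assumptions.

```python
def felt_value(amount, delay_days, k=0.1):
    """Hyperbolic discounting: value = amount / (1 + k * delay). k is an assumption."""
    return amount / (1 + k * delay_days)

# Today, $50 now beats $100 in 30 days...
print(felt_value(50, 0), felt_value(100, 30))      # 50.0 vs 25.0
# ...but push both options a year further out and the preference reverses.
print(round(felt_value(50, 365), 2), round(felt_value(100, 395), 2))  # 1.33 vs 2.47
```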
Identifiable victim effect |
The tendency to respond more strongly to a single identified person at risk than to a large group of people at risk. |
IKEA effect |
The tendency for people to place a disproportionately high value on objects that they partially assembled themselves, such as furniture from IKEA, regardless of the quality of the end result. |
Illusion of asymmetric insight |
People perceive their knowledge of their peers to surpass their peers' knowledge of them. |
Illusion of certainty |
See pseudo certainty effect |
Illusion of control |
The tendency to overestimate one's degree of influence over other external events. |
Illusion of external agency |
When people view self-generated preferences as instead being caused by insightful, effective and benevolent agents |
Illusion of transparency |
People overestimate others' ability to know them, and they also overestimate their ability to know others. |
Illusion of truth |
That people are more likely to identify as true statements those they have previously heard (even if they cannot consciously remember having heard them), regardless of the actual validity of the statement. In other words, a person is more likely to believe a familiar statement than an unfamiliar one. (Also called the "illusory truth effect.") |
Illusion of validity |
Belief that further acquired information generates additional relevant data for predictions, even when it evidently does not. |
Illusory correlation |
Inaccurately perceiving a relationship between two unrelated events. |
Illusory superiority |
Overestimating one's desirable qualities, and underestimating one's undesirable qualities, relative to other people. Inability to self assess (also called "illusory superiority," "above average effect," "superiority bias," "leniency error," "sense of relative superiority," "primus inter pares effect," "Dunning–Kruger effect" and "Lake Wobegon effect") |
Immune neglect |
After an emotion-evoking event happens, people tend to rationalize or make sense of it, thereby damping its emotional impact. |
Impact bias |
People often overestimate the impact (both intensity and duration) of an affective event |
Impulsivity |
A major cause of sub-optimal decisions is impulsivity – the choice of an immediately gratifying option at the cost of long-term happiness. |
Inability to predict impact on self and others |
People who provide data in longitudinal studies (e.g., Harvard women reporting daily on how happy and satisfied they were with their day and then answering a number of questions to evaluate what went on in the day - sleep, sex, work issues, etc.) see no correlation between the factors and their end-of-day assessment - and don't see it for others, either. Yet researchers can see at the end of the study how much each factor actually affected mood, despite the fact that there is actually no correlation whatsoever between the actual impact on mood and people's reports about their mood. Except for extraordinary things (like a great report card), people are very poor at predicting what will make them happy. People are very poor at predicting how things will affect them. |
Inability to self assess |
Ask a large group to rate whether their driving skills are in the bottom half. Half should say yes, but probably 1 out of 100 will say yes. The evidence is absolutely crushing that people basically start with a premise that they are at 70 percent and then go up or down from there. Famous psychologist: "How can I know what I don't know when I don't know what I don't know?" You have no way of judging what the universe of that domain is, so your only guess is to say "I guess I only got 70% of it." Experts are the exception because they do know the domain. The more expert you are the better you perform, better than chance (in medicine, from a 50% chance level to roughly 60% correct diagnoses for experts), but far from perfect. (Also called "illusory superiority," "above average effect," "superiority bias," "leniency error," "sense of relative superiority," "primus inter pares effect," "Dunning–Kruger effect" and "Lake Wobegon effect.") |
Inattention blindness |
A phenomenon in which people fail to notice unexpected stimuli in the world around them. |
Inconsistency |
The fallacy occurs when we accept an inconsistent set of claims, that is, when we accept a claim that logically conflicts with other claims we hold. |
Independent self trap |
In this trap we separate organism from environment, ourselves from our interdependence with others. |
Inductive conversion |
Improperly reasoning from a claim of the form "All As are Bs" to "All Bs are As" or from one of the form "Many As are Bs" to "Many Bs are As" and so forth. |
Inevitable antagonism trap |
In this trap we assume that there is inevitable conflict between persons, organisms, groups, nation-states. |
Information bias |
The tendency to seek information even when it cannot affect action. |
Ingroup bias |
A manifestation of our innate tribalistic tendencies. And strangely, much of this effect may have to do with oxytocin — the so-called "love molecule." This neurotransmitter, while helping us to forge tighter bonds with people in our ingroup, performs the exact opposite function for those on the outside — it makes us suspicious, fearful, and even disdainful of others. Ultimately, the ingroup bias causes us to overestimate the abilities and value of our immediate group at the expense of people we don't really know. |
Insensitivity to sample size |
The tendency to under-expect variation in small samples |
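A small simulation in the spirit of the classic "hospital problem" (the birth counts below are illustrative): days with more than 60% boys are far more common at the smaller hospital, purely because small samples vary more.

```python
import random

def share_of_boy_heavy_days(births_per_day, days=20_000):
    """Fraction of simulated days on which more than 60% of births are boys."""
    random.seed(2)
    heavy = sum(
        1 for _ in range(days)
        if sum(random.random() < 0.5 for _ in range(births_per_day)) / births_per_day > 0.6
    )
    return heavy / days

print(round(share_of_boy_heavy_days(15), 2))  # small hospital, 15 births/day: ~0.15
print(round(share_of_boy_heavy_days(45), 2))  # large hospital, 45 births/day: ~0.07
```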
Insufficient statistics |
Drawing a statistical conclusion from a set of data that is clearly too small. |
Interview illusion |
See prediction using limited experience and information |
Inverse problem |
Difficulty in recreating a past state from the current results. Humpty Dumpty presents a folklore example. It's the arrow of time problem. A manifestation of the error of Platonicity: thinking that the Platonic form you have in your mind is the one you are observing outside the window. We don't know the statistical properties until after the fact. Many theories and many distributions can fit a set of data; the number of possible models explodes. |
Investment trap |
See sunk-cost fallacy |
Irrational escalation |
The phenomenon where people justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting that the decision was probably wrong. Also known as the sunk cost fallacy. |
Irrationality |
Kahneman on Keith Stanovich's research, which distinguishes between intelligence and rationality. Rationality is in effect the ability to deploy System 2 where it is needed and to interfere with the mistakes that System 1 is apt to produce. |
Isolated problem trap |
In the grip of this trap we regard problems as unconnected to their wider contexts. |
Jumping to conclusions |
It is not always a mistake to make a quick decision, but when we draw a conclusion without taking the trouble to acquire enough of the relevant evidence, our reasoning uses the fallacy of jumping to conclusions, provided there was sufficient time to acquire and assess that extra evidence, and provided that the extra effort it takes to get the evidence isn't prohibitive. |
Just-world hypothesis |
The tendency for people to want to believe that the world is fundamentally just, causing them to rationalize an otherwise inexplicable injustice as deserved by the victim(s). |
Lag effect |
See spacing effect. |
Lake Wobegon effect |
Inability to self assess (also called "illusory superiority," "above average effect," "superiority bias," "leniency error," "sense of relative superiority," "primus inter pares effect," "Dunning–Kruger effect" and "Lake Wobegon effect") |
Lay rationalism |
Decision-makers strive to be rational but, paradoxically, the desire for rationality can lead to less rational decisions. When decision-makers try to 'do the rational thing', it can prevent them from choosing what they predict to be experientially optimal. |
Leniency error |
Inability to self assess (also called "illusory superiority," "above average effect," "superiority bias," "leniency error," "sense of relative superiority," "primus inter pares effect," "Dunning–Kruger effect" and "Lake Wobegon effect") |
Less-is-better effect |
The tendency to prefer a smaller set to a larger set judged separately, but not jointly |
Leveling and sharpening |
Memory distortions introduced by the loss of details in a recollection over time, often concurrent with sharpening or selective recollection of certain details that take on exaggerated significance in relation to the details or aspects of the experience lost through leveling. Both biases may be reinforced over time, and by repeated recollection or re-telling of a memory. |
Liking |
People are more likely to be persuaded or influenced by people that they like. “Despite the entertaining and persuasive salesmanship of the Tupperware demonstrator, the true request to purchase the product does not come from this stranger; it comes from a friend to every woman in the room...[Customers] buy from and for a friend rather than an unknown salesperson” (Cialdini, 2007, p.168) |
Line-Drawing |
If we improperly reject a vague claim because it is not as precise as we'd like, then we are using the line-drawing fallacy. Being vague is not being hopelessly vague. Also called the Bald Man Fallacy, the Fallacy of the Heap and the Sorites Fallacy. |
Loss aversion |
When directly compared or weighed against each other, losses loom larger than gains. This asymmetry between the power of positive and negative expectations has an evolutionary history. Organisms that treat threats as more urgent than opportunities have a better chance to survive and reproduce. A characteristic of System 1. |
Loss avoidance |
Paying more attention to avoiding losses than to gaining things. Another way to state the sunk cost fallacy. Psychologist Daniel Kahneman explains this in his book, Thinking, Fast and Slow: Organisms that placed more urgency on avoiding threats than they did on maximizing opportunities were more likely to pass on their genes. So, over time, the prospect of losses has become a more powerful motivator on your behavior than the promise of gains. |
Low-ball procedure |
When the small and large requests are for the same target behavior, the technique is called the “low-ball” procedure (Cialdini, Cacioppo, Bassett, & Miller, 1978). An example would be getting someone to agree to buy a car at a discounted price and then removing the discount. The initial decision to buy heightens willingness to buy later when the car is no longer a good deal. |
Ludic fallacy |
Basing studies of chance on the narrow world of games and dice. The Bell curve (Gaussian distribution) is the application of the ludic fallacy to randomness. It is the belief that the unstructured randomness found in life resembles the structured randomness found in games. This stems from the assumption that the unexpected can be predicted by extrapolating from variations in statistics based on past observations, especially when these statistics are assumed to represent samples from a bell curve. These concerns are often highly relevant to financial markets, where major players use Value at Risk (VaR) models (which imply normal distributions) but market return distributions have fat tails. |
Matthew effect |
People take from the poor to give to the rich. An initial advantage follows someone and can compound through life. The effect of reputation mostly developed through luck. Benefitting from past successes. |
Medium-maximization |
Often when people exert effort to obtain a desired outcome, the immediate reward they receive is not the outcome itself, but a medium – an instrument or currency that they can trade for the desired outcome. For example, points in consumer loyalty programs and miles in frequent flyer programs are both examples of such a medium. In decisions involving a medium, individuals often maximize the medium rather than their predicted experience with the ultimate outcomes. |
Memory bias |
Predictions of future experiences are often based on memories of related past experiences, but memory is fallible and introduces systematic biases into evaluations |
Mere exposure effect |
The tendency to express undue liking for things merely because of familiarity with them. |
Misinformation effect |
Memory becoming less accurate because of interference from post-event information. |
Modality effect |
That memory recall is higher for the last items of a list when the list items were received via speech than when they were received through writing. |
Money illusion |
The tendency to concentrate on the nominal value (face value) of money rather than its value in terms of purchasing power. |
Mood-congruent memory bias |
The improved recall of information congruent with one's current mood. |
Moral credential effect |
The tendency of a track record of non-prejudice to increase subsequent prejudice. |
Moral luck |
The tendency for people to ascribe greater or lesser moral standing based on the outcome of an event |
More is better trap |
In this trap we assume that anything can be solved by application of more resources. |
Myopic loss aversion |
We note that the risk attitude of loss-averse investors depends on the frequency with which they reset their reference point, i.e. how often they 'count their money'. We hypothesize that investors have prospect theory preferences (using parameters estimated by Tversky and Kahneman, 1992). We then ask how often people would have to evaluate the changes in their portfolios to make them indifferent between the (US) historical distributions of returns on stocks and bonds? The results of our simulations suggest that the answer is about 13 months. This outcome implies that if the most prominent evaluation period for investors is once a year, the equity premium puzzle is 'solved'. We refer to this behavior as myopic loss aversion. The disparaging term 'myopic' seems appropriate because the frequent evaluations prevent the investor from adopting a strategy that would be preferred over an appropriately long time horizon. Indeed, experimental evidence supports the view that when a long-term horizon is imposed externally, subjects elect more risk. For example, Gneezy and Potters (1997) and Thaler et al. (1997) ran experiments in which subjects make choices between gambles (investments). The manipulations in these experiments are the frequency with which subjects get feedback. For example, in the Thaler et al. study, subjects made investment decisions between stocks and bonds at frequencies that simulated either eight times a year, once a year, or once every five years. The subjects in the two long-term conditions invested roughly two-thirds of their funds in stocks while those in the frequent evaluation condition invested 59% of their assets in bonds. Similarly, Benartzi and I (forthcoming) asked staff members at a university how they would invest their retirement money if they had to choose between two investment funds, A and B, one of which was based on stock returns, the other on bonds. In this case the manipulation was the way in which the returns were displayed. One group examined a chart showing the distribution of one-year rates of return, and the other group was shown the simulated distribution of 30-year rates of return. Those who saw the one-year returns said they would invest a majority of their funds in bonds, whereas those shown the 30-year returns invested 90% of their funds in stocks. |
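A rough way to see the mechanism is to simulate it. The sketch below is not Benartzi and Thaler's actual model: the monthly return figures and riskless bond rate are invented, and only the Tversky-Kahneman (1992) value-function parameters (alpha = 0.88, lambda = 2.25) are taken from the literature. It scores stock and bond returns with a loss-averse value function at different evaluation horizons; as the horizon lengthens, stocks stop looking worse than bonds, though the exact crossover point depends entirely on the assumed parameters.

```python
# Minimal sketch (illustrative parameters): loss-averse evaluation of
# stocks vs. bonds at different evaluation horizons.
import numpy as np

rng = np.random.default_rng(0)
ALPHA, LAMBDA = 0.88, 2.25   # Tversky-Kahneman (1992) value-function parameters

def prospect_value(returns):
    """Prospect-theory value: gains are curved, losses curved and weighted by LAMBDA."""
    returns = np.asarray(returns, dtype=float)
    magnitude = np.abs(returns) ** ALPHA
    return np.where(returns >= 0, magnitude, -LAMBDA * magnitude)

TRIALS = 100_000
STOCK_MEAN, STOCK_SD = 0.010, 0.045   # assumed monthly stock return: 1% mean, 4.5% s.d.
BOND_RETURN = 0.003                   # assumed riskless monthly bond return: 0.3%

for months in (1, 12, 60):
    stock = rng.normal(STOCK_MEAN, STOCK_SD, size=(TRIALS, months)).sum(axis=1)
    bond = BOND_RETURN * months
    print(f"{months:3d}-month horizon: "
          f"stocks feel like {prospect_value(stock).mean():+.4f}, "
          f"bonds feel like {prospect_value([bond]).mean():+.4f}")
```

At a one-month horizon the frequent small losses, weighted by loss aversion, make stocks feel worse than bonds; at multi-year horizons the compounded gains dominate.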
Naïve cynicism |
Expecting more egocentric bias in others than in oneself |
Naïve diversification |
When asked to make several choices at once, people tend to diversify more than when making the same type of decision sequentially. |
Naive realism |
Human beings necessarily think that the world is as they perceive it to be. Can get us into trouble when different people come to that world with different histories, different needs, different biases, different experiences. |
Narrative fallacy |
Associated with our vulnerability to overinterpretation and predilection for compact stories over raw truths. It severely distorts our mental representation of the world; it is particularly acute when it comes to the rare event. The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them. Explanations bind facts together. Where this propensity can go wrong is when it increases our impression of understanding. |
Narrow framing |
Myopic loss aversion is an example of a more general phenomenon that Kahneman and Lovallo (1993) call narrow framing; projects are evaluated one at a time, rather than as part of an overall portfolio. This tendency can lead to an extreme unwillingness to take risks. I observed an interesting illustration of this phenomenon while teaching a group of executives from one firm, each of whom was responsible for managing a separate division. I asked each whether he would be willing to undertake a project for his division if the payoffs were as follows: 50% chance to gain $2 million, 50% chance to lose $1 million. Of the 25 executives, three accepted the gamble. I then asked the CEO, who was also attending the session, how he would like a portfolio of 25 of these investments. He nodded enthusiastically. This story illustrates that the antidote for excessive risk aversion is aggregation, either across time or across different divisions. |
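The arithmetic behind the CEO's enthusiasm can be checked directly: a single 50/50 gamble between +$2 million and -$1 million loses money half the time, but a portfolio of 25 independent copies of it almost never does. A minimal sketch of that calculation (independence of the divisions is an assumption):

```python
# Minimal sketch: one 50/50 gamble (+$2M / -$1M) vs. a portfolio of 25 of them.
from math import comb

N = 25                 # number of independent gambles
GAIN, LOSS = 2.0, 1.0  # payoffs in $ millions

# Portfolio payoff with k wins is 2k - (N - k) = 3k - N, so it loses money when 3k < N.
p_portfolio_loss = sum(comb(N, k) for k in range(N + 1) if 3 * k - N < 0) / 2 ** N
expected_value = N * 0.5 * (GAIN - LOSS)

print("Single gamble: expected value $0.5M, probability of a loss 50%")
print(f"Portfolio of {N}: expected value ${expected_value:.1f}M, "
      f"probability of a loss {p_portfolio_loss:.1%}")
```

Seen one at a time, each gamble looks frightening; seen as a portfolio, the sketch puts the chance of ending behind at roughly 5%, which is why aggregation is the antidote.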
Negativity Bias |
The default, often the status quo, is more likely to be taken as the reference point (Samuelson and Zeckhauser, 1988), and deviations from this are regarded as riskier, less desirable, or simply too much effort. Whether defaults are consciously designed into systems—from interfaces to pension schemes—or “established via historical precedent, perceptual salience, conspicuousness, or some other feature that qualitatively distinguishes one option from the rest” (Frederick, 2002, p.555), their existence can lead to a bias towards accepting them. |
Negativity effect |
The tendency of people, when evaluating the causes of the behaviors of a person they dislike, to attribute their positive behaviors to the environment and their negative behaviors to the person's inherent nature. |
Next-in-line effect |
That a person in a group has diminished recall for the words of others who spoke immediately before himself, if they take turns speaking. |
No Limits Trap |
This trap assumes limitless resources and arenas for action. |
Non Sequitur |
When a conclusion is supported only by extremely weak reasons or by irrelevant reasons, the argument is fallacious and is said to be a non sequitur. However, we usually apply the term only when we cannot think of how to label the argument with a more specific fallacy name. Any deductively invalid inference is a non sequitur if it is also very weak when assessed by inductive standards. |
Normalcy bias |
The refusal to plan for, or react to, a disaster which has never happened before. |
Not averaging |
Simple math dictates that when people are attuned to reality at all, averaging their judgments will yield an estimate that is more accurate than its individual components are on average. Individual people can take advantage of the averaging principle by producing more than one estimate of whatever it is they are estimating. |
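That "simple math" can be demonstrated with a small simulation (a sketch, not from the source; the true value, noise level, and crowd size are arbitrary assumptions, and the second guess is treated as fully independent, which overstates the gain from self-averaging in real life):

```python
# Minimal sketch: error of the averaged judgment vs. the average individual error.
import numpy as np

rng = np.random.default_rng(1)
TRUE_VALUE = 100.0
N_JUDGES = 500

estimates = TRUE_VALUE + rng.normal(0, 20, N_JUDGES)   # unbiased but noisy judgments

avg_individual_error = np.abs(estimates - TRUE_VALUE).mean()
error_of_average = abs(estimates.mean() - TRUE_VALUE)

# "Inner crowd": each judge makes a second guess and averages it with the first.
second_guesses = TRUE_VALUE + rng.normal(0, 20, N_JUDGES)
inner_crowd_error = np.abs((estimates + second_guesses) / 2 - TRUE_VALUE).mean()

print(f"Average individual error:        {avg_individual_error:.2f}")
print(f"Error of the group average:      {error_of_average:.2f}")
print(f"Average error of two-guess mean: {inner_crowd_error:.2f}")
```

Averaging cancels independent errors, so the group average lands far closer to the truth than a typical individual, and even averaging two of one's own guesses helps.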
Not invented here |
Aversion to contact with or use of products, research, standards, or knowledge developed outside a group. Related to IKEA effect. |
Not thinking statistically |
People make all kinds of errors because they can't think statistically and don't realize the need for a control group. Nisbett: Not using the scientific method and statistical inference to address problems. We tend to understand things that are countable and the law of large numbers when it applies to abilities - like any given Sunday any team can win but that "class will tell" over time for NFL teams - but we are not good at applying these concepts to personality traits. When you meet someone you see them as a fuzzy hologram of the person. Yet you are just seeing a very small slice of behavior of a very large population. |
Not using the unconscious |
You have a slave that's working for you all the time. That's your unconscious, and we don't make nearly as much use of this as we could. If you have a problem (a writer needs to write something), you need to sit down and actually think about what you are going to do. (Mathematicians solving problems at a barbeque or whatever.) If you do, it's been handed over to the unconscious, and the unconscious is working on it 24 hours a day, no matter what you are doing. |
Observational selection bias |
This is that effect of suddenly noticing things we didn't notice that much before — but we wrongly assume that the frequency has increased. It’s a passive experience, where our brains seek out information that’s related to us, but we believe there’s been an actual increase in the frequency of those things. It's also a cognitive bias that contributes to the feeling that the appearance of certain things or events couldn't possibly be a coincidence (even though it is). |
Observer effects |
When experts approach a task like assessing DNA evidence, just as any other human beings, they can be influenced by what they expect to see or to some extent by what they desire to see. People who expect to see something and are highly motivated to see that thing are more likely to see it. They're more likely to interpret ambiguous stimuli in a manner that's consistent with what they think or want to see. We all do this. Most of the time our use of expectations to help us interpret stimuli is very helpful because most of the time our expectations are correct, but sometimes they aren't correct. The problem for a forensic expert is how to prevent this process of what's sometimes called Observer Effects, the tendency to see what one expects or desires to see, how to prevent that from coloring one's interpretation of the evidence in ways that undermine the quality of the evidence that's going to be presented to the jury. I think the best way to do that is to try to minimize the amount of contextual information that the expert receives. |
Observer-expectancy effect |
A form of reactivity in which a researcher's cognitive bias causes them to unconsciously influence the participants of an experiment. |
Omission bias |
The tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions). |
Opportunity costs |
People don't recognize that when they do anything they are paying an opportunity cost for it, because they could have been doing something else instead. They don't assess ahead of time, do I want to pay that cost and do this or do I want to do some other thing? |
Opposition |
Being opposed to someone's reasoning because of who they are, usually because of what group they are associated with. See the Fallacy of Guilt by Association. |
Optimism bias |
The tendency to be over-optimistic, overestimating favorable and pleasing outcomes (see also wishful thinking, valence effect, positive outcome bias). |
Order effect |
The effect on performance attributable to the order in which treatments were administered: participants perform differently because of the order in which they receive the treatment. A response order effect occurs when the distribution of responses to a closed-ended survey question is influenced by the order in which the response options are offered to respondents. Primacy and recency effects are two common types of response order effects. Primacy effects occur when response options are more likely to be chosen when presented at the beginning of a list of response options than when presented at the end. In contrast, recency effects occur when response options are more likely to be chosen when presented at the end of a list of response options than when presented at the beginning of the list. |
Ostrich effect |
Ignoring an obvious (negative) situation. |
Outcome bias |
The tendency to judge a decision by its eventual outcome instead of based on the quality of the decision at the time it was made. |
Outgroup homogeneity bias |
Individuals see members of their own group as being relatively more varied than members of other groups. |
Overconfidence effect |
Excessive confidence in one's own answers to questions. For example, for certain types of questions, answers that people rate as "99% certain" turn out to be wrong 40% of the time. |
Paradox of choice |
People are worried they’ll regret the choice they made. A lack of options actually helps consumers to avoid choice overload; by limiting the number of options upfront, you will save up more willpower to make a better decision. Barry Schwartz: "When people have no choice, life is almost unbearable. As the number of available choices increases, as it has in our consumer culture, the autonomy, control, and liberation this variety brings are powerful and positive. But as the number of choices keeps growing, negative aspects of having a multitude of options begin to appear. As the number of choices grows further, the negatives escalate until we become overloaded. At this point, choice no longer liberates, but debilitates." |
Pareidolia |
A psychological phenomenon involving a vague and random stimulus (often an image or sound) being perceived as significant. Common examples include seeing images of animals or faces in clouds, the man in the moon or the Moon rabbit, and hearing hidden messages on records when played in reverse. |
Part-list cueing effect |
That being shown some items from a list and later retrieving one item causes it to become harder to retrieve the other items. |
Pattern recognition |
See pareidolia. |
Peak–end rule |
That people seem to perceive not the sum of an experience but the average of how it was at its peak (e.g. pleasant or unpleasant) and how it ended. |
Persistence of commitment |
See sunk-cost fallacy. |
Pessimism bias |
The tendency for some people, especially those suffering from depression, to overestimate the likelihood of negative things happening to them. |
Picture superiority effect |
The notion that concepts that are learned by viewing pictures are more easily and frequently recalled than are concepts that are learned by viewing their written word form counterparts. |
Placebo effect |
The widely cited statistic is that about 30 or 33 percent of people who get a placebo get better. A placebo can be anything. If the treatment is a drug, a placebo can be something that looks like the drug but doesn't contain the active ingredient. If the treatment consists of the application of a machine like a hearing aid, a placebo would be something that looks like that hearing aid...but not turned on. As for having a physiological reaction: the placebo effect is based on the patient's report of feeling better, and that's to be distinguished from a physiological reaction. Although there are occasional isolated reports, there are really not good studies that have stood up to replication showing that a placebo could have a physiological effect in terms of producing tumor regression, for example. Rather, placebo effects are seen primarily in two areas: analgesia—that is, pain relief—the patient reports that they're in less pain; and depression—the patient reports less depression. These are all kinds of private symptoms. They're only available to the patient. They're not available to anybody else outside the patient, and so the patient has to report all of them. In terms of public symptoms—blood pressure, asthma symptoms, and things like that—placebo effects are much less pronounced. |
Planning fallacy |
The tendency to underestimate task-completion times. A "manifestation of optimism." "The probability of a rare event is most likely to be overestimated when the alternative is not fully specified." Research on NBA teams by Craig Fox. "The successful execution of a plan is specific and it's easy to imagine when one tries to forecast the outcome of a project. In contrast, the alternative of failure is diffuse, because there are innumerable ways for things to go wrong. Entrepreneurs and the investors who evaluate their prospects are prone both to overestimate their chances and to overweight their estimates." |
Platonicity error |
Thinking that the Platonic form you have in your mind is the one you are observing out the window. See Inverse Problem. |
Positive expectation bias |
It's the sense that our luck has to eventually change and that good fortune is on the way. It also contributes to the "hot hand" misconception. Similarly, it's the same feeling we get when we start a new relationship that leads us to believe it will be better than the last one. |
Positivity effect |
That older adults favor positive over negative information in their memories. |
Post hoc interpretation |
Your mind is expert at creating stories to explain results after the fact. Indeed, there's a module in the left hemisphere of your brain that's so good at post-hoc answers that neuroscientists call it "the interpreter." The interpreter doesn't care about the process that led to the result, it just seeks to close the loop between cause and effect. You see an effect, come up with a plausible cause and all's good. Here's the key: the interpreter is really bad at knowing how much of the result is caused by skill and how much is from luck. But in order to make good decisions, you have to know the difference. (also called the "false cause fallacy," the "post-hoc fallacy" or "Post hoc, ergo propter hoc," “after this, therefore because of this”) |
Post purchase rationalization |
See cognitive dissonance avoidance and CHOICE-SUPPORTIVE BIAS (also called "post purchase rationalization," "rosy retrospection", and "Buyer's Stockholm syndrome") |
Power |
The researchers point to a fundamental truth in the world of business: unconstrained power can hinder decision-making. It is a truism that can be extended to political leaders as well. According to Nathanael Fast: "The aim of this research was to help power holders become conscious of one of the pitfalls leaders often fall prey to. The overall sense of control that comes with power tends to make people feel overconfident in their ability to make good decisions. What we found across the studies is that power leads to over-precision, which is the tendency to overestimate the accuracy of personal knowledge." The study found that subjects who were primed to feel powerful actually lost money betting on their knowledge while those who did not feel powerful made less risky bets and did not lose money. According to Nathanael Fast: "This was one piece of the puzzle, the idea that a subjective feeling of power leads to over-precision." The research team hypothesized that overconfidence among high-power individuals could be limited through blocking their subjective sense of power by directing attention to the limits of their personal competence. Yet again, the 'powerful' subjects lost more money but participants who had been led to doubt their own competence did not. Put another way, when subjects felt subjectively powerful they were at their most vulnerable to overconfident decision-making. Nathanael Fast considers that the best decision-makers can find ways to avoid this vulnerability: "The most effective leaders bring people around them who critique them. As a power holder, the smartest thing you might ever do is bring people together who will inspect your thinking and who aren't afraid to challenge your ideas." But, ironically, the study shows that the more powerful they become, the less help leaders think they need. Adam Galinsky concluded: "Power is an elixir, a self-esteem enhancing drug that surges through the brain telling you how great your ideas are. This leaves the powerful vulnerable to making overconfident decisions that lead them to dead-end alleys." |
Prediction with limited experience and information |
Job interviews and limited information gained from personal experience are non-predictive selection procedures for predicting performance. Confirmation bias based on a resume? Using a one-trial task - like a GRE exam - to predict future performance is much worse than using long-term measures of behavior (GPA, for example). |
Preferential attachment |
Power laws. The big get bigger and the small stay small, or get relatively smaller. (English as lingua franca.) |
Present bias |
CURRENT MOMENT BIAS (also called "present bias" and "hyperbolic discounting") |
Primacy effects |
The earlier a piece of information is presented, the more influential it is. This refers to the process by which early information colours our perception of subsequent information. The commonsense notion that first impressions are the most compelling is not always correct. First impressions may count most because subsequent information is more difficult to absorb—although recent information may be remembered most clearly. See also RECENCY EFFECT and order effect. |
Priming effects |
Most of what we are thinking we are unconscious of. Trivial things affect us. Ask someone to read something and introduce a fishy smell into the room and the person will think something's fishy here and not be as persuaded by it. (Does not work in countries that don't have the "something's fishy here" metaphor). |
Primus inter pares effect |
INABILITY TO SELF ASSESS (also called "illusory superiority," "above average effect," "superiority bias," "leniency error," "sense of relative superiority," "primus inter pares effect," "Dunning–Kruger effect" and "Lake Wobegon effect") |
Pro rata bias |
a subtle variation on the sunk-cost fallacy that Baliga calls the “pro rata bias.” This bias is a similarly “irrational” tendency to figure sunk costs into current and future decisions, with the imagined goal of amortizing those past sunk costs with inflated variable costs in the present. “Let’s say you’re a pharmaceutical company who spends a lot of money on R&D,” Baliga explains. “Now let’s also say that a new drug discovered by that R&D process costs pennies to produce. That R&D money is gone and not coming back, but because of the pro rata bias, my students want to charge $5 per pill because of those sunk costs.” |
Probability matching |
Even when told which choices will definitely bring the best results, people want to test the odds. Often, a fact really is a fact. “This [finding] indicates that more optimal choices can be eventually attained over a number of trials without actually telling people that they should choose the more likely outcome on every trial. However, in real life, how many bad decisions does that entail?” When humans are presented with identical choices, each associated with constant payoff likelihood, they tend to match their choices to the arranged probabilities instead of maximizing their payoffs by always choosing the outcome with the higher likelihood of reward. These deviations occur because humans are often forced to make quick judgments based on scant information, and because the judgments which are most adaptive or rapid are not always the most correct. |
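A minimal sketch of the cost of matching (the 70/30 outcome is an assumed example, not from the source): always predicting the more likely outcome is right about 70% of the time, while matching one's predictions to the observed frequencies is right only about 58% of the time.

```python
# Minimal sketch: probability matching vs. maximizing on a 70/30 outcome.
import numpy as np

rng = np.random.default_rng(2)
P_A = 0.7
TRIALS = 100_000

outcomes = rng.random(TRIALS) < P_A      # True means outcome "A" occurred

maximize = np.ones(TRIALS, dtype=bool)   # always predict "A"
matching = rng.random(TRIALS) < P_A      # predict "A" on roughly 70% of trials

print(f"Maximizing accuracy: {(maximize == outcomes).mean():.3f}")  # about 0.70
print(f"Matching accuracy:   {(matching == outcomes).mean():.3f}")  # about 0.70*0.70 + 0.30*0.30 = 0.58
```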
Probability neglect |
Our inability to grasp a proper sense of peril and risk — which often leads us to overstate the risks of relatively harmless activities and to underrate more dangerous ones. |
Problem of induction |
How can we logically go from specific instances to reach general conclusions? How do we know what we know? How do we know what we have observed from given objects and events suffices to enable us to figure out their other properties? There are traps built into any kind of knowledge gained from observation. Turkey fed by friendly humans every single day; every feeding firms up belief that friendly humans are looking out for its best interests. The day before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief. Similar to what happened to German Jews in the 1930s. |
Process-event trap |
This trap leads us into the error of thinking in terms of object-like "events" where we would do better to think in terms of processes. |
Pro-innovation bias |
The tendency to have an excessive optimism towards an invention or innovation's usefulness throughout society, while often failing to identify its limitations and weaknesses. |
Projection bias |
it's often difficult for us to project outside the bounds of our own consciousness and preferences. We tend to assume that most people think just like us — though there may be no justification for it. |
Projection bias |
People making predictions and people experiencing are often in different visceral (arousal) states. For example, predictors might be rested, satiated or sexually unaroused, whereas experiencers might be tired, hungry or aroused (or vice versa). When predictors in one visceral state make predictions about experience in another state, they project their own state into their predictions, as if the experiencers were also in that state. Projection bias occurs not only when experiencers are others but also when experiencers are predictors themselves. |
Prosecutor's fallacy |
A fallacy of statistical reasoning, typically used by the prosecution to argue for the guilt of a defendant during a criminal trial. It involves assuming that the prior probability of a random match equals the probability that the defendant is guilty. For instance, if a perpetrator is known to have the same blood type as a defendant and 10% of the population share that blood type, then to argue on that basis alone that the probability of the defendant being guilty is 90% is to commit the prosecutor's fallacy. The fallacy can arise from multiple testing, such as when evidence is compared against a large database: the size of the database elevates the likelihood of finding a match by pure chance alone. DNA evidence is soundest when a match is found after a single directed comparison, because matches against a large database, especially where the test sample is of poor quality, may arise by mere chance. |
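The blood-type example can be made concrete with Bayes' rule. In the hypothetical sketch below, the perpetrator is assumed to be one of 10,000 people who could have committed the crime; that pool size is an invented number, and it is exactly what the "90% guilty" argument silently ignores.

```python
# Minimal sketch: why a 10% random-match frequency does not mean "90% guilty".
PRIOR_POOL = 10_000          # assumed number of people who could be the perpetrator
P_MATCH_IF_GUILTY = 1.0      # the guilty person certainly has the matching blood type
P_MATCH_IF_INNOCENT = 0.10   # 10% of the population shares the blood type

p_guilty = 1 / PRIOR_POOL
p_innocent = 1 - p_guilty

posterior = (P_MATCH_IF_GUILTY * p_guilty) / (
    P_MATCH_IF_GUILTY * p_guilty + P_MATCH_IF_INNOCENT * p_innocent
)
print(f"P(guilty | blood type matches) = {posterior:.4f}")   # about 0.001, not 0.90
```

Roughly a thousand people in the assumed pool share the blood type and the defendant is just one of them, so the match alone says almost nothing without other evidence.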
Pseudocertainty effect |
The tendency to make risk-averse choices if the expected outcome is positive, but make risk-seeking choices to avoid negative outcomes. (also called the "illusion of certainty") |
Random outcomes contagion |
A person gets slightly ahead because of a random outcome and then people flock to him or his technique. |
Reactance |
The urge to do the opposite of what someone wants you to do out of a need to resist a perceived attempt to constrain your freedom of choice (see also reverse psychology). |
Reactive devaluation |
Devaluing proposals only because they purportedly originated with an adversary. |
Recency effects |
The more recently a piece of information is presented, the more influential it is. A recency effect is the tendency for individuals to be most influenced by what they have last seen or heard, because people tend to retain the most complete knowledge about the most recent events. However, under certain circumstances, primacy effects prevail and sometimes the first rather than the last event will be the most influential. |
Reciprocation |
People feel indebted—obliged to reciprocate in some way—when someone appears to do them a favour, even if they did not ask for the favour in the first place. Cialdini (2007, p.22-24) discusses the Hare Krishna fundraising tactic of pressing ‘gifts’ such as a book or a flower into the hands of passersby, with the aim of provoking a reciprocal response such as a donation. |
Reframing |
See framing. |
Regression |
This fallacy occurs when regression to the mean is mistaken for a sign of a causal connection. Also called the Regressive Fallacy. It is a kind of false cause fallacy. |
Regression toward the mean |
See reversion to the mean. |
Regret aversion |
Of all the situations and feelings that motivate people to take action, regret is one of the most powerful. People don’t like to feel regret and will do a lot to avoid it. That’s not surprising, but you might be surprised to find out that the more opportunity people feel they have, the more regret they tend to feel about a situation. The more people feel that they could have done something differently, the more regret they feel about their action or decision. If they feel that they had no choice in their decision or action, then they feel less regret. Related to this is the idea of whether there is a clear corrective action that could have been taken. If people feel they had a choice, and if they feel they had a clear, corrective action, and yet they don’t take that action, that is when they feel the most regret. Regret Inspires Action — Because people don’t like feeling regret, and because they feel the most regret about things they can fix, regret is actually a motivator for action. If people feel regret, then that’s when they are highly likely to take action. And people will often take an action to avoid regret before it happens. |
Relativity trap |
See anchoring effect. |
Representativeness heuristic |
Comparing our current situation to our prototype of a particular event or behavior. |
Restraint bias |
The tendency to overestimate one's ability to show restraint in the face of temptation. |
Retrospective distortion |
Examining past events without adjusting for the forward passage of time. It leads to the illusion of posterior predictability. |
Reversing causation |
Drawing an improper conclusion about causation due to a causal assumption that reverses cause and effect. A kind of false cause fallacy. |
Reversion to the mean |
Systems that involve significant amounts of luck, such as investing for many people, revert to the mean for the group over time. Reversion to the mean occurs when an outcome that is extreme is followed by one that is closer to the average. Take six people at random and ask them to run a race of the same distance at two different times. Chances are good that the same person will win -- skill dominates. But if you buy last year's hot mutual fund, this year's results are more likely to be closer to the average than another sizzling gain. Tom Gilovich: "You see that in the Sports Illustrated jinx - you are on the cover when you are at the top and, unfortunately, we can't stay at the peak by definition, endlessly. Therefore, if that's when you're pictured, chances are, shortly afterwards, you're not going to be doing as well." |
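A small simulation makes the point about luck-heavy systems (a sketch with invented numbers, not from the source): when outcomes are mostly luck, this period's top performers score much closer to average next period, while a skill-dominated outcome keeps roughly the same winners.

```python
# Minimal sketch: regression to the mean when outcomes are mostly luck.
import numpy as np

rng = np.random.default_rng(3)
N = 10_000
skill = rng.normal(0, 1, N)                  # stable ability, the same in both periods

def run_period(luck_weight):
    """One period's outcome: stable skill plus freshly drawn luck."""
    return skill + luck_weight * rng.normal(0, 1, N)

for luck_weight, label in [(0.2, "skill-dominated"), (3.0, "luck-dominated")]:
    period1 = run_period(luck_weight)
    period2 = run_period(luck_weight)
    top = period1 >= np.quantile(period1, 0.99)     # period-1 top 1%
    print(f"{label}: top 1% scored {period1[top].mean():+.2f} in period 1, "
          f"{period2[top].mean():+.2f} in period 2 (overall mean is about 0)")
```

In the skill-dominated case the period-1 winners stay near the top; in the luck-dominated case their second-period scores collapse most of the way back toward the average, which is the "Sports Illustrated jinx" in miniature.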
Rewards |
When B.F. Skinner researched rewards he didn’t call them rewards. He called them “reinforcers”. In his research an effective reinforcer is anything that, when you give it, results in an increase in the desired behavior. Which means that what is an effective reward depends on what a particular person feels is an effective reward. The list of possible rewards or reinforcers is infinite. What is it that a person might want? Here are some common reinforcers: money, discounts, food, sex, attention, praise, love, fun. And on and on. In order to pick an effective reward, ideally you know your audience and you know what they want. If you haven’t done that research, then you will be using trial and error to figure out what an effective reward is for your audience. I suggest you do some research ahead of time so you know what to use as a reward for your particular audience and situation. |
Rhyme as reason effect |
Rhyming statements are perceived as more truthful. A famous example being used in the O.J. Simpson trial with the defense's use of the phrase "If the gloves don't fit, then you must acquit." |
Risk averse |
Daniel Kahneman: Prospect theory. Top left and bottom right cells of the fourfold pattern of preferences. A sure thing for a lesser gain - accepting an unfavorable settlement - is preferred to the high probability of a greater gain because of the fear of disappointment. A sure thing for a smaller loss - accepting an unfavorable settlement - is preferred to the low probability of a greater loss because of fear of a large loss. |
Risk Blindness |
Daniel Kahneman has given us evidence that we generally take risks not out of bravado but out of ignorance and blindness to probability! ...we tend to dismiss outliers and adverse outcomes when projecting the future. |
Risk compensation / Peltzman effect |
The tendency to take greater risks when perceived safety increases. |
Risk seeking |
Daniel Kahneman: Prospect theory. Bottom left and top right cells of the fourfold pattern of preferences. A small chance of a large gain is preferred to a sure thing for a smaller gain - rejecting a favorable settlement - because of the hope of a large gain. A large chance of a large loss is preferred to a sure thing of a smaller loss - rejecting a favorable settlement - because of hope to avoid a loss. |
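The fourfold pattern described in this and the previous entry can be reproduced from the prospect-theory value and probability-weighting functions. The sketch below uses the Tversky-Kahneman (1992) parameter estimates (alpha = 0.88, lambda = 2.25, weighting curvature 0.61 for gains and 0.69 for losses) and the illustrative gambles are a 95% or 5% chance of gaining or losing 100; it compares each gamble's certainty equivalent, the sure amount that feels equivalent to it, with its expected value.

```python
# Minimal sketch: the fourfold pattern of risk attitudes from prospect theory.
ALPHA, LAMBDA = 0.88, 2.25
GAMMA_GAIN, GAMMA_LOSS = 0.61, 0.69   # probability-weighting curvature (TK 1992)

def weight(p, gamma):
    """Tversky-Kahneman probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def certainty_equivalent(p, outcome):
    """Sure amount that feels equivalent to a p chance of `outcome` (else nothing)."""
    if outcome >= 0:
        return weight(p, GAMMA_GAIN) ** (1 / ALPHA) * outcome
    return -(weight(p, GAMMA_LOSS) ** (1 / ALPHA)) * (-outcome)   # LAMBDA cancels out

for p, outcome in [(0.95, 100), (0.05, 100), (0.95, -100), (0.05, -100)]:
    ce, ev = certainty_equivalent(p, outcome), p * outcome
    attitude = "risk averse" if ce < ev else "risk seeking"
    print(f"{p:.0%} chance of {outcome:+d}: expected value {ev:+.1f}, "
          f"feels like a sure {ce:+.1f} -> {attitude}")
```

Running the sketch reproduces the pattern: risk aversion for high-probability gains and low-probability losses (insurance), risk seeking for low-probability gains (lotteries) and high-probability losses (rejecting a favorable settlement).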
Rosy retrospection |
The remembering of the past as having been better than it really was. A form of choice supportive bias (also called "post purchase rationalization," "rosy retrospection", and "Buyer's Stockholm syndrome.") |
Round trip fallacy |
Nassim Nicholas Taleb: "Confusing non-interchangeable statements, e.g. 'there is evidence of no possible Black Swans' with 'there is proof of no Black Swans.' Unless we concentrate very hard, we are likely to unwittingly simplify the problem because our minds routinely do so without our knowing it. Many people confuse the statement 'almost all terrorists are Moslems' with 'almost all Moslems are terrorists.' Assume that the first statement is true, that 99 percent of terrorists are Moslems. This would mean that only about .001 percent of Moslems are terrorists, since there are more than one billion Moslems and only, say, ten thousand terrorists, one in a hundred thousand. So the logical mistake makes you (unconsciously) overestimate the odds of a randomly drawn individual Moslem person (between the age of, say, fifteen and fifty) being a terrorist by close to fifty thousand times!" |
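The arithmetic in Taleb's example is easy to check; the sketch below simply restates his round numbers, with the 500 million figure an assumed size for the fifteen-to-fifty age group he restricts the comparison to.

```python
# Minimal sketch: P(Moslem | terrorist) vs. P(terrorist | Moslem), using Taleb's round numbers.
TERRORISTS = 10_000
MOSLEMS_AGE_15_TO_50 = 500_000_000     # assumed subpopulation out of more than one billion
P_MOSLEM_GIVEN_TERRORIST = 0.99

moslem_terrorists = P_MOSLEM_GIVEN_TERRORIST * TERRORISTS
p_terrorist_given_moslem = moslem_terrorists / MOSLEMS_AGE_15_TO_50

print(f"P(Moslem | terrorist) = {P_MOSLEM_GIVEN_TERRORIST:.0%}")
print(f"P(terrorist | Moslem) = {p_terrorist_given_moslem:.6%}")
print(f"Confusing the two overstates the risk by a factor of about "
      f"{P_MOSLEM_GIVEN_TERRORIST / p_terrorist_given_moslem:,.0f}")
```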
Rule-based decisions |
Decision-makers sometimes base their choices on rules for 'good behavior' rather than predicted experience. Examples of such decision rules include 'seek variety', 'don’t waste', and 'don't pay for delays' (Amir and Ariely, unpublished). These rules might prevent decision-makers from choosing what they predict will produce the best experience. |
Salience biases |
These biases derive from findings that “colorful, dynamic, or other distinctive stimuli disproportionately engage attention and accordingly disproportionately affect judgments” (Taylor, 1982, p.192). This is perhaps intuitively true to designers, at least in terms of engaging attention. Explicit applications of colours for this purpose are numerous, but other dynamic stimuli have also been used, such as Ju and Sirkin’s (2010) study using an information kiosk with a physical waving hand, gesturing to passers-by to interact with the kiosk. The waving hand influenced twice as many people to interact with the kiosk as did an on-screen animation of a waving hand. While it would be possible to see a novelty effect at work here, the technique was effective in the intended context. |
Sample bias |
See selection bias. |
Scandal of prediction |
Why on earth do we predict so much? Worse, even, and more interesting: Why don't we talk about our record in predicting? Why don't we see how we (almost) always miss the big events? |
Scarcity |
The scarcity principle suggests tha “opportunities seem more valuable to us when their availability is limited” (Cialdini, 2007, p.238). Whether scarcity is real or not in a situation, if it is perceived to be, people may value something more, and so alter their behaviour in response |
Scarcity heuristic |
Most skiers are familiar with the “powder fever” that seizes the public after a long-awaited snowstorm. Intent on getting first tracks down a favorite run, hordes of skiers flock to the lifts and the backcountry, often throwing caution to the wind as they compete with each other to consume the powder that is untracked for a limited time only. While this phenomenon is largely fueled by people’s enjoyment of powder skiing, it probably has deeper roots in our attitudes about personal freedom. A substantial body of research suggests that people react strongly, at times even aggressively, to any perceived restrictions to prerogatives they feel they are entitled to, regardless of whether or not they intend to exercise those prerogatives (see Pratkanis and Aronson, 2000, or Cialdini, 2001 for reviews). This principle, called psychological reactance, emerges at about the age of two and pervades the fabric of our social environment. In our everyday decision making, psychological reactance manifests itself as the scarcity heuristic: we tend to distort the value of opportunities we perceive as limited and to compete with others to obtain them. |
Scope neglect |
A cognitive bias that occurs when the valuation of a problem does not scale multiplicatively with its size. Scope neglect is a specific form of extension neglect. In one study, respondents were asked how much they were willing to pay to prevent migrating birds from drowning in uncovered oil ponds by covering the oil ponds with protective nets. Subjects were told that either 2,000, or 20,000, or 200,000 migrating birds were affected annually, for which subjects reported they were willing to pay $80, $78 and $88 respectively. Other studies of willingness-to-pay to prevent harm have found a logarithmic relationship or no relationship to scope size. Daniel Kahneman explains scope neglect in terms of judgment by prototype, a refinement of the representativeness heuristic. "The story probably evokes for many readers a mental representation of a prototypical incident, perhaps an image of an exhausted bird, its feathers soaked in black oil, unable to escape," and subjects based their willingness-to-pay mostly on that mental image. |
Selection bias |
Having a selected sample of people or events be non-representative of a population of people or events and believing the sample to be valid when it is not and therefore drawing conclusions that are not true. |
Selection factors |
We confuse selection factors with results Professional swimmers don’t have perfect bodies because they train extensively. Rather, they are good swimmers because of their physiques. How their bodies are designed is a factor for selection and not the result of their activities. Another good example is top performing universities: are they actually the best schools, or do they choose the best students, who do well regardless of the school’s influence? |
Selective perception |
The tendency for expectations to affect perception. |
Self Deception |
Normal, healthy people tend to have an unrealistically positive self image, to exaggerate or overestimate the control they have over their lives, and to be unreasonably optimistic. “Illusions, distortions, and self-deception appear to be integral to the way normal, well-adjusted people perceive the world,” says Roy Baumeister in the Meanings of Life. “Seeing things as they really are is associated with depression and low self-esteem.” Where self-deception fails, there are other routes to escape the painful self. Elsewhere, Baumeister argues that behaviors such as alcoholism, binge eating, sexual masochism, charismatic religion, spirituality, and even suicide function as escapes from the overburdened, embarrassed, shameful modern self. |
Self reference problem |
If we need data to obtain a probability distribution to gauge knowledge about the future behavior of the distribution from its past results, and if, at the same time, we need a probability distribution to gauge data sufficiency and whether it is not predictive of the future, then we face a severe regress loop. This is a problem of self reference...since a probability distribution is used to assess the degree of truth but cannot rely on its own degree of truth and validity. |
Self-relevance effect |
That memories relating to the self are better recalled than similar information relating to others. |
Self-serving bias |
The tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests (see also group-serving bias). |
Semmelweis reflex |
The tendency to reject new evidence that contradicts a paradigm. |
Sense of relative superiority |
Inability to self assess (also called "illusory superiority," "above average effect," "superiority bias," "leniency error," "sense of relative superiority," "primus inter pares effect," "Dunning–Kruger effect" and "Lake Wobegon effect.") |
Serial position effects |
Biases where a person overweighs or underweighs evidence based on the order in which it is presented. As Baron (1994, p.283) notes, “[w]hen the order in which we encounter two pieces of evidence is not itself informative, the order of the two pieces of evidence should have no effect on our final strength of belief. Or, put more crudely, ‘When the order doesn’t matter, the order shouldn’t matter.’” Nevertheless, order can matter. |
Shared information bias |
Known as the tendency for group members to spend more time and energy discussing information that all members are already familiar with (i.e., shared information), and less time and energy discussing information that only some members are aware of (i.e., unshared information). |
Similarity matching |
See categorization. |
Single effect trap |
In this trap we think that we can cause a single effect with no "side-effects." |
Sleep deprivation |
leads to a higher rate of perception errors and bad decisions. |
Social comparison bias |
The tendency, when making hiring decisions, to favour potential candidates who don't compete with one's own particular strengths. |
Social desirability bias |
The tendency to over-report socially desirable characteristics or behaviors in oneself and under-report socially undesirable characteristics or behaviors. |
Social proof heuristic |
The social proof heuristic is the tendency to believe that a behavior is correct to the extent that other people are engaged in it. Cialdini (2001) provides a comprehensive review of research supporting the idea that others’ behavior and even mere presence has a powerful influence on our decisions. In general, we rely on the social proof heuristic most when we are uncertain and when others similar to ourselves are engaged in an activity. Tremper (2001) considers this heuristic to be one of the major causes of avalanche accidents. |
Solve it by redefining it trap |
This could be called the Definition Can Do It Trap in that it attempts to solve problems by redefinition alone. |
Sorites fallacy |
If we improperly reject a vague claim because it is not as precise as we'd like, then we are committing the line-drawing fallacy. Being somewhat vague is not the same as being hopelessly vague. Also called the Bald Man Fallacy, the Fallacy of the Heap and the Sorites Fallacy. |
Source confusion |
Confusing episodic memories with other information, creating distorted memories. |
Spacing effect |
Information is better recalled if exposure to it is repeated over a long span of time rather than a short one. |
Spotlight effect |
The tendency to overestimate the amount that other people notice your appearance or behavior. |
Statistical regress argument |
Nassim Nicholas Taleb explains, "Say you need past data to discover whether a probability distribution is Gaussian, fractal, or something else. You will need to establish whether you have enough data to back up your claim. How do we know if we have enough data? From the probability distribution - a distribution does tell you whether you have enough data to "build confidence" about what you are inferring. If it is a Gaussian bell curve, then a few points will suffice (the law of large numbers...). And how do you know if the distribution is Gaussian? Well, from the data. So we need the data to tell us what the probability distribution is, and a probability distribution to tell us how much data we need. This causes a severe regress argument. This regress does not occur if you assume beforehand that the distribution is Gaussian. It happens that, for some reason, the Gaussian yields its properties rather easily. ...assuming its application beforehand may work with a small number of fields such as crime statistics, mortality rates [etc.]. But not for historical data of unknown attributes or in Extremistan [situations in which inequalities are such that one observation can disproportionately impact the aggregate, or the total - 1,000 people including Bill Gates: he would represent 99.9 percent of the wealth of the group]. We need data to discover a probability distribution. How do we know we have enough data? We need a probability distribution to tell how much data is enough! This causes a severe regress argument, which is somewhat shamelessly circumvented by resorting to the Gaussian and its kin. Also known as the problem of 'circularity of statistics.'" |
Status-quo bias |
We humans tend to be apprehensive of change, which often leads us to make choices that guarantee that things remain the same, or change as little as possible. Needless to say, this has ramifications in everything from politics to economics. We like to stick to our routines, political parties, and our favorite meals at restaurants. Part of the perniciousness of this bias is the unwarranted assumption that another choice will be inferior or make things worse. |
Stereotypes |
The human mind is so wedded to stereotypes and so distracted by vivid descriptions that it will seize upon them, even when they defy logic, rather than upon truly relevant facts. Here’s an example to illustrate the mistake, from researchers Daniel Kahneman and Amos Tversky: In 1983 Kahneman and Tversky tested how illogical human thinking is by describing the following imaginary person: Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations. The researchers asked people to read this description, and then asked them to answer this question: Which alternative is more probable? Linda is a bank teller. Linda is a bank teller and is active in the feminist movement. Here’s where it can get a bit tricky to understand (at least, it did for me!)—If answer #2 is true, #1 is also true. This means that #2 cannot be more probable than #1. Unfortunately, few of us realize this, because we’re so overcome by the more detailed description of #2. Plus, as the earlier quote pointed out, stereotypes are so deeply ingrained in our minds that we subconsciously apply them to others. Roughly 85% of people chose option #2 as the answer. A simple choice of words can change everything. Again, we see here how irrational and illogical we can be, even when the facts are seemingly obvious. |
Stimulated limbic system |
Leads to a higher rate of perception errors and bad decisions. |
Subadditivity effect |
The tendency to judge probability of the whole to be less than the probabilities of the parts. |
Subject-expectancy effect |
A form of reactivity that occurs in scientific experiments or medical treatments when a research subject or patient expects a given result and therefore unconsciously affects the outcome, or reports the expected result. |
Subjective probability |
Different people can, while being rational, assign different probabilities to different states of the world. |
Subjective validation |
Perception that something is true if a subject's belief demands it to be true. Also assigns perceived connections between coincidences. |
Suggestibility |
A form of misattribution where ideas suggested by a questioner are mistaken for memory. |
Sunk Cost Fallacy |
The sunk-cost effect is the tendency to persist in achieving a goal due to already committed expenditure, even when the prognosis is poor. One explanation is “self-justification”: according to this view, we persist in a failing action because we are justifying our previous decision. |
Superiority bias |
Inability to self assess (also called "illusory superiority," "above average effect," "superiority bias," "leniency error," "sense of relative superiority," "primus inter pares effect," "Dunning–Kruger effect" and "Lake Wobegon effect.") |
Survivorship bias |
Concentrating on the people or things that "survived" some process and inadvertently overlooking those that didn't because of their lack of visibility. Can make the results of intelligence investigations invalid because the weak die and are thus not included in the findings - skewing results toward the successes still alive. And randomness may well be the explanatory culprit with such intelligence. Given a large enough starting sample and at least a small likelihood of success, many survivors might be expected by chance alone. |
System justification |
The tendency to defend and bolster the status quo. Existing social, economic, and political arrangements tend to be preferred, and alternatives disparaged sometimes even at the expense of individual and collective self-interest. (See also status quo bias.) |
Telescoping effect |
The tendency to displace recent events backward in time and remote events forward in time, so that recent events appear more remote, and remote events, more recent. |
Texas sharpshooter fallacy |
The Texas sharpshooter fallacy is seeing a result and mistakenly attributing it to an action. The story around the Texas sharpshooter fallacy is that a sharpshooter proposed that he could hit three unseen targets on the side of his far away barn. Indeed, he aimed his rifle and squeezed off three shots, and then left to inspect the result. When the witnesses arrived after him, they saw three circles painted on the barn with a bullet hole at the center of each. They concluded that each of the sharpshooter's three shots had hit a target. What they did not know is that when the sharpshooter arrived at his barn ahead of them, he painted a circle around each bullet hole. The result they saw from the shots was not attributable to the sharpshooter's aim. They saw what they wanted to see and what the sharpshooter wanted them to see. |
There's got to be a winner trap |
This trap is the misapplication of the idea of a winner and loser to situations where it is not applicable. |
Time-saving bias |
Underestimations of the time that could be saved (or lost) when increasing (or decreasing) from a relatively low speed and overestimations of the time that could be saved (or lost) when increasing (or decreasing) from a relatively high speed. |
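The bias is easy to see once the times are actually computed, since travel time falls as one over speed. A small worked example (the distance and the speed pairs are arbitrary assumptions):

```python
# Minimal sketch: time saved by a 10 km/h speed increase at low vs. high speed.
DISTANCE_KM = 100

def minutes(speed_kmh):
    """Travel time in minutes for the fixed distance at a given speed."""
    return 60 * DISTANCE_KM / speed_kmh

for old, new in [(40, 50), (90, 100), (120, 130)]:
    saved = minutes(old) - minutes(new)
    print(f"{old} -> {new} km/h over {DISTANCE_KM} km saves {saved:.1f} minutes")
```

The same 10 km/h increase saves about 30 minutes starting from 40 km/h but only a few minutes starting from 120 km/h, which is the relationship people systematically misjudge.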
Tournament effect |
Inequality; someone who is marginally better can easily win the entire pot. |
Traditional wisdom |
If you say or imply that a practice must be OK today simply because it has been the apparently wise practice in the past, then your reasoning contains the fallacy of traditional wisdom. Procedures that are being practiced and that have a tradition of being practiced might or might not be able to be given a good justification, but merely saying that they have been practiced in the past is not always good enough, in which case the fallacy is present. Also called argumentum consensus gentium when the traditional wisdom is that of nations. |
Trait ascription bias |
The tendency for people to view themselves as relatively variable in terms of personality, behavior, and mood while viewing others as much more predictable. |
Type 1 error |
Thinking an effect of an experiment or process is real when it is just the result of chance. Why scientists tend to use the 95% confidence interval. Asks us to play dumb and start from scratch when we really don't know what's going on. |
Type 2 error |
Thinking an effect of an experiment or process is just the result of chance when it actually is real. argument for not using the 95% confidence interval when we are not dumb, as in the case of climate change: leads us to understate threats. |
Ultimate attribution error |
Similar to the fundamental attribution error, in this error a person is likely to make an internal attribution to an entire group instead of the individuals within the group. |
Unawareness of cognitive process |
There is no such thing as awareness of cognitive process. We claim it, but we don't have it, any more than we have awareness of our perceptive processes; we have absolutely no idea. The procedures that we use to solve problems are often completely opaque to us. |
Unawareness of thought |
See useless introspection. |
Undecidability |
Given that the less frequent the event, the more severe the consequences (hundred year flood versus 10 year flood; best seller of the decade versus bestseller of the year), our estimation of the contribution of the rare event is going to be massively faulty (contribution is probability times effect; multiply that by estimation error); and nothing can remedy it. |
Underestimating the importance of luck |
We too often underestimate the importance of luck in the outcomes of our decisions, forgetting Nobel Prize-winner Daniel Kahneman's observation that success requires some talent and some luck, while great success requires some talent and a lot of luck. |
Unit bias |
The tendency to want to finish a given unit of a task or an item. Strong effects on the consumption of food in particular. |
Unknowledge |
Our systematic underestimation of what the future has in store. |
Unrepresentative generalization |
A fallacy produced by some error in the process of generalizing. See hasty generalization or unrepresentative sample for examples. Also called faulty generalization and biased statistics. |
Useless introspection |
We have problems figuring out what's going on in our heads. Research shows the later you view something, the higher your evaluation of it. In experiments comparing a series of items for possible purchase, all the items are identical except that they look slightly different. When people were asked if the order had any influence on their choice, they could not believe that it did - even though the experiments show order mattered. Richard Nisbett: "Having people memorize a series of word pairs makes it more likely they will then select something related to one of the word pairs. Memorize ocean-moon, then ask them to name a detergent. Doubles or triples likelihood they name 'Tide.' But they have no idea this has occurred. (Controls validate this.)" Richard Nisbett: "People who are asked to explain why they like something do a worse job of predicting how much they will like it down the road than those who do not verbalize why they like something and just evaluate it. If you are asked to explain why you like something, you are going to focus on just those things that are verbalizable, so you are going to miss all that are not verbalizable. All the other reasons you might like or dislike it go by the wayside. After explaining and then asking later if they like something or not, 'correlation goes to heck' as to whether they will like it or not." |
Vivid descriptions |
Information that is vivid, concrete, and personal has a greater impact on our thinking than pallid, abstract information that may actually have substantially greater value as evidence. |
Vivid representation |
Daniel Kahneman: "…low sensitivity to probability...for emotional outcomes is normal." "A rich and vivid representation of the outcome, whether or not it is emotional, reduces the role of probability in the evaluation of an uncertain prospect." "…prediction: adding irrelevant but vivid details to a monetary outcome also disrupts calculations." |
Von Restorff effect |
That an item that sticks out is more likely to be remembered than other items. |
Well travelled road effect |
Underestimation of the duration taken to traverse oft-traveled routes and overestimation of the duration taken to traverse less familiar routes. |
Worse-than-average effect |
A tendency to believe ourselves to be worse than others at tasks which are difficult. |
Zeigarnik effect |
That uncompleted or interrupted tasks are remembered better than completed ones. |
Zero-risk bias |
Preference for reducing a small risk to zero over a greater reduction in a larger risk. (Also called "certainty bias", "certainty effect" and "100% effect.") |
Zero-sum heuristic |
Intuitively judging a situation to be zero-sum (i.e., that gains and losses are correlated). Derives from the zero-sum game in game theory, where wins and losses sum to zero. The frequency with which this bias occurs may be related to the social dominance orientation personality factor. |
Zipf's law |
The more you use a word, the less effortful you will find it to use that word again, so you borrow words from your private dictionary in proportion to their past use. The frequency of occurrence of some event as a function of its rank is a power-law function. |
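A minimal sketch of the rank-frequency relationship (an idealized Zipf distribution with exponent 1 over an assumed vocabulary size, not fitted to any real text): frequency is proportional to one over rank, so rank times frequency is roughly constant and a handful of top-ranked words carry a large share of all use.

```python
# Minimal sketch: an idealized Zipf (power-law) rank-frequency distribution.
VOCABULARY_SIZE = 50_000
normalizer = sum(1 / r for r in range(1, VOCABULARY_SIZE + 1))   # harmonic number H_N

def zipf_share(rank):
    """Share of total word use taken by the word at this rank (exponent = 1)."""
    return (1 / rank) / normalizer

top_10_share = sum(zipf_share(r) for r in range(1, 11))
print(f"Rank 1 word:   {zipf_share(1):.2%} of all uses")
print(f"Rank 100 word: {zipf_share(100):.3%} of all uses")
print(f"Top 10 words together: {top_10_share:.1%} of all uses")
```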