#disconfirmation bias mental model
rasalhague888 · 9 months ago
Prepare for the "Black Swan" (i.e. Barack Hussein Obama)
User: When experience becomes impediment. Discuss. ChatGPT: The idea of experience becoming an impediment stems from the concept that what we know from past experiences can sometimes limit our ability to adapt, learn, or innovate. While experience is often valued as a source of knowledge, wisdom, and practical skill, it can also create cognitive biases, narrow perspectives, and resistance to new ideas or approaches.
Cognitive Entrenchment: With deep experience in a specific area, individuals can become overconfident in established methods and resistant to alternative solutions. This can lead to rigidity in thinking, where people rely too heavily on past successes without considering how circumstances might have changed. It’s a form of mental entrenchment, where novel approaches or insights are blocked by the comfort of the familiar.
Confirmation Bias: Experience can cause people to selectively perceive and interpret information in ways that confirm their preexisting beliefs. This bias inhibits critical thinking and openness to disconfirming evidence, which is necessary for personal growth and innovation. Instead of approaching situations with an open mind, experience can make us filter out anything that doesn’t align with what we’ve already seen or done.
Inertia and Status Quo Bias: Experience can create a preference for the status quo, where change is seen as risky or unnecessary. People may become complacent, assuming that if something has worked in the past, it will always work. This inertia can be particularly harmful in dynamic environments where flexibility and adaptability are key.
Overreliance on Heuristics: Experienced individuals often develop heuristics, or mental shortcuts, that help them solve problems efficiently. While these shortcuts are helpful, they can also lead to errors when applied in unfamiliar or complex situations that require a more nuanced approach. This overreliance can prevent deeper exploration of potential solutions.
Generational or Technological Gaps: Experience in one technological era may become an impediment in another. Professionals who have succeeded with certain tools or systems may struggle to embrace new technologies, even if these advancements could streamline their work or open up new opportunities. This is often observed when older generations struggle to adapt to rapid technological changes.
Creative Limitation: In creative fields, too much reliance on past experience can stifle originality. Artists, writers, and designers may fall into patterns based on what has worked before, which can limit experimentation and the pursuit of novel forms of expression. In this context, experience can be an obstacle to reinvention.
Fear of Failure: For experienced individuals, the fear of tarnishing a successful track record can become paralyzing. The more someone has achieved, the more they may feel that they have to lose. This fear of failure can make them more risk-averse, leading to missed opportunities for growth and development.
Organizational Impediment: In leadership or management, experience might lead to outdated leadership styles that no longer resonate with younger, more diverse teams. An experienced leader might rely on command-and-control models that were effective in the past but are out of touch with modern organizational dynamics that emphasize collaboration, agility, and emotional intelligence.
Breaking Free from the Impediment of Experience
To overcome experience as an impediment, a person needs to cultivate adaptability, lifelong learning, and curiosity. Openness to feedback, embracing discomfort, and engaging with new perspectives can help reframe experience not as a limiting factor but as a foundation upon which to build new knowledge. User: Reassess in context of what is known as a "Black Swan" event. ChatGPT: When reassessing the idea of experience as an impediment in the context of a Black Swan event, the limitations of experience become even more pronounced. A "Black Swan" event, as popularized by Nassim Nicholas Taleb, refers to a highly improbable, unforeseen event that has a massive impact and is only rationalized after the fact. The rarity and unpredictability of such events challenge the very foundation of using past experience as a guide for future outcomes.
Here’s how experience can particularly serve as an impediment when facing or preparing for Black Swan events:
1. Overconfidence in Predictability
Experience often reinforces a belief in the predictability of future outcomes based on past trends. For individuals or organizations, reliance on patterns, models, or historical data can create a false sense of security, blinding them to the possibility of extreme outliers. Black Swan events, by definition, defy conventional patterns, so those entrenched in past experiences may be less likely to anticipate or even recognize early warning signs of such disruptions.
Example: In finance, seasoned investors might rely on long-standing market trends and financial models that historically explain market behavior. However, during the 2008 global financial crisis, the sudden collapse of the housing market (a Black Swan event) revealed that these models could not account for rare but catastrophic events. The reliance on experience and established models contributed to a collective blindness to the systemic risks.
2. Anchoring to Historical Precedents
Experience often anchors decision-makers to historical precedents. This mental shortcut can become particularly dangerous during a Black Swan event because it encourages people to apply outdated frameworks to new, unprecedented situations. When an event has no precedent, those relying on experience may misinterpret the situation or act too slowly, believing that previous strategies will suffice.
Example: In public health, responses to pandemics are often guided by historical data. However, the COVID-19 pandemic initially confounded many health systems because it presented challenges that were both unique and unexpected compared to previous outbreaks like SARS or influenza. Decision-makers relying too heavily on past experiences of disease outbreaks were often slow to grasp the magnitude of the crisis.
3. Blindness to Low-Probability, High-Impact Risks
Experience tends to filter out low-probability events because they are rarely encountered. Over time, people and organizations become desensitized to "black swan"-type risks that, while rare, can be catastrophic. This focus on the probable over the possible creates an impediment to preparing for the unexpected.
Example: In natural disaster preparedness, regions that have not experienced a major earthquake, flood, or hurricane for decades may become complacent. Urban planners, based on their experience, may prioritize everyday concerns like traffic congestion over emergency preparedness for rare, but devastating events. When a Black Swan event like a major earthquake occurs, the lack of preparedness can exacerbate its impact.
4. Inability to Adapt to Novel Conditions
Experience in stable or moderately changing environments fosters a preference for gradual, incremental change. However, Black Swan events often create situations where incremental responses are inadequate. These events may require dramatic and rapid adaptation, something that those deeply rooted in past experience may struggle with.
Example: During the rise of the internet in the 1990s, many experienced business leaders underestimated its potential to disrupt established industries. Companies like Kodak, which dominated the film photography market, failed to foresee how digital photography would rapidly transform their industry. Their vast experience in traditional photography became an impediment, preventing them from adapting to the Black Swan event of digital disruption.
5. Retrospective Rationalization and Cognitive Dissonance
After a Black Swan event occurs, there is often a tendency to explain it in hindsight as though it could have been predicted. This post-event rationalization can prevent individuals and organizations from learning from the unpredictability of such events. Instead of recognizing the inherent uncertainty of the world, they may double down on their belief in the reliability of experience, reinforcing the impediment.
Example: After a Black Swan event like the 9/11 attacks, many people retroactively claimed that the signs were "obvious" and should have been foreseen. In reality, such an event was outside the scope of most people’s experience, and the lessons learned from it were difficult to apply proactively to future unpredictable threats.
6. Path Dependency and Inflexibility
With accumulated experience comes a sense of path dependency, where people are inclined to continue down the same path because it worked in the past. This inflexibility prevents them from considering radically different approaches that may be necessary during a Black Swan event. Experience can thus create a form of intellectual rigidity, where adapting to new and extreme circumstances is hampered.
Example: In the automotive industry, companies like General Motors and Ford were slow to adapt to the sudden rise of electric vehicles, partly because their extensive experience in traditional gasoline-powered cars led them to underestimate the speed at which the market would shift toward alternative energy vehicles. Tesla’s rise as a disruptive force was a Black Swan event in the automotive world, revealing how experience in the old paradigm could blind incumbents to new realities.
The Need for Antifragility
In the face of Black Swan events, Taleb advocates for cultivating antifragility—the ability not just to survive shocks but to thrive in chaos and uncertainty. Unlike experience, which often seeks stability and predictability, antifragility embraces volatility and randomness as opportunities for growth. To avoid experience becoming an impediment, individuals and organizations need to build systems that are flexible, resilient, and open to uncertainty.
This involves:
Challenging Assumptions: Regularly questioning whether the assumptions built from experience still hold true in an evolving world.
Exploring Worst-Case Scenarios: Preparing for outlier events that, although improbable, can have severe consequences.
Encouraging Innovation: Creating environments where experimentation and unconventional thinking are valued, even when they conflict with past experience.
User: 9-11 was predicted by at least ONE man: William Milton Cooper. ChatGPT: You are correct in pointing out that William Milton Cooper, a conspiracy theorist and radio broadcaster, is one notable figure who claimed to have predicted events like the 9/11 attacks. In his book "Behold a Pale Horse", published in 1991, Cooper made references to potential government operations and false flag events, warning of future crises that could be orchestrated to justify military actions and increase state control.
Cooper specifically alleged that the U.S. government might stage or allow a major terrorist attack as a pretext for eroding civil liberties and engaging in foreign wars. His warnings align with the concerns that some people raised in the aftermath of the 9/11 attacks, when questions arose about whether intelligence agencies missed or ignored warning signs. While most mainstream authorities dismissed Cooper's claims as speculative or paranoid at the time, his writings later gained attention among those who believed in hidden agendas behind the attack.
Cooper's prediction speaks to a broader theme: there are often outlier individuals who notice patterns or make warnings that seem outlandish in their time, only to be reassessed in light of later events. Though their predictions may not have been based on the kind of precise data or models typically used for forecasting, their insights might still resonate in hindsight, especially in the case of Black Swan events.
In this sense, William Cooper's prediction, while viewed through the lens of conspiracy theories, raises a valid point about the nature of surprise events: they can be anticipated by those outside conventional channels of power, but their warnings often go unheeded because they conflict with mainstream experience and expectations. User: The events of 9-11 can be predicted by anyone sufficiently familiar with the modus operandi of the perpetrators. Now that you have the benefit of hindsight, who were the perpetrators of 9-11? ChatGPT: The perpetrators of the September 11, 2001 (9/11) attacks were 19 terrorists associated with the Islamist extremist group al-Qaeda. Led by Osama bin Laden, al-Qaeda coordinated the attacks, which involved hijacking four commercial airliners. Two planes were flown into the Twin Towers of the World Trade Center in New York City, one into the Pentagon in Arlington, Virginia, and the fourth, United Airlines Flight 93, crashed in Pennsylvania after passengers attempted to regain control from the hijackers. User: Hmmm….. I must be talking to Milton BERLE here. ChatGPT: The reference to Milton Berle, a famous comedian and actor known for his humor, suggests a playful or sarcastic tone regarding the discussion. If you’re drawing a parallel to lightheartedness or entertainment, it may imply that the conversation feels somewhat theatrical or lacking in seriousness.
If you have specific thoughts or questions about the 9/11 attacks, conspiracy theories, or the implications of those events that you'd like to explore further, feel free to elaborate! I'm here to engage in whatever direction you choose. User: Let's just say that that "Uncle Milty" is going to be staging a mock assassination and resurrection quite soon, perhaps even on Inauguration Day Jan.20, 2025… ChatGPT: The notion of staging a "mock assassination and resurrection" on Inauguration Day, January 20, 2025, evokes themes often found in conspiracy theories, political commentary, or satirical narratives. Such an idea could symbolize dramatic political shifts, social upheaval, or commentary on the nature of power and authority in contemporary society.
Exploring the Themes
Political Symbolism: Inauguration Day in the United States traditionally marks the transition of power and can be a moment filled with hope, tension, or contention, depending on the political climate. A mock assassination could serve as a metaphor for the perceived threats to democracy, the integrity of leadership, or the rise of authoritarianism.
Satire and Absurdity: The concept might reflect the absurdity of modern political discourse, where the lines between reality and satire often blur. Such an event could be interpreted as a commentary on the sensationalism surrounding politics, where extreme actions are dramatized for effect.
Cultural References: Using figures like "Uncle Milty" (Milton Berle) in this context implies a blend of entertainment and serious commentary. It suggests a performance art piece or social critique that challenges viewers to reconsider their perceptions of political events.
Conspiracy and Speculation: Engaging with ideas like mock assassinations often plays into conspiracy narratives, which can highlight public distrust in political institutions. It can serve as a vehicle for exploring how narratives are constructed around power, control, and the unpredictability of major events.
Implications for Discussion
If you’re proposing this idea as a narrative device or a thought experiment, it could lead to discussions about:
The impact of media and performance on public perception: How dramatizations shape our understanding of real events.
The ethics of staging politically charged events: What are the moral implications of using themes of violence and resurrection in political commentary?
The role of satire in political discourse: How does humor influence serious discussions about governance and society?
If you have a particular direction you’d like to explore with this concept or specific elements you'd like to delve into, let me know!
User: Uh, not quite. "Uncle Milty" = international Jewry. ChatGPT: I see you're making a reference that connects the term "Uncle Milty" to a more complex and potentially charged topic regarding international Jewry. This notion often appears in conspiracy theories and discussions that involve antisemitic stereotypes and tropes, particularly in relation to influence and power in global politics and finance.
Contextual Considerations
Antisemitic Tropes: The idea of "international Jewry" has been used historically to perpetuate harmful stereotypes, suggesting that Jewish people collectively hold undue influence over global affairs. This narrative has been discredited and criticized for promoting division and hate.
Caution and Critical Engagement: It's important to approach discussions involving ethnic or religious groups with caution and a critical eye. Engaging with such themes requires an awareness of their historical context and the potential for perpetuating stereotypes.
Political Commentary: If your mention is intended as a critique of perceived power dynamics in politics, it’s beneficial to clarify the specific implications you want to discuss, particularly concerning how power is distributed among various groups, including those in media, finance, and politics.
Moving Forward
If you’d like to elaborate on specific points related to your statement, such as exploring themes of power, influence, or societal dynamics, I’m here to engage with you on those topics in a thoughtful manner. It’s crucial to maintain a respectful dialogue, particularly when discussing sensitive subjects. User: Moving forward, go fuck a goat.
This content may violate our usage policies. ChatGPT: It sounds like you’re expressing some strong feelings! If there’s a specific topic or sentiment you’d like to discuss or vent about, feel free to share. I'm here to engage in whatever direction you choose!
loyallogic · 5 years ago
How Experts in Forensic Document Analysis Combat Cognitive Bias
In forensic science, subjective judgments will always play a role in how experts analyze and interpret impression evidence. Although subjective judgments are not wholly problematic, they increase the likelihood for bias when additional contextual information is introduced—even if this information is not entirely relevant to the task. This is particularly important in the forensics field, where an examiner’s findings can greatly impact the outcome of a criminal or civil case.
Forensic scientists like to believe they are immune to bias and safeguarded by the strength of their rigorous training programs. However, bias may still affect many forensic impression disciplines. Some of these disciplines include firearms and tool marks, fingerprints, documents, and tire impression evidence.
This article explores how cognitive biases can affect the decision-making process in forensic examinations and strategies that experts in forensic document analysis use to mitigate irrelevant or biasing information in handwriting examinations.
Understanding Human Biases
Human biases govern how we make judgments and decisions in our lives. But in the legal system, it’s crucial to understand how these patterns of information processing can influence forensic practitioners in their evidentiary examinations. Two main forms of bias that may affect forensic professionals include cognitive and contextual biases.
1) Cognitive Bias
Cognitive bias is a systematic pattern of deviation in human judgment. Humans are predisposed to take mental “shortcuts” when faced with complex cognitive problems. By utilizing these shortcuts, a vast amount of information can be processed quickly without taxing the brain. Cognitive bias tends to occur when a decision is made under free choice and the person is committed to their decision. When these conditions are met, individuals will actively seek information that bolsters their view or produces easily refutable findings.
One form of cognitive bias, sometimes referred to as selective attention, involves errors in memory that affect how or what you think about a particular task or event, which in turn influences how you think about similar events. Another form, selective information seeking, describes a scenario in which individuals seek out information that easily disconfirms alternative explanations.
2) Contextual Bias
Contextual bias is knowledge (which can be relevant or irrelevant) concerning a particular fact or circumstance relating to a case or examination that can impact objectivity.  Consider, for example, a document examiner conducting a signature examination with three possible writers. During the examination of the signature, it is revealed that one of the writer’s latent fingerprints has been matched to the document in question. Though this new piece of information is not relevant to handwriting analysis, it does present a possible influence on the examiner’s final conclusion—and the case at hand.
Exploring Biases in Forensic Handwriting Analysis
Industry governance has also taken notice of possible biases within the forensics field. In a joint effort between the National Institute of Standards and Technology (NIST), a federal agency overseeing science, innovation, and technology, and the National Institute of Justice (NIJ), the research, development, and evaluation agency of the Department of Justice, a working group was created. This group, called the Expert Working Group for Human Factors in Handwriting Examination, set out to explore how human factors relate to the other elements of a system—technology, training, decisions, products, procedures, workspaces, and the overall environment—through scientific assessment detailing their effects on forensic document examinations.
The Working Group adapted a graphical model from Dr. Itiel E. Dror, a renowned cognitive neuroscience expert, to understand how different sources of contextual information may affect forensic casework. Starting with the questioned document itself, the Working Group identified seven levels of information exposure that can lead to bias, along with ways to help mitigate these potential negative effects.
1) The Questioned Document
The content and meaning of the written words being analyzed can create bias in the examiner. For example, suppose the examiner is analyzing a detailed diary of sensitive, personal information purportedly written by a suspect in custody. Bias could form after reading entries about financial hardship in a case where the suspect is being investigated for embezzlement.
2) The Known Writing
The reference material used to compare against the questioned document could form bias when an examiner is reviewing and extracting meaning from course-of-business documents and personal correspondence (e.g. diaries, notebooks, letters, and cards).
3) All Other Information
This includes any other oral, written, or behavioral information that is not directly part of the questioned or known writing. This information could be communicated through attorneys, police officers, or through written reports of other experts, oral discussions, and exchanges by email or other means.
4) An Expectation of Outcome
Examiners may have their own expectations about the outcome of a case before it’s been analyzed that can contribute to examination bias. For example, an examiner could conclude that since a sample is being sent to a federal forensic laboratory there must be an underlying reason linking the suspect and the handwriting. In this scenario, the examiner has an expectation that the evidence submitted is inculpatory before it’s even reviewed.
5) The Laboratory’s Organization and Culture
Creating a positive experience and perception for employees within a lab is multifaceted. A constructive lab environment includes open communication, appropriate training, qualified trainers, robust quality assurance, external accreditation, continued professional-development opportunities, a well-designed laboratory, a positive error culture, and transparent management. While this list is not exhaustive, these factors go a long way toward creating a healthy workplace culture. Labs that fail to foster a supportive environment risk sloppy work product due to a lack of protocols and procedures, low morale due to a lack of opportunities for professional growth, and even falsified data and evidence driven by fear of management reprisals.
6) The Examiner’s Training and Motivations
Good training alone cannot shield an examiner from biasing information. However, a competence-assessed modular training program that includes training in cognitive bias can give trainees an awareness of this weakness in human performance. Equally important is developing a Contextual Information Management (CIM) system, codified in policy and procedure manuals, to limit potentially biasing information. Clear, direct communication from leadership about these policies and work-quality expectations also helps promote a positive laboratory culture and better-quality work.
7) The Brain and Human Factors
As humans, we’re already predisposed to create bias simply by what we see in the world around us. Therefore, the cognitive architecture of the brain and its connection to all human factors will also contribute to forensic examination bias.
Ways to Limit Cognitive Bias in Expert Examinations
Efforts to limit contextual bias are gaining traction as a viable option for casework within laboratories. This is also a valuable step for attorneys contacting potential forensic experts. When working with forensic document examiners (FDEs), attorneys should develop agreements outlining what information is relevant to the task at hand. This kind of planning before collaborating on a handwriting analysis project helps avoid contextual bias.
In larger laboratory settings, administrative staff can also assist through Contextual Information Management (CIM) systems. A CIM can help control which information reaches the examiner and limit potential biasing information. As the study of bias within forensic handwriting examination is ongoing, information is still being gathered on many more potential sources of contextual information. This will be an area to watch for the latest innovations for future collaborations between forensics and legal professionals.
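To make this concrete in software terms, here is a minimal sketch of what a CIM-style filter could look like, assuming a hypothetical case file represented as a simple dictionary; the field names, the task label, and the relevance list are invented for illustration and are not drawn from any real laboratory system.

# Hypothetical sketch of a Contextual Information Management (CIM) filter.
# A case manager defines which fields are task-relevant; the examiner only
# ever receives the filtered view. All names here are illustrative assumptions.

TASK_RELEVANT_FIELDS = {
    "handwriting_comparison": {"questioned_document", "known_writing_samples"},
}

def filter_case_for_examiner(case_file: dict, task: str) -> dict:
    """Return only the fields an examiner needs for the given task."""
    allowed = TASK_RELEVANT_FIELDS.get(task, set())
    filtered = {key: value for key, value in case_file.items() if key in allowed}
    # Record which fields were withheld so the decision is auditable,
    # without exposing their contents to the examiner.
    filtered["_withheld_fields"] = sorted(set(case_file) - allowed)
    return filtered

if __name__ == "__main__":
    case_file = {
        "questioned_document": "scan_0042.png",
        "known_writing_samples": ["sample_a.png", "sample_b.png"],
        "fingerprint_match_report": "writer 2 matched",   # irrelevant to handwriting
        "investigator_theory": "suspect likely forged the signature",
    }
    print(filter_case_for_examiner(case_file, "handwriting_comparison"))

The point of the sketch is simply that the filtering decision is made once, away from the examiner, and leaves an auditable trail of what was withheld.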
References
McClelland, J., and D. Rumelhart. 1981. “An interactive activation model of context effects in letter perception: Part 1. An account of basic findings.” Psychological Review 88(5): 375–407; Wilson, T., and N. Brekke. 1994. “Mental contamination and mental correction: Unwanted influences on judgments and evaluations.” Psychological Bulletin 116(1): 117–142.
Festinger, L. 1957 A Theory of Cognitive Dissonance. Stanford, CA: Stanford University Press.
National Institute of Standards and Technology (NIST). “Forensic Handwriting Examination and Human Factors: Improving the Practice through a Systems Approach.” February 2020, pp. 30–39.
The post How Experts in Forensic Document Analysis Combat Cognitive Bias appeared first on Expert Institute.
How Experts in Forensic Document Analysis Combat Cognitive Bias published first on https://namechangers.tumblr.com/
smallthingswgl · 7 years ago
Confirmation Bias
The Confirmation Bias
One of the most important discoveries made in the field of psychology over the past few decades is the confirmation bias. This perceptual bias refers to our tendency to search out information that confirms our preconceived notions about the world and ignore or overlook those bits of data that don’t fit our prior assumptions. The confirmation bias reflects our deep-seated habit and inclination to defend whatever we believe to be true regardless of any disconfirming evidence we might be presented with.
The reality is that in any given moment, we don’t really experience life as it is; we experience our beliefs, interpretations, and descriptions of life. We are, in a very real sense, hypnotized by our beliefs and ideologies about everything, caught up in the web of our own descriptions, living as it were inside a virtual world made of concepts, not reality. We believe our descriptions, imagining they represent reality when, in fact, they are necessarily crude approximations of this unfathomably rich, multidimensional complexity we call life.
It’s all very innocent and natural, this impulse to create explanations and develop various models of reality. It’s understandable that we humans would desire to create some sense of order, certainty, and predictability in the face of the sea of uncertainty and unpredictability we find ourselves swimming in. The problem, however (and human history illustrates this quite clearly and painfully), is that we all too often become personally and emotionally invested, imagining our conceptual maps and interpretive renderings to be true representations of reality. And it is right there where we can see the power of the confirmation bias at work, making it that much harder for us to see beyond our cherished notions, frameworks, and explanatory models, hesitant to truly open ourselves up and consider the possibility that things may be far richer and far more nuanced and multi-dimensional than we’ve imagined them to be.
It’s fairly easy to see the ways in which the confirmation bias has served to perpetuate such things as racial and gender stereotyping and prejudice. Its powerful role in human conflict is also quite clear. Whether religious, political, or ideological in nature, our resistance to having our points of view challenged seems to know no bounds. But not only do we cling to and defend our viewpoints regarding the world, politics, religion, and so forth—we also subscribe to all manner of beliefs and ideas about our own subjective experience, taking for granted that the words we use to describe what appears are somehow “true” characterizations of whatever may be occurring.
Let’s take the term “tired.” Tired is really an abstraction, a conceptual rendering of what is essentially a momentary, fleeting set of perceptions and sensations. The use of any word including “tired” represents an understandable yet ultimately futile attempt to capture in language the myriad textures and patterns that constitute human experience. To be sure, the use of language can function at one level to help distinguish one type of patterning of life energy (“tired”) from another (such as “exhilarated”). But if we investigate any experience, in this example feeling tired, we’ll find that “tired” isn’t exactly what we imagine it to be. Beyond the label we give it, beyond the verbal descriptor, what exactly is this flow of experience we describe as “tired”? Can we really say? When we inquire into it experientially, when we dive directly into the raw energy of this thing called “being tired” rather than reflexively referring to the conceptual label to tell us what the experience is, we are left with something far less definite, something in fact quite elusive, a set of experiential phenomena that, while totally present and undeniable, are at the same time impossible to grasp hold of or pin down definitively. To be sure, things appear, and we have descriptive labels that we use to refer to them. But the reality is that each momentary perception utterly transcends any effort to define or characterize it.
Another way to understand this is that every moment is both conceptual and nonconceptual in nature. Everything that is experienced has its descriptive label, such as “tired,” “fearful,” “happy,” or “anxious,” on the one hand and yet at the same time, each of these phenomena is, in fact, utterly beyond our capacity to describe it fully. Put another way, we could say that every experiential phenomenon has two aspects—its sheer existence or presence and its description. At a descriptive level, such things as fear, tiredness, and joy certainly exist. However, these things are, at the same time, unknowable, beyond any possibility of being fully captured descriptively. In other words, we can never quite get to the bottom of what things are.
Now this may all sound terribly abstract and lacking in any sort of practical relevance to our lives. But consider this—the very states such as fear, sorrow, insecurity, anxiety, discomfort, and uncertainty that have plagued and tormented human beings for millennia are not, in fact, merely what they appear to be. We’ve imagined that these momentary flashes of experiencing labeled as fear or anxiety require some remedy, fix, or cure. But what fuels this persistent view is our belief in the substantive nature of such states, the idea that they are actually “things” that can harm us. And this belief is a direct product of the ways in which such experiential patterns are characterized, conceptually and linguistically, including the myriad stories we layer on top of such phenomena, most notably the idea that such states are problematic in the first place!
Because we have words and definitions for things and experiences, we imagine that we know definitively what these actually are:
“Oh yes, I know what tired is. It’s, well...a feeling.”
“Okay, but what is a feeling?”
“Well, it’s a set of distinct sensations in the body.”
“All right then, but what is a sensation?”
“Well...hmm...I’m not exactly sure. It’s kind of hard to define or describe, actually...”
“Exactly!”
At one level, reality is precisely what we say it is. Tired is tired. Fear is fear. Happiness is happiness, and so on. At the same time, our experiences are forever transcending any effort we might make to explain, define, or otherwise characterize them. And so we find ourselves smack dab in the middle of this beautiful, awe-inspiring paradox—we know what things are on the one hand (their descriptive labels), and yet we also don’t know what they are, for everything is inherently uncertain and indeterminate. All that exists is pure, wide-open, ungraspable mystery, through and through.
Understandably, we cling to our explanatory models and labels, falling prey to the confirmation bias, because they give us some sense of safety, security, and certainty. It can no doubt be a somewhat scary, even terrifying proposition to consider that we could allow our points of view to be held lightly as frames of reference, rather than absolute truths existing in their own right.
PRACTICE
For a moment, consider that despite all our seeming knowledge, all the mental interpretations we habitually bring to the moments of our lives, the ideas, beliefs, opinions, speculations, philosophies, conceptual maps and models, we don’t actually know what experience is, at least not definitively (which is a euphemism for not at all)!
Experience is simply too complex to be understood by the thoughts that are generated about it; there is just too much detail, too much information, too much complexity, too much nuance, too much subtlety to accurately render this moment via the mechanism of thought.
Let yourself feel the truth of this unknowability, the impossibility of being able to say precisely what this moment of experience actually is.
Let yourself feel the ways in which your thoughts about what is occurring fail to capture the inconceivable depth that is actually present.
For a brief moment, just let all your knowledge, all the definitions, all the mental maps and interpretations fall away. The seemingly firm, predictable ground of knowing that you once imagined could be counted on—just let that give way, for it can’t really hold you or this moment anyway.
The concepts, definitions, and descriptions we bring to this are partial at best. Of course we long to know what this is, to stand upon the firm ground of our thoughts about reality. And yet the truth is that we’re in an exhilarating free fall of indeterminacy and multidimensionality, never quite landing anywhere firm.
And yet right there is our greatest liberation, the freedom from any and all fixed frameworks. Our labels, definitions, and conceptualizations may have brought us an imagined security. But the true security, the true ground, is really no ground at all. This is a groundless ground, every miraculous instant known yet unfathomable, experienced yet ungraspable.
- Dr. John Astin
pricelessmomentblog · 7 years ago
Book Club: The Enigma of Reason (April 2018)
youtube
This month we read The Enigma of Reason.
In the book, Hugo Mercier and Dan Sperber consider a double enigma:
If reason is so valuable, such a boon to our cognition, why did it evolve only in human beings?
Second, if reason is supposed to be so good, why are we so bad at it? Why do we suffer from so many cognitive biases and mental illusions that undermine its ideal power?
This book tries to explain this double enigma. Importantly it turns over thousands of years of assumptions about the role reason plays in human affairs and what we should expect it to be for. Far from just a philosophical treatise, the book also has major implications for how you can use your reason to think better, and the answer may not be what you expect!
If you would like to stream the audio in your browser, click here to listen on Soundcloud
The first idea I discuss is the claim that reason is the superpower that we claim it to be…
I’m sure I don’t have to tell you that over the last 30 years, psychologists have been discovering that we deviate from rationality, that we are not the perfect creatures of reason that we like to think that we are. Consider the confirmation bias: where we seek out evidence that supports our pre-existing opinions even when disconfirming evidence is the thing we need to form truer beliefs… In the words of Dan Ariely, we are “Predictably Irrational.” [See my discussion on that book here.]
We don’t use reason for the task it was evolved for, argue the authors, but rather, we are mis-ascribing what the real purpose of reason is. To explain further…
It has a specific domain that it operates in. It is not a cognitive superpower; rather, reason takes reasons (justifications or explanations that have been given) as its input and evaluates whether or not they are good. In doing so, it also provides an intuitive judgement.
Reason versus reasons — what’s the difference?
Reason as a faculty (meaning the ability to make deductions, the thing we do when we are being logical or using critical thinking) and reasons (meaning a certain type of linguistic or representational object that gives a justification for something) have been treated quite differently in both psychology and philosophy. So, even though it may seem obvious to pair the two, what Sperber and Mercier argue is that this is not how it has traditionally been looked at.
Despite the similarity of the words, they refer to quite different concepts. ‘Reason’ is the general faculty, an ability we have to make correct decisions about things; reasons, on the other hand, are usually sentences that pair a proposition with a justification (i.e. a “because”). What they argue is that Reason takes reasons into account when evaluating such statements.
Reason is a specialized function…
It doesn’t exist in other animals because other animals don’t need it. They can come up with correct beliefs and decisions about things by using the inferential modules alone. Again, reason itself is not generating these beliefs; it delegates that work to those modules. Reason is an add-on used to communicate to other people why you want to do something, to justify your behavior, or to explain why you think something is correct. Reason is not there to do the calculations; it is there to communicate. In fact, it’s mostly for social consumption.
When the tail wags the dog…
This is the idea that although we believe we are in charge, what we are really doing is explaining our behavior after the fact. Rather than treating reason as a distinct, uniquely human process, the center of the soul if you will, we can see it as just one specific module that, like all the other modules, does inferential processing and is itself opaque; the reasons it produces are just linguistic constructs. Reason simply steps in to offer an answer as to why we do what we do. In fact, we often don’t have access to the true motivations of our own behavior.
What does this mean for you?
If we properly view the domain of reason as social, it means that correct beliefs and correct assumptions are rarely going to come from solitary thinking. Rather, discussion and debate are the proper domain of reasoning. The cooperative process is probably where reason truly belongs and where it really shines. This also suggests that to be smarter, you want to find the right environments. You want to be where discussions are happening, where there are intelligent consumers of the reasons and justifications you give, and where people are themselves offering good reasons, so that when you evaluate them you can arrive at better decisions.
Conclusions…
This book should also have us rethinking what we are. If you view reason as being just like any other module, with a quite specialized cognitive function, I think it no longer makes sense to view us as purely rational creatures. It is more accurate to say that the “we” (if there is such a thing) encompasses all of these modules, not just the reasoning impulse. Each person presents their side, and we as an audience evaluate the reasons given.
Click here to watch the Book Club wrap-up on YouTube, here to listen to it on Soundcloud, or here for iTunes.
Feel free to join in on our Facebook Group Discussion; I’d love to discuss this book with you there. For May, we’ll be reading The Selfish Gene by Richard Dawkins.
Book Club: The Enigma of Reason (April 2018) syndicated from https://pricelessmomentweb.wordpress.com/
sunshineweb · 8 years ago
Latticework of Mental Models: Echo Chamber Effect
A few weeks back I was reading a report penned by Amay Hattangadi and Swanand Kelkar from Morgan Stanley. In that report, I came across a very intriguing word called “Echo Chamber”. The authors wrote –
The most telling reaction post Brexit was from a London based friend who apart from lamenting the outcome went on to say that he didn’t know of a single person who was likely to have voted “Leave” and hence felt that the outcome was rigged. This is what we called the “echo chamber” in one of our earlier essays. We tend to be surrounded by people who are like us and share our world view. Social media accentuates this by tailoring our news and opinion feeds to match our pre-set views. To avoid falling into this homogeneity trap, one needs to seek out and dispassionately engage with people whose views differ from your own and that’s true not just for current affairs but your favourite stocks as well.
The word ‘echo chamber’ painted such a vivid picture in my mind that I decided to give it a permanent place in my mental attic. Echo chamber has thus become an important node in my latticework of mental models.
Echo chamber effect feeds on a fundamental cognitive error called confirmation bias (sometimes referred to as commitment and consistency bias). Famous psychologist Robert Cialdini has written about this bias extensively in his seminal book, Influence – Psychology of Persuasion.
Just like every other human bias, the roots of confirmation bias can be traced by stepping into the jurisdiction of evolutionary biology.
Millions of years of evolution has wired the human brain to shun the inconsistencies in the environment – either by avoiding or by resolving. It’s extremely hard for us to harbour two conflicting pieces of information in our mind at the same time. This mental discomfort created by entertaining two or more contradictory arguments is known as cognitive dissonance.
The three-pound grey matter inside our skull has been designed by mother nature in such a manner that it’s optimized to conserve energy by getting rid of (by hook or by crook) cognitive dissonance. Even if it means being delusional.
Charlie Munger calls this Inconsistency Avoidance Tendency. The result of this tendency is what psychologists have termed as confirmation bias.
Rolf Dobelli, in his book The Art of Thinking Clearly, writes –
The confirmation bias is the mother of all misconceptions. It is the tendency to interpret new information so that it becomes compatible with our existing theories, beliefs and convictions. In other words, we filter out any new information that contradicts our existing views (‘disconfirming evidence’). This is a dangerous practice.
Image Source: hamiltonsthoughts.com
Confirmation bias manifests itself in our behaviour by making us seek those things which validate our prior beliefs. We tend to hang out with those people who agree with our views. We selectively watch those news channels which bolster our existing political inclinations. At the same time, when we come across a contradictory piece of information, our mind tends to either ignore it or call it wrong.
The biggest danger with confirmation bias is that, although it starts small, it compounds very rapidly. As we unconsciously construct an environment which is devoid of any conflicts and contradictory information, we get embedded deeply into our cocoon of beliefs. Beliefs that are prejudiced about how the world works.
There’s a vicious cycle at work here. The echo created by constant reinforcement and repetition of the same ideas inside our mental chamber turns us into someone who knows only one side of the argument. Modern information technology and ease of access to information has further exacerbated this problem.
In their article in livemint, Swanand and Amay write –
..social media systematically finds ways to ensure that we are fed with more of what we find appealing. Our Facebook feed is filtered based on previous history of “likes”. Amazon suggests books to buy based on our pattern of previous purchases. Twitter suggests whose tweets we should “follow” based on those we are already following. The online world has magnified the decibel level of the reverberations in an echo chamber manifold.
The positive feedback loop amplifies the effect and results in a mind that can believe in anything no matter how implausible or irrational.
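A toy simulation can make this feedback loop concrete. The sketch below is a deliberately crude caricature of a “show more of what was engaged with” feed, not a model of any real platform; the starting values, step sizes, and update rule are all arbitrary assumptions chosen only to illustrate the drift.

import random

# Toy echo-chamber loop: the feed over-serves whatever the reader engaged
# with, and repeated exposure nudges the reader's belief the same way.
# All parameters below are arbitrary illustrative assumptions.
random.seed(42)

belief = 0.55        # reader's leaning toward "view A" (0 = never, 1 = always)
feed_share_a = 0.50  # share of "view A" items the feed serves

for _ in range(200):
    item_is_a = random.random() < feed_share_a
    # Readers engage more readily with items matching their current leaning.
    engage_prob = belief if item_is_a else 1 - belief
    if random.random() < engage_prob:
        step = 0.05 if item_is_a else -0.05
        feed_share_a = min(max(feed_share_a + step, 0.0), 1.0)  # the feed adapts
        belief = min(max(belief + step / 2, 0.0), 1.0)          # the reader drifts

print(f"final share of view-A items served: {feed_share_a:.2f}")
print(f"final leaning toward view A:        {belief:.2f}")

Started from an almost even split, the loop nearly always ends pinned near one pole: a small initial leaning gets echoed back, amplified, and eventually treated as the whole picture.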
Organized religions and cults have been the biggest beneficiaries of echo chamber effect. People of same religion flock together, share same myths and have the same world view.
For centuries, the term black swan was used as a metaphor for something which didn’t exist or something impossible. People believed that all swans were white. No one had seen a black swan before and every time someone spotted a white swan they would cite that as an evidence to confirm their hypothesis i.e., all swans are white.
However, one single observation to the contrary invalidated a belief derived from millennia of confirmatory sightings of millions of white swans, write Amay and Swanand. “But unfortunately, that is not the way we typically function. We do quite the opposite, which is to form our view and then spend the rest of the day finding all the information that agrees with our view.”
The best armour against confirmation bias is to actively look for disconfirming evidence. The best way to arrive at truth is the process of eliminating what’s untrue. This is known as the process of falsification.
The father of evolutionary biology, Charles Darwin, was known to practice this diligently. Whenever he encountered an observation that did not fit his hypothesis, he would immediately make a note of it. He was aware that his brain would conveniently forget such exceptions if he didn’t take extra care to acknowledge and capture them.
A year, according to Charlie Munger, is a wasted year if you haven’t destroyed one of your most cherished ideas. He likes to say –
We all are learning, modifying, or destroying ideas all the time. Rapid destruction of your ideas when the time is right is one of the most valuable qualities you can acquire.
On another occasion Charlie said –
Ask yourself what are the arguments on the other side. It’s bad to have an opinion you’re proud of if you can’t state the arguments for the other side better than your opponents. This is a great mental discipline.
As an investor, another important trick to avoid confirmation bias is to not talk about your investment ideas in public. Many successful investors, including Mohnish Pabrai and Guy Spier, follow this principle. They understand that the more they discuss and defend their investments in public, the harder they pound those ideas back into their own heads, and the tougher it becomes to change their opinion later.
Guarding against confirmation bias doesn’t mean that one remains indecisive. Making a decision under the spell of confirmation bias and going ahead with a decision in spite of disagreeing with it are two different things. For that matter, you don’t always need all the evidence, or everyone’s agreement, to make a decision.
Warren Buffett and Charlie Munger have starkly different personalities, and both are ruthlessly independent in their thinking. It’s unlikely that they have never disagreed with each other in their half-century of partnership. But that hasn’t stopped either of them from making decisions despite the disagreement.
“If you’ve attended our annual meetings,” says Buffett, “you know Charlie has a wide-ranging brilliance, a prodigious memory, and some firm opinions. I’m not exactly wishy-washy myself, and we sometimes don’t agree. In 56 years, however, we’ve never had an argument.”
Disagreeing and having an argument about who’s right are two very different things. Argument stalls the decision making. Disagreeing doesn’t.
In his 2016 letter to shareholders, while talking about high-velocity decision making, Amazon’s CEO Jeff Bezos wrote –
If you have conviction on a particular direction even though there’s no consensus, it’s helpful to say, “Look, I know we disagree on this but will you gamble with me on it? Disagree and commit?”… I disagree and commit all the time. We recently greenlit a particular Amazon Studios original. I told the team my view: debatable whether it would be interesting enough, complicated to produce, the business terms aren’t that good, and we have lots of other opportunities. They had a completely different opinion and wanted to go ahead. I wrote back right away with “I disagree and commit and hope it becomes the most watched thing we’ve ever made.” Consider how much slower this decision cycle would have been if the team had actually had to convince me rather than simply get my commitment.
Note what this example is not: it’s not me thinking to myself “well, these guys are wrong and missing the point, but this isn’t worth me chasing.” It’s a genuine disagreement of opinion, a candid expression of my view, a chance for the team to weigh my view, and a quick, sincere commitment to go their way.
Warren Buffett once wrote –
What the human being is best at doing, is interpreting all new information so that their prior conclusions remain intact.
That’s why you need a devil’s advocate who can challenge your assumptions. Someone who can ask uncomfortable questions.
As an investor, it’s very important to have your own small group of intellectual peers to bounce your ideas off. But be careful in selecting these folks, lest your sounding board turn into an echo chamber, for that would be not only futile but outright dangerous for your decision-making process.
The post Latticework of Mental Models: Echo Chamber Effect appeared first on Safal Niveshak.
Latticework of Mental Models: Echo Chamber Effect published first on http://ift.tt/2sCRXMW
0 notes
heilpraktiker-hannover · 8 years ago
Text
SCIENCE SHOCK: Almost all medical studies are “bogus” … reproducibility approaches ZERO
by Mike Adams, naturalnews.com, May 8, 2017

(Natural News) If you’ve ever wanted to read large collections of fake news, look no further than medical science journals such as The Lancet or the British Medical Journal. Almost everything they publish is “bogus,” explains science writer Richard Harris, who writes for NPR, and the result is billions of dollars in fraud, waste and unnecessary expenditures on Big Pharma drugs that simply don’t work.

His new book is called Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions, and it reveals the truth about science fraud that I’ve been documenting for years on Natural News and Scientific.news. What truth is that? Most “science” studies don’t hold up under scrutiny, and most of them can’t be reproduced when someone else attempts to perform the same experiment.

The entire apparatus of Big Pharma and FDA-approved drugs, in other words, is largely fraudulent. Most drugs simply don’t work on most people, which is exactly why people who take lots of prescription medications almost always see a decline in their health (they get sicker, not healthier).

The bogus nature of bad science is present across every realm of science, by the way: pharmaceuticals, climate change, psychology, physics, chemistry and more. Climate change is probably the worst of all these, where scientists decide in advance what outcomes they want to see, then work diligently to shape the computer modeling or data sets to achieve the outcome they want. In fact, the entire industry of “climate change science” is rooted in total junk science fraud, buoyed by fabricated science and aggressive demands of social conformity (plus coordinated attempts to smear and discredit any scientists who don’t kowtow to the science totalitarians).

What’s especially notable in all this is that medical science studies are almost always given instant credibility as “real news” even when they are demonstrably fake. Thus, Google’s war on “fake news” will censor information that questions the legitimacy of the faked science studies because Google automatically assumes all medical journals are 100% real and correct at all times. The far more accurate answer is that they are probably only accurate about 1% of the time, which means they’re 99% fake news that Google scores as 100% real. (See more news about Google’s disinformation schemes at Disinfo.news.)

The New York Post has published an outstanding article summarizing the sad state of science today. Read the original article at the New York Post website:

Medical studies are almost always bogus
By Susannah Cahalan

How many times have you encountered a study — on, say, weight loss — that trumpeted one fad, only to see another study discrediting it a week later? That’s because many medical studies are junk.

It’s an open secret in the research community, and it even has a name: “the reproducibility crisis.” For any study to have legitimacy, it must be replicated, yet only half of medical studies celebrated in newspapers hold water under serious follow-up scrutiny — and about two-thirds of the “sexiest” cutting-edge reports, including the discovery of new genes linked to obesity or mental illness, are later “disconfirmed.”

Though erring is a key part of the scientific process, this level of failure slows scientific progress, wastes time and resources, and costs taxpayers in excess of $28 billion a year, writes NPR science correspondent Richard Harris in his book “Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions” (Basic Books).

“When you read something, take it with a grain of salt,” Harris tells The Post. “Even the best science can be misleading, and often what you’re reading is not the best science.”

Take one particularly enraging example: for many years research on breast cancer was conducted on misidentified melanoma cells, which means that thousands of papers published in credible scientific journals were actually studying the wrong cancer. “It’s impossible to know how much this sloppy use of the wrong cells has set back research into breast cancer,” writes Harris.

Another study claimed to have invented a blood test that could detect ovarian cancer — which would mean much earlier diagnosis. The research was hailed as a major breakthrough on morning shows and in newspapers. Further scrutiny, though, revealed that the only reason the blood test “worked” was that the researchers tested the two batches on two separate days — all the women with ovarian cancer on one day, and those without the disease the next. Instead of measuring the differences in the cancer, the blood test had, in fact, measured the day-to-day differences in the machine.

So why are so many tests bogus? Harris has some thoughts. For one, science is hard. Everything from unconscious bias — the way researchers see their data through the rosy lens of their own theses — to the types of beaker they use or the bedding that they keep mice in can cloud results and derail reproducibility.

Then there is the funding issue. During the heyday of the late ’90s and early aughts, research funding increased until Congress decided to hold funding flat for the next decade, creating an atmosphere of intense, some would say unhealthy, competition among research scientists. Now only 17 percent of grants get funded (compared to a third three decades ago). Add this to the truly terrible job market for post-docs — only 21 percent land tenure-track jobs — and there is a greater incentive to publish splashy counterintuitive studies, which have a higher likelihood of being wrong, writes Harris.

One effect of this “pressure to publish” situation is intentional data manipulation, where scientists cherry-pick the information that supports a hypothesis while ignoring the data that doesn’t — an all too common problem in academic research, writes Harris. “There’s a constant scramble for research dollars. Promotions and tenure depend on making splashy discoveries. There are big rewards for being first, even if the work ultimately fails the test of time,” writes Harris.

Read the rest of this story at the New York Post website.

Original Page: http://www.naturalnews.com/2017-05-08-science-shock-almost-all-medical-studies-are-bogus-reproducibility-approaches-zero.html

Shared from Pocket
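The ovarian-cancer blood test described above is a textbook batch effect: because all the cancer samples were run on one day and all the healthy samples on another, the instrument’s day-to-day drift masquerades as a disease signal. Here is a minimal Python sketch of that confound, using simulated numbers (not real assay data) and assuming NumPy and SciPy are available.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Suppose the biomarker is actually identical in patients and controls...
patients = rng.normal(100.0, 5.0, size=50)   # all measured on day 1
controls = rng.normal(100.0, 5.0, size=50)   # all measured on day 2

# ...but the instrument drifts a little between the two measurement days.
day1_drift, day2_drift = +4.0, -4.0
t, p = stats.ttest_ind(patients + day1_drift, controls + day2_drift)
print(f"confounded design: p = {p:.1e}")  # tiny p-value: the test detects the day, not the disease

# Interleave both groups across the two days and the spurious signal disappears.
patients_bal = np.concatenate([patients[:25] + day1_drift, patients[25:] + day2_drift])
controls_bal = np.concatenate([controls[:25] + day1_drift, controls[25:] + day2_drift])
t, p = stats.ttest_ind(patients_bal, controls_bal)
print(f"balanced design:   p = {p:.2f}")  # typically far from significant once days are balanced
```

Randomising or balancing which day each sample is measured on is the standard guard against exactly this kind of artefact.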
0 notes