Text
I've seen conservatives (no, not all of them), particularly the ones who drift toward the red pill and trad sections of conservatism, make the claim that statistically, women are more likely than men to be the initiators of divorce. They use this statistic as evidence against women, depending on what the complaint of the day is. Usually it's about how marriage just screws men over, and how they can't trust women.
And then I listen to stories of those who've divorced -either men who were able to look at their divorce fairly or women who've been in the situation- or even AITA posts from guys who are on the verge of being divorced, and most of the time, the bottom line was, "When she stops nagging, she's stopped caring, and she already has one foot out of the relationship." And usually the guy doesn't notice, because he thinks the end of the nagging means everything's good. So he's blindsided by the divorce and thinks it came out of nowhere.
Now, of course this isn't the same for every divorce, but it really puts that previous statistic in perspective. Just because the woman walked away does not mean the fault of the failing marriage is on her.
I don't advocate for divorce, so don't take this as a defense of it. But the problem didn't start when the divorce was initiated - the problem started when the marriage was no longer invested in, and was just taken as a given.
And looking at it, it's kind of funny that games like Stardew Valley, with a mechanic that requires you to continuously interact with your spouse in order to maintain fourteen hearts, somehow have a better idea of how marriage works than people actually in marriages. Overly simplified, sure, it's a game, but your relationship doesn't "lock" in place after you get married. You can't just go to work, bring home the bacon, and expect your spouse to be grateful and for the two of you to be fine.
So yeah, this seems to be a prevalent problem with guys, and it's good for them to be aware of it and to actively work with their wives so things never get to the point where the wives are just tired of being in a one-person relationship. But it should serve as a cautionary tale for women too. Just because we're not primarily the ones who do this doesn't mean we should act like we could never end up doing it. That's just a highway to pride.
Furthermore, statistics like that should not be taken at face value. "Women are at fault for divorce!" Initiating the divorce, perhaps, but she may have been the one who fought for the marriage until she couldn't anymore. There's more behind the scenes than just the statistics.
#there's less of a point to this#and more just rambling thoughts#I'm not trying to just swing the other way and say that marriages ending are men's fault#rather pointing out flaws in the logic of those who would use statistics to argue for gendered hierarchy#sure they may swing in the guys' favor in terms of values#but refusing to dig deeper and question whether or not men are actually a part of the problem is a point of pride#if you want a traditional marriage#you still have to acknowledge that both people in the marriage are human and both have different needs#both those needs need to be met
7 notes
·
View notes
Note
I love seeing actual pro life accounts call out and go after americans, because they can be such hypocrites. They tell us how much they believe in protecting unborn babies and the sanctity of life, but they'll yell, scream, and protest against gun control when they have their regular school shootings. Like, your babies aren't safe once out of the womb; they're getting killed in their schools, grocery stores, movie theaters, and malls. But they'd rather protect gunmen than their innocent babies.
Give it a few years and we will read about Americans shooting up nursery wards and delivery rooms. But the American pro lifers will turn a blind eye again because muh guns
I think our voice as non-Americans needs to be heard 🩵🩵 Tumblr is a mostly USAmerican platform, so our iconoclast outlook is more relevant than ever
It's so funny when I interact with pro gun Christian USAmericans who are so used to using their amendments, "castle law," or "natural right" charade to back up their stance that they completely lose touch when they face someone like me (not American) who's absolutely unfamiliar with those concepts.
I'm French, and you won't see me trying to explain how abortion is actually compatible with Christianity through the lens of the Declaration of the Rights of Man (a French creation, which is btw infinitely more relevant internationally than American amendments no one else cares or knows about). ...That's how demented USAmericans sound when they try to defend the idea that gun violence is compatible with Christianity through the lens of their laws/amendments.
God doesn't give special treatment; He wouldn't allow USAmerican Christians to kill while everyone else can't.
The mental gymnastics of pro gun, anti abortion Christians are also pretty funny to witness. They'll cope, saying "hMuMmmm b-but we don't want to kill people uwu" - yeaah, so tell me why you guys are advocating to make the most lethal weapons (firearms) available to all? If the point wasn't to kill, yall would be campaigning for tasers or pepper spray. Get real.
And you're absolutely right about their hypocrisy concerning children. Look at Sandy Hook. Look how hard conservatives tried to silence us, saying it was "too early" to talk about the gun problem and that we were weaponizing the tragedy (when they had no problem doing the same when a trans person attacked a Christian school 🤡); it got so bad they had to make up an entire tinfoil theory arguing that aKtcHualLy no children were dead.... Liberals are absolutely right to drag them for caring about life only when it comes to fetuses and not living, breathing, school-attending children....
Their logic is precisely the same as the abortionists': the system is flawed and hurts people...? We have to throw it away altogether instead of... you know, finding solutions to fix it.
ABORTIONISTS: "abortion is the only solution to deal with pregnancy issues!! there's no point in trying to improve the social system & healthcare so no woman feels compelled to resort to abortion"
PRO GUN: "MORE guns are the only solution against mass shootings! there's no point in investing in a decent system of police officer training & social prevention so that we reduce gun violence to a minimum"
Both statements are equally deranged. Why couldn't we find upstream solutions that wouldn't involve death or physical harm?
And it doesn't help that they have this obsession with acting like the government is the enemy™️ and that nothing remotely positive should be expected from it...
The fact that they think the average Joe will do better than the average policeman is painfully naive. There's a reason protecting people is a whole job. They act like once every citizen is carrying a gun, there will suddenly be a pax Americana, and all the psychos who are most likely going to snap will be stopped by a good samaritan who happens to be passing by.... When, statistically speaking, more guns around will expose more mentally unwell people to using them. Isn't there a whole epidemic of mental illness in the USA? Oh, my bad, it's only a problem when trans people do a mass shooting and conservatives suddenly care about not putting guns into just anyone's hands 🙃
#mind you those ppl want teachers to carry as well#what are they going to say once teachers start shooting their own students bc they couldn't contain shit in the classroom?#answered
8 notes
·
View notes
Note
q. 3, 16-19, 26, 28, 32, 34-36, 38, 40, 44, 51, 53, 58, 66, 73, 85, 96, 97, 100 please? (feel free to not answer, I'm aware these are way too many questions but consider! I had to choose from 100 when I just wanted to ask `em all- again no pressure to answer all of these)
TUMBLR FUCKED UP THE NUMBERS FORGIVE ME
1. Do you really think there is somebody for everybody? That presumes everyone wants someone. Which I don't think is true. If you do want someone… well, there are, what, eight billion people? Statistically your odds of finding someone are pretty good. But if you want something you can't just want it. You have to take steps to make it happen. And that might mean compromise or action.
2. Do theoretical ethical debates have any value? Is it important people discuss ethical dilemmas, e.g. the trolley problem? I think a lot of pressing social issues started as a "theoretical ethical debate" in one circle or another. "Should we free the slaves?" was definitely a thought experiment before it was a reality. If we go real abstract, like "What if reality is a simulation, does that change anything?" etc. - shit we can't really know - then I still think there's interesting stuff to pull. And philosophy has contributed a lot to different parts of society. Thinking is good.
3. Did you have imaginary friends? Do you still have them? Lol I think lots of people keep their imaginary friends in some way or another. "God" or "my roleplay blog" or "my online persona" or "my sexually dominating side" or "my OCs in my WIP" etc. etc. Whatever you call them, I think it's kind of natural?
4. Are you religious? Do you think your religion is ‘correct’? I was raised a polytheist. I’m a form of Pagan in practice. I believe there are lots of gods. I don’t really bother with asking if another religion is “real” or not. If someone believes something, then that’s fine. But I think you should be prepared to argue your point. People use religious belief to justify stances all the time. So you should be prepared to argue why your religion is good or right if you’re going to use “well my faith says…” as your justification.
5. If you aren’t religious, do you wish you were? Why? Religion is interesting, I could talk about it for ages, but I’m glad I’m not more religious. I think if I couldn’t take science into account when asking certain “should we/shouldn’t we?” questions that’d suck.
6. What’s the most life-changing choice you’ve made so far? Probably to transition. I transitioned after high school. Which in some ways was a blessing but I got doctors, I changed all my documentation, I picked a new name, I had to come out to all my family, my work… It made me very happy but it was huge.
7. Would you want to live forever? How about for a billion years, a million, a millennium, a century? I believe in reincarnation but I think it’s natural to be scared of death. Frankly if I could avoid the gamble I’d rather live. Even if I’m just a brain in a jar. If I can be here that’s better than being nothing.
8. Was your childhood happy? Uhhhhh….?? Um…. Well—Uh—Next question.
9. What are you missing from your life? A way to make money doing things I’m passionate about. I’m still figuring that out.
10. Have you ever met someone who had a very similar personality to your own? Did you get along? I knew someone who was very like me when I was a very different person. They were trash, I was trash, and I’m glad that 1) I changed and 2) we don’t speak anymore. I was a fucking psycho.
11. Is your life what you expected it would be five years ago? Totally different in some ways. Exactly as planned in others. I’m happy, so who cares? I made the best decisions I could with what I had at the time.
12. What makes a person 'good'? Are you a 'good person'? "Good" people do good things. Even if they think awful things. "I fed the poor, but I only did it to fatten my ego!" The poor don't fucking care. They got fed. Likewise, people who have good thoughts but don't act on them (aka "well I think gay folk aren't hurting anyone, but God/my church say they're against the bible so I guess—") are bad people.
13. How often do you lie? Is all lying inherently bad? Are you generally truthful? I lie a lot. I think you should. I think we should stop demonizing lying. Most people don’t care about your details. They just need the gist to get the job done. And if it doesn’t involve someone, they don’t deserve to know anything. It would be truthful to tell your landlord you got a promotion, but they’re not entitled to that information. You should be as honest as you feel comfortable being. Even if that’s not very honest at all. And, yes, there are consequences. You have to deal with those.
14. What question could you ask to find out the most about a person? One question? Oh man that’s hard… probably “who do you love most?” Because do they answer themselves? Their partner? Their parents? There’s info in all the options.
15. Which of your beliefs is most likely to be wrong? "Eat the Rich, literally" will not actually solve anything. But I think cannibalistic revolution has been overlooked too long as a viable option.
16. Are we eventually going to ‘run out’ of new combinations for music, art, language, etc.? Is there a limit to human creativity? Never. I mean you could argue we already have. Isn’t everything just an old story repackaged? Or an old song dressed up? I think the ‘when’ and ‘where’ something is released is as powerful as ‘what’ it is.
17. How do you feel about the idea 'an eye for an eye'? My dear friend has a policy: "do no harm, take no shit." And I abide by that. I think you should care about people, I think all life is improved when we improve the rights/conditions of others, but if you start shit I'm going to end it.
18. Would you fight for your country? Do you feel a sense of loyalty to your nation? My country? Eeeeeh my country is flawed. My rights? My way of life? My home? Yeah, sure. It’s not about the flag.
19. Do you think you would be happier if you had been born a different gender, sexuality, race, ethnicity, nationality or religion? I think I got pretty damn lucky. My life would be MUCH harder if even one of those factors was altered. Yeah, I’m trans but it could be waaaay worse.
20. Is your perception of yourself similar or the same to how others perceive you? People are usually a lot nicer about me than I am to myself. I like that. Generally, I like myself too but it’s not always easy. More than once my relationship with my body or mind has been purely antagonistic.
21. Are you overly analytical? I guess? I have too many opinions lol. Things would be simpler if I cared less.
22. What belief do you have that isn’t logically grounded, but you still firmly believe in? I’m a Pagan. And I hate the “these essential oils cure cancer!” stuff but I definitely believe you can curse someone. And I get that’s a little crazy lol.
3 notes
·
View notes
Text
@learn-tilde-ath
Perhaps anon isn't saying "it would be correct to view it in the opposite way" so much as "isn't this flawed like the opposite would be"? Like, uh, what if neither coalition is particularly agentic, and there's not really anyone at the wheel?
To continue the grumpy post with another long, grumpy and uncharitable post (again ‘J’ key to skip),
“You can’t be racist against white people, because racism is prejudice plus power.”
By itself, that statement should be enough to realize that Social Justice is ideologically corrupt and that it shouldn’t be given power, including by listening to and following its pronouncements on who is and isn’t “racist.” I considered that obvious the first time I encountered this stuff. I’ll explain how.
The statement is incredibly racist. Or rather, because Left/Libs determine what “racist” is and have decided as a group that it isn’t racist, the statement is incredibly Alternative Racist, or alt-racist.
To end racism, it was supposed to be the case that no one was going to be allowed to, well, do racism. It’s true that “white people” “face less systemic oppression” in general. It is not true that they either never face systemic oppression, or that they never could face systemic oppression in the future. “White people” being allowed to call something “racism” is part of how they’re supposed to prevent the emergence of organized racism against them in the future, if it should come to that, and part of how they enforce the agreement to prevent the agreement’s gradual erosion. Removing the ability to declare racism just because it “isn’t needed” fundamentally disrespects the personhood of “white people” as a party to the agreement.
It is a violation of the agreement, and a pretty major one.
It doesn't matter if "but it's a sociological term." We all know that "racism" in common use most certainly isn't a sociological term, and that this redefinition is based on bad-faith strategic equivocation to leverage the emotional load of the existing term "racism."
Just about everything in Social Justice is like this.
Let’s take another example. “Lived experiences.” You aren’t supposed to dismiss “lived experiences.” I get what this is reacting against, which is also a logical fallacy - “statistically your group suffers less of this, therefore it can’t have happened to you”. However, it’s still bad epistemology. “Oppressed people” are said to have special information that overrides and is more important than statistics, and which “privileged people” fundamentally can’t understand.
Special information that outsiders can’t understand even if you tell them? That one can’t verify from observations? That sounds like a security exploit for your brain, doesn’t it?
All it takes is to apply just a small assumption of the possibility of bad faith.
“Privileged people can’t see their own privilege.”
Same deal. It’s not hard to notice. Why didn’t they?
Each one of these statements is questionable in itself, but of course they’re much worse when taken in the context of all the other statements.
Take, for instance, “all white people benefit from white supremacy.” (If you mean all white people benefit from e.g. throwing innocent black people in jail then no, that’s begging the question of their guilt. It costs almost one median national income to keep someone in jail, so unless you hate black folks already, keeping someone who wasn’t going to commit a crime penned up for the benefit of drug-planting Racist Louisiana Sheriff B. Adolf Higgins is a huge waste of money that just makes other people angry at you and your government.)
There's no point making that kind of statement unless you're trying to pin collective moral liability on the basis of race, presumably under the (broken) assumption that whoever you're making the statement to can put pressure on Sheriff Higgins, even though Sheriff Higgins was already considered in the wrong at that time. It essentially presumes a racial command and control infrastructure. Going with the high-contextualizer mode that we're supposed to apply to racism claims in order to detect subtle, hidden racism (otherwise, why even say it?), it's a claim that "you're receiving stolen goods, therefore you need to act (as we say) or you are guilty."
But combine that with the constant talk about colonialism, conquest, and genocide, which are called “white supremacy,” and then combine that with talk about unearned “privilege.” Suddenly we have not only collective moral liability, but collective moral liability going back seventeen generations, extending to actions on multiple continents, many of which were quite bloody. Since “the benefit” is most of an entire continent, then “removing the unearned benefit” implies removing the continent, plus interest, plus an amount of suffering equivalent to every war waged by colonial European powers.
“Well they don’t really mean that,” one might say. But the thing about outwards-facing ideological rhetoric is that the next generation doesn’t get the joke.
It’s quite a gamble vs. just keeping individual moral liability and moral liability by ideological/political groups where membership is fluid. It’s also illiberal.
There are all sorts of other approaches. Free school lunches, afterschool programs, food stamps, and the like can reach the worst off, including when “worst off” is not evenly distributed, but don’t have this kind of spectacular potential failure mode. Likewise, demanding police reforms, even on the basis of being victims of racial prejudice, doesn’t require this.
Back when it came out, I thought it was just a few lunatics on the Internet. Then suddenly, it was everywhere. “Respectable,” “serious” people supported this kind of nonsense that was alt-racist against JAWs, and institutions started working this stuff into their rules. Not the full implications, of course. Just, say, hiring people on the basis of their “diversity” statements, as one UC school did, to take an example. (Or in the gender case, weakening standards of evidence for accusations of sex crimes... but in practice only against men.)
Going to Afghanistan could be interpreted as “reacting to” 9/11. Going into Iraq can’t really. Back during that era, Team Blue were “team” “science and reason” and “better than” those “dumb religious conservatives.” It was argued that they would get better outcomes because they had a better theory of knowledge, as demonstrated by not believing in the supernatural.
Then they went and gave power to an alt-racist cult that should have tripped their internal cult warnings before they even got close to it.
There was going to be a reaction to the first black President not immediately ending all racism, but deciding to attribute that to “white people” (actually JAWs, see e.g. “white-adjacent,” “people of more color”) instead of an ideological group (they have been blaming Republicans as racist for years, why not just keep doing that?) was a choice that they made.
So either they’re less cult-resistant, dumber, or more immoral than my initial judgments of them during the late Bush and early Obama administrations.
If both groups are stupid, evil, and culty, then it's a question of which stupid evil cult most suits my purposes at a given time, including which one is more threatening to me and my long-term interests.
25 notes
·
View notes
Text
Thoughts on JK Rowling's transgender debate
Recently, I read about JK Rowling's transgender comments and the discourse around them. Insisting "sex is real," Rowling argues that the recent transgender movement is erasing the concept of biological sex by ignoring the significance of biological factors in favor of gender, which is a social, not biological, construct. Rowling also insists that trans inclusive bathrooms intrude on the safety of biological women, especially those who have experienced bodily or sexual assaults. There have been several counterarguments to Rowling, the most common of which are "trans women are women," "statistically, trans women are more likely to suffer assaults than to perpetrate them," and "no one says sex isn't real." While I agree with all of those responses to Rowling, as an English teacher, I cannot help but notice the clever rhetorical techniques Rowling uses to frame her argument to evoke fear while appearing rational.
The core of Rowling's rhetorical strategy lies in the combination of exaggeration ("trans people are saying sex isn't real"), a selective supporting example ("by erasing sex, biological women's safety is endangered in trans inclusive bathrooms"), and a sympathetic personal anecdote ("as a victim of sexual assault, I understand and care for women's safety"). Let me be clear: I am not saying that line of reasoning is logical, ethical, or correct. Many have pointed out the flaws in Rowling's argument: no transgender person denies that biological sex is real and matters - rather, gender should be the deciding factor when it comes to rights and recognition; and credible research finds no evidence that trans inclusive bathrooms endanger women. Nevertheless, I think those counterarguments would do little to change the attitude of those who are already inclined to agree with Rowling, because they would probably just say something along the lines of "the very notion of a trans inclusive bathroom is about separating people according to gender, not sex - how can you say you don't deny sex in favor of gender?" and because a personal feeling of fear is more reliable to them than research, since the research may just be "inaccurate or part of the trans propaganda."
I would like to understand the root of the fear that Rowling's seemingly simple line of argument so effectively evokes, regardless of whether that fear is justified. The heart of the discourse lies in the dichotomy between sex and gender, which Rowling succeeds in exaggerating and her opponents fail to elaborate to their advantage. Sex is a biological concept primarily centered on the body and bodily functions, whereas gender is a social construct that depends on performance, mostly outward expressions. For example, many transgender people are drag queens. In some ways, being transgender involves a lot of performing certain outward appearances and (social) functions - through clothing, behavior, gender roles, etc. - to negate the (biological) wrongness within. This idea that sex is bodily and inward while gender is social and outward is certainly not absolute (many gender roles are performed in the private sphere; many gender aspects are performed through sexual and/or bodily acts; many bodily functions occur publicly - at least, according to anti-maskers who protest covering their faces because "I can't breathe"). However, for the most part, I think it is not unreasonable to generalize that biological sex operates primarily via the body and private bodily functions while gender is performed primarily via outward expressions.
Rowling's argument would have fallen flat had she chosen a supporting example set in a space whose nature is about outward performance. For example, if she had said, "trans inclusive catwalks are dangerous to biological female models," people would probably have laughed at her. That's because the catwalk is a space designed specifically to perform appearances and garner social recognition; no one cares about the genitals or X/Y chromosomes underneath the clothes on display. By contrast, a bathroom is very much where one exposes one's private parts and performs basic bodily functions that may be even more private than sex. In a bathroom, one could not care less about gender expressions like clothing or mannerisms; instead, the physical safety of the exposed body, albeit exposed only temporarily, is a real, legitimate concern.
Rowling's effective rhetorical strategy can be summarized as such: step 1, exaggerating a simple concept ("sex is erased") to evoke a sense of danger and urgency; any factual rebuttal ("no one says that") will sound hollow because of step 2, the selective example (the trans inclusive bathroom). Most people will not notice how narrow this example is. Moreover, since most people instinctively understand the bathroom as a space for biological functions, they will take for granted the binary divide based on sex; thus, reconceptualizing it according to gender expression will automatically feel unnatural. Without an explicit reason for a paradigm shift, the cries of "but human rights" and "trans women are women" will sound meaningless, because an abstract concept and a semantic tautology cannot explain why a space designed for biological functions should be repurposed according to what is perceived as an irrelevant social construct. By this time, the sense of danger is validated and intensified into indignation at a perceived injustice ("biological sex is erased where it should not be, endangering bodily safety and privacy"). Finally, step 3, an anecdote about sexual assault to make the feeling of danger more urgent and relatable on a personal level ("I experienced sexual assault, and you may experience it too, so you should care about this danger to your body in trans inclusive bathrooms"). At this point, statistical evidence of the low probability of assaults in trans inclusive bathrooms holds little sway, because if the switch from sex to gender is already unnatural, unfair, and carries such imminent personal risk, why tolerate even a slight possibility of it?
I am not sure if exposing rhetorical strategies and logical fallacies changes any minds, especially those of bigots. However, the debate around Rowling's transgender comments reveals to me the need for a reframing of the transgender argument. It is worth elaborating in which contexts sex matters, in which gender matters, and why; if a setting like the public bathroom, by nature, serves bodily functions, then what the rationales are for a paradigm shift from sex to gender; given these rationales, how they weigh against the costs (such as the discomfort of women like Rowling who experienced trauma from bodily assaults and prefer to keep their "safe spaces") and/or any alternatives; and lastly, if the costs are real, how to mitigate them (this is where the statistics may come in handy, but only if one has gone through elaborating the previous questions).
In human societies, spaces have different natures and functions. Therefore, convincing people that trans people should be allowed in their preferred public bathrooms is fundamentally different from convincing people that trans people should be allowed, for example, on catwalks or in the military. A one-size-fits-all generic argument will not suffice. In particular, the public bathroom divide based on sex or gender initially seems simple but proves to be quite nuanced. Why should a gender construct matter in a space designed for biological functions? What social and gender meaning do public bathrooms carry (if this debate proves anything, it is that even as a space designed for biological functions, not gender ones, public bathrooms carry enough social value for people to fight to preserve or reconstruct their meaning)? Between transgender people, for whom entry into public bathrooms carries social acknowledgement and confers personal worth, and opponents of trans inclusive bathrooms, for whom those bathrooms hold social meaning precisely because of their binary assignment of biological bodies, who carries the burden of proof?
1 note
·
View note
Text
More on Street Epistemology
somebodysittingthereallthetime:
Sorry this turned into a bit of a rant. TLDR is: I think that all your arguments against SE work even better against your own approach. So I just don't really understand you at this point.
Not only does everything you say about SE indicate that you know close to nothing about it…
(You did not answer the question about how many actual SE conversations you've seen. Of course, not seeing any wouldn't make your opinion invalid, but if you just continue saying things that are simply not true, we're both wasting our time here. For example, you say that "most people are devoted to comfortable opinion rather than facts and truth. SE advocates don't recognize that." That is simply not true. Many SE conversations touch on the question of whether the IL believes what they believe because it makes them feel nice or because of something else. Truth vs. comfort is a recurring question.) …but your last comment shows me you don't really understand the backfire effect either. "The backfire effect occurs when, in the face of contradictory evidence, established beliefs do not change but actually get stronger." Of your 5-point strategy, the first two points are also used by SE, but the last three are exactly what causes the backfire effect. The mere situation of you saying you know something better than them is what causes it. The only systematic way around it that I know of is SE.
Here's the thing. Your defense of doing it for the audience is something I kind of agree with. Public debate is different from SE. Both can be useful. But I think your total dismissal of SE is entirely baseless. Basically, what you said is that people are not rational enough for SE. That's just your impression. How is that a good argument? Do you have any data on that? The backfire effect study is at least some data we have. But what really strikes me is that you say your style works best while it requires reading a lot, understanding complex reasoning, and so on. Are people rational enough for that? That's a huge contradiction. Same with your line "most people are devoted to comfortable opinion rather than facts and truth. SE advocates don't recognize that."
So how are your facts and truth any more help than SE? All your arguments condemn your own style even more than they condemn SE.
Your lines again: "I don't and never will have faith in people's capacity to reason" - yet later you say your strategy of showing people reasoning is guaranteed to work. How is that not a contradiction?
"All it takes is one question: might I be wrong?" Yes, I totally agree, but if you're "slamming the truth on their table and compelling them to respect it," that's where the backfire effect comes in. There is a less direct way of leading people to that question, and it seems more effective if they formulate the question themselves. Just imagine the two situations: you ask someone, versus you lead a conversation in a way that makes them feel comfortable asking it themselves.
Peter Boghossian coined the term and made it famous, and now agnostic atheists think Street Epistemology (SE) is the only valid approach. It’s not. In fact, you mischaracterize my approach and what I said in order to drive your points. You act as though there wasn’t context to what I said, so let’s get right to it.
The accusation that I don’t understand x or y is the apologist’s strategy. You don’t care about the right approach; you only want to prove that SE is right and so the accusation is used not once but twice. I outlined SE well enough for you to say that my approach incorporates two points in the SE approach. Then you say I don’t understand the backfire effect -- even though I defined and explained it. The fact that I anticipate it and don’t care to prevent it doesn’t mean I don’t understand it. I want it to happen to the point that someone loses their cool so that onlookers can see just how helpless the view in question actually is.
If your view is demonstrably superior, there will be no need to curse someone out, threaten them, or even threaten to kill them. The fact that I've driven people to that point says a lot about Christianity, Islam, and right wing politics. I've never been driven to such a point because my beliefs don't get stronger when confronted with contrary evidence; I update them in accordance with the evidence and that's the norm for rational people. Look at scientists, mathematicians, engineers, and philosophers. Generally speaking, their beliefs don't get stronger when confronted with evidence their view can't or doesn't account for; they adjust their beliefs when faced with such evidence. It's how progress is made in any of those disciplines. Rational people will readily identify the effect and realize that the person guilty of it has a false belief or mistaken point of view; the backfire effect is a good indicator for deciding which view is right, so it's not something I seek to prevent.
Then there’s this issue of contrary evidence. Who decides what such evidence is? This is a matter calling for objectivity, so rational individuals in an audience can say “homologies do pose a threat to the Intelligent Design view and yet, the ID advocate is doubling down and even getting testy; the guy arguing in favor of evolution is likely to be right.” Now, an ID advocate can say that Piltdown Man is contradictory evidence. Who decides that? Anyone who looks at that specific case and realizes that a) it was a hoax and b) it was shown to be a hoax by scientists and not creationists will realize that this isn’t evidence at all; that’s aside from the fact that Piltdown Man doesn’t suddenly debase the massive human fossil record we have! An audience is much more likely to have rational members than for any one person to be rational.
When speaking to pro-lifers, I don’t have to write profanity laced responses threatening to hurt pro-lifers. That makes no sense. The fact that they’re easily driven to that point in one discussion after another is good indication to onlookers that pro-lifers are wrong. The backfire effect is useful to me, so I don’t want to prevent it.
In any case, you say I contradict myself -- which would be correct if there weren't context. People, generally speaking, are irrational. I did, however, grant that some people are rational. I also said that my concern isn't the individual I'm debating, but rather the audience. I trust that some people in the audience are rational and, therefore, can understand my arguments. So there's no contradiction -- only one of your own making and for your own convenience. For SE, you need most people to be rational in order for your one-on-one approach to work. The person you're speaking to has to be able to see and comprehend the flaws in their own reasoning. I don't trust people, in general, to be able to do that -- hence my aversion to one-on-one discussions. I trust that members of the audience will have these capacities, hence making my approach more efficient and effective.
As for data, you're being disingenuous. That the so-called emotional brain is more prevalent in our decision-making is well-established. That we make reckless decisions when sexually aroused is well-documented. Most men wouldn't consider sex with underage girls when not aroused. Ask them again when aroused and suddenly they consider sex with underage girls and even women they regard as unattractive. This trend is similar in women as well. Bids in auctions are anchored by random digits.
Never mind all of the cognitive biases we come prepackaged with. Consider the endowment effect -- the notion that something is of more value just because one owns it. Then there are the logical fallacies people are prone to -- a common one you commit here: the straw man. People, generally speaking, are irrational and need to learn how to reason and think logically. I don't trust any one person to have that capacity; that has to be demonstrated to me first and once it is, then I wouldn't mind a one-on-one discussion. I keep good intellectual company, people I speak one-on-one with about a number of things; we keep each other sharp and polish each other's ideas and arguments. In an audience of an unidentified number, a decent proportion is likely to be rational, and that's why I have public exchanges rather than private ones. I've even made private exchanges public, so that a potential audience can benefit from them.
This is why I’m having this exchange with you, in fact. Anyone who is rational will see that you straw manned what I said in order to say that I contradicted myself. The record clearly shows that I didn’t because I didn’t say all people are irrational. Also, consider who we’re talking about!
We’re talking about people who believe Iron Age goat herders were right about the entirety of the universe: how it came to be; how life came to be; the meaning of human life; whether or not there are aliens and whether or not they’re intelligent; what can and cannot exist. I don’t expect anyone with such breathtaking arrogance to be rational and, for the most part, believers aren’t. They’re fond of rolling out all sorts of fallacies and cognitive biases, most especially ad hominem, ad hoc reasoning, straw man, abusive fallacy, confirmation bias, special pleading, non sequitur, statistical fallacies like Hoyle’s fallacy, cognitive dissonance, etc. That’s apart from omitting evidence, ignoring contrary evidence, and falsely believing they’ve provided necessary and sufficient conditions for anything they might define, e.g., human being in the context of the abortion debate. I don’t trust a one-on-one discussion with people like that; the audience is what matters and these six years have proven that to me conclusively.
You can go on about the effectiveness of SE, but it's not as effective as, and certainly not more effective than, the approach I use. I rely on a number of potentially rational people to decide which case is irreparably flawed and which is better and true. You rely on one person to be rational, though it's highly probable the individual is simply not. Especially in the case of religious believers: they don't want to know what's right; they already believe they've located the truth and can be given no reason to seek it, unless they learn how to reason and employ logic more effectively -- and that usually happens when they're part of an audience: an online forum, a public debate, a classroom. Members of an audience are much more likely to have already gotten there and are thus on the cusp of renouncing faith, or are sitting undecided on the fence due to a lack of information or even fear. My approach is undoubtedly more effective in disabusing them of this lack of information and fear, for example, of Hell or god's wrath. SE puts undue focus on one person who is (highly) likely irrational.
I think this is the deciding point: it is much more likely for an audience to have rational members than for any one individual to be rational, especially to the degree that they can see the flaws in their own reasoning. This is why my approach works better than SE and always will. SE would work in Kant's Kingdom of Ends, where all members are rational beings. This isn't Kant's Kingdom of Ends. The United States today is far from that! About 40% of people deny evolution; roughly the same figure deny climate change; even more believe fetuses deserve rights over and above women; something around 50% believe in psychic phenomena and the paranormal. This is no kingdom of rational beings.
There are demonstrably fewer rational people in this country than irrational people; it's unfortunate but no less true, so SE will fail to work more often than not. My approach relies on the much more probable rationality of some members within a crowd; that's why I've changed minds. All I see in SE conversations is "you can't pretend to know what you don't know" -- which amounts to an accusation in some cases, because sometimes a person does actually know what they're purporting to know and can show you how they came to know it. SE pretty much guarantees that the discussion devolves into a frustrating exercise; my approach doesn't guarantee that, because I'm engaging through a medium rather than the audience directly, and as such, audience members are much less likely to become frustrated.
How did my journey to atheism begin? I was a member of an audience -- in a classroom, to be more specific. I didn't engage in a one-on-one discussion with a well-informed atheist. Being in an audience and having an inclination to rational thought catalyzed my journey. I'm certain that most atheists will relate to my experience and will not say that an SE-like one-on-one discussion started it all. Heck, I'm reasonably certain an SE discussion wasn't the impetus of your journey either. What has worked for most of us will continue to work for future members of our community. SE simply won't fare better, for the reasons outlined.
9 notes
·
View notes
Text
WHY FACTS DON'T CHANGE OUR MINDS. NEW DISCOVERIES ABOUT THE HUMAN MIND SHOW THE LIMITATIONS OF REASON. 02/19/2017.
— By Elizabeth Kolbert.
“In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones. Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances. As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.
In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.
“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.” A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.
Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from. The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?
In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context. Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.
“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective. Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.
The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile. If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”
Mercier and Sperber prefer the term "myside bias." Humans, they point out, aren't randomly credulous. Presented with someone else's argument, we're quite adept at spotting the weaknesses. Almost invariably, the positions we're blind about are our own. A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.”
— ELIZABETH KOLBERT.
0 notes
Text
IAT
My friend and I got into a discussion about six months ago regarding the implicit bias tests put out by Harvard and their validity. He questioned their validity and had some articles to bolster his argument, and I disagreed with him. I thought folks might be interested in a few of my points.
In response to this article - https://www.thecut.com/2017/01/psychologys-racism-measuring-tool-isnt-up-to-the-job.html
and this one - https://digest.bps.org.uk/2018/12/05/psychologys-favourite-tool-for-measuring-implicit-bias-is-still-mired-in-controversy/
Here is what I said:
“what's up man,
so i read the two iat articles you sent me and found them interesting - so cool to be in grad school and be exchanging ideas on all sorts of things. i did want to get back to you and say that i read those pieces and looked at several scientific articles too (mostly by john jost and collaborators of the iat's developers, but also investigators not affiliated with them). i maintain my position from yesterday that the iat is rigorous and that its structural framework can get at implicit biases. further, i would argue that there is a lack of sound logical integrity as well as generally flawed reasoning in the critiques of the iat you sent me. i'd love to share some of these thoughts as well as some studies and meta-analyses (and brief thoughts on these too) that look at associations between implicit bias and behavioral outcomes. sorry about this long email and inconsistent punctuation haha, but here are some of my personal opinions.
addressing the article from the cut first: i admit that it looks like the developers of the iat exaggerated the predictive powers of the iat if they said that it can shed light on "unconscious endorsements" people make of certain groups. this article goes on to flesh out this position and discuss how it is familiarity with certain stereotypes rather than actual endorsements of these stereotypes that can cause, for example, activists to score as high on these tests as non-activists. here are some quotes i've bolded:
"
experimenters were able to easily induce what the IAT would interpret as “implicit bias” against Noffians simply by forming an association between them and downtroddenness in general."
and also "Andreychik and Gill found that for those students who endorsed external explanations for the plight of African-Americans or a novel group, or who were induced to do so, high IAT scores correlated with
greater
degrees of explicitly reported more compassion and empathy for those groups. For those who rejected, or were induced to reject, external explanations, the correlation was exactly reversed: High IAT scores predicted lower empathy and compassion. In other words, the IAT appeared to indicate very different things for people who did or didn’t accept external explanations for black people’s lower standing in society. This suggests that sometimes high IAT scores indicate that someone feels high degrees of empathy and compassion toward African-Americans, and believes that the group hasn’t been treated fairly. Now, it could be that such people
also
have high amounts of implicit bias, but it’s striking how easily IAT scores can be manipulated with interventions that don’t really have anything to do with implicit bias." "So the question of whether the IAT measures something that can be fairly called
animus
, in the sense of being a preference (in this case, an unconscious one) for one group over another, rather than familiarity with stereotypes, is
anything but
“ill-posed”. "
Blanton said that he has never seen a psychological instrument in which less statistical noise predictably biases the results upward or downward. “What should happen is that as you remove random noise, you just get a better estimate of [the thing being measured],” he explained. Blanton provided a surprising example of how this plays out in test sessions, according to his team’s math: If a race IAT test-taker is exactly 1 millisecond faster on each and every white/good as compared to black/bad trial, they “will get the most extreme label,” he said. That is, the test will tell them they are extremely implicitly biased despite their having exhibited almost zero bias in their actual performance. That’s an extreme example, of course, but Blanton says he’s confident this algorithmic quirk is “affecting real-world results,” and in the Assessment paper he and his colleagues published the results of a bunch of simulated IAT sessions which demonstrated as such."
"To be sure, there’s no perfect psychological instrument. They all have their flaws and shortcomings — sometimes maddening ones. But there may not be any instrument as popular and frequently used as the race IAT that is as riddled with uncertainty about what, exactly, it’s measuring, and with the sorts of methodological issues that in any other situations would cause an epidemic of arched eyebrows. “What I’ve been convinced of is it’s very difficult to break down the origins of these associations,” said Elizabeth Paluck, a prejudice and intergroup relations researcher at Princeton and a co-author on the “Noffians” study. “They can’t be all attributed to personal preference, they certainly come from cultural associations and conditioning.” As for the authors of the internal/external explanations paper, they note in it that “our analysis is perfectly compatible with the possibility that, perhaps for the majority of people, implicit negativity is likely to be prejudice-based.” But even if you accept that, it means for a substantial minority of people, the implicit negativity revealed by the IAT isn’t connected to prejudice — which is one reasonable way to interpret those underwhelming meta-analyses."
My contention with this part of the article is semantic in nature, because implicit bias IS familiarity and association between two things rather than any type of endorsement (e.g., if you grow up in the United States, even in the third millennium, you are likely to associate black people with violence and women with domestic life), which explains why openly hateful people and activists who spend a lot of time thinking about these associations might converge on the IAT. It does not matter whether your conscious, explicit biases are positive, or how hard you work to fight your implicit biases (as in the case of activists). The article confuses explicit and implicit bias (probably in large part because the IAT's creators overestimated the predictive powers of the test, as I mentioned, and even made this semantic error themselves). In reality, it is implicit biases that predict how quickly a police officer will pull a trigger when startled by a black civilian who thrusts a hand into a pocket; explicit biases predict how well white people will get along with black people in intergroup settings, because in those situations you have time to reflect on your own prejudices (which the Cut article even addresses and calls "overcompensating"). For more examples of quick reaction times in the context of implicit racial bias, Blink by Malcolm Gladwell has a few good examples (though I'm guessing you've read it lol, and not that I am a huge lover of that book, because I'm not), as do some of the articles I link a few sentences down. Anecdotally (for what it's worth), I noticed that after the BLM movement's resurgence this summer, I was more likely to lunge in fear when addressed unsuspectingly by black homeless individuals in Chicago, because I was implicitly associating black people with violence after those two stimuli were juxtaposed on the news, despite the fact that the police officers were clearly at fault and their black victims were totally innocent.
Also, I do not understand the article's hypothetical argument that if a speedy test-taker is one millisecond faster on the white/good associations than on the black/bad ones, they will get a score suggesting extremely high implicit bias against black people. If a freak statistical anomaly took place in which the test-taker happened to be consistently but slightly slower on the black/bad responses without actually having that bias, then great, cool, but in all likelihood the test would be measuring exactly what it purports to: an unconscious negative feeling toward black people. This also relates to the article's discussion of how important explicit vs. implicit bias is as a target of intervention, and its claim that the police situation at the legal level in Ferguson is reflective of bias. Again, this has nothing to do with the validity of the IAT: a rigorous study would look at correlations between implicit bias and implicit behavior, not explicit biases that can surface within the context of legal proceedings. The question that needs to be asked is whether the associations between implicit bias and implicit behavior are rigorous and significant. Over and over again, we see that they are (links:
https://psyarxiv.com/582gh/
https://journals.sagepub.com/doi/10.1177/1368430215596075
https://psycnet.apa.org/record/2004-21198-003
https://journals.sagepub.com/doi/full/10.1177/0963721418797309
https://onlinelibrary.wiley.com/doi/abs/10.1111/pops.12401
https://journals.sagepub.com/doi/10.1177/0956797617694866
). My favorite example of my point is Horwitz and Dovidio (2015): in this article, the investigators found that a population sample's implicit bias in favor of rich folks predicted that the sample would grant more amnesty to rich folks than poor folks when the rich ones caused a car accident. And the fact that the creators mislabeled what they purported to measure with the IAT (e.g., positive vs. negative feelings toward certain groups) does not mean the test itself is not a rigorous metric of implicit bias.
The other main argument the Cut piece (and, for that matter, the Research Digest piece) makes concerns the reliability and repeatability of the IAT, citing low test-retest correlations of around .4. However, the article does not define the parameters used to assess reliability and measurement error in this context. For example, is the variance between test sessions totally random (does a test-taker score as extremely biased toward black people one week and extremely biased against them the next?), or is it more like slight bias one week and moderate bias the next (within a scheme of multiple sessions across many individuals, of course, with the average shift in scores aggregated or statistically corrected for across many tests)? In the latter case, low reliability could reflect examinees' fear of being perceived as racist upon retaking the test (leading to overthinking and anxiety), consciousness of possible biases that damn them to unwanted prejudices, or "doctoring" the test, i.e., taking it in bad faith, for example by moving more slowly on the white/good associations. Also, the IAT has been shown to be quite reliable compared to other tests that measure the same type of thing (see Jost 2018, one of the articles linked above); compare blood pressure, which is multifactorial in the short term (affected by anxiety, mood, diet, sleep) yet still a stable, valid indicator of chronic cardiovascular health. And regarding the studies mentioned in the Cut article that have truly found low correlations between implicit bias and implicit behaviors, Jost (2018) points out that this has to do with low methodological correspondence and the fact that these studies have rarely adjusted for measurement error.
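On the measurement-error point specifically, the standard tool is Spearman's correction for attenuation: an observed correlation is diluted by the unreliability of both measures, so r_true = r_observed / sqrt(rel_x * rel_y). A toy illustration (the observed correlation and reliabilities below are hypothetical, chosen only to show the size of the effect at roughly the ~.4 reliability the article cites):

```python
import math

def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Spearman's correction for attenuation due to measurement error."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Hypothetical numbers: an observed IAT-behavior correlation of .15,
# test-retest reliability .4 for the IAT, .5 for the behavioral measure.
print(round(disattenuate(0.15, 0.4, 0.5), 2))  # 0.34 - more than double
```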
The final part of the article talks about the harm a potentially uninformative test like the IAT can do by making people feel unnecessarily bad about themselves and by damaging intergroup relations (both irrelevant to the validity of the IAT, by the way). Interestingly, though, the article concedes that the IAT does have the power to do what it aims to do, namely inform people of their unconscious associations; I find that rich, given that the article has sought to debunk the test up to this point. Some quotes: "
So there is nothing wrong with implicit-bias training that covers this sort of research. Nor is there anything wrong with IAT-based trainings which merely explain to people that they may well be carrying around certain associations in their head they are unaware of, and that researchers have uncovered patterns about who is more likely to demonstrate which response-time differences. In situations where one group holds historic or current-day power over the other, for example, members of the in-group do tend to score higher on the IAT than the out-group. Some of these between-group differences appear to be pretty robust, and they deserve further study. These are all worthwhile subjects to discuss, as long as it is made clear to test-takers that their scores do not predict their behavior." "
So it’s an open question, at least: The scientific truth is that we don’t know exactly how big a role implicit bias plays in reinforcing the racial hierarchy, relative to countless other factors. We do know that after almost 20 years and millions of dollars’ worth of IAT research, the test has a markedly unimpressive track record relative to the attention and acclaim it has garnered. Leading IAT researchers haven’t produced interventions that can reduce racism or blunt its impact. They haven’t told a clear, credible story of how implicit bias, as measured by the IAT, affects the real world. They have flip-flopped on important, baseline questions about what their test is or isn’t measuring. And because the IAT and the study of implicit bias have become so tightly coupled, the test’s weaknesses have caused collateral damage to public and academic understanding of the broader concept itself. As Mitchell and Tetlock argue in their book chapter, it is “difficult to find a psychological construct that is so popular yet so misunderstood and lacking in theoretical and practical payoff” as implicit bias. They make a strong case that this is in large part due to problems with the IAT.
Unless and until new research is published that can effectively address the countless issues with the implicit association test, it might be time for social psychologists interested in redressing racial inequality to reexamine their decision to devote so much time and energy to this one instrument. In the meantime, the field will continue to be hampered in its ability to provide meaningful answers to basic questions about how implicit bias impacts society, because answering those questions requires accurate tools. So, contra Banaji, scrutinizing the IAT and holding it to the same standards as any other psychological instrument isn’t a sign that someone doesn’t take racism seriously: It’s exactly the opposite." In this case, it is hard to know what those "standards" are. At this point, it seems the author's main contention is that the IAT's creators misinterpreted the mandate of their own test, which, again, I agree is true (they confused explicit and implicit bias and overstated the power of IAT results to predict behavior driven by explicit bias). However, the article hardly discusses the specific standards in light of which the IAT should be revamped or reinterpreted, standards to which any rigorous psychological testing battery should be subject.”
0 notes
Text
The problem with TEF – a look at the technical failings
Professor DV Bishop outlines the multiple flaws in the TEF methodology
In a previous post I questioned the rationale and validity of the Teaching Excellence and Student Outcomes Framework (TEF). Here I document the technical and statistical problems with TEF.
How are statistics used in TEF?
Two types of data are combined in TEF: a set of ‘contextual’ variables, including student backgrounds, subject of study, level of disadvantage, etc., and a set of ‘quality indicators’ as follows:
Student satisfaction – as measured by responses to a subset of items from the National Student Survey (NSS)
Continuation – the proportion of students who continue their studies from year to year, as measured by data collected by the Higher Education Statistics Agency (HESA)
Employment outcomes – what students do after they graduate, as measured by responses to the Destination of Leavers from Higher Education survey (DLHE)
As detailed further below, data on the institution’s quality indicators is compared with the ‘expected value’ that is computed based on the contextual data of the institution. Discrepancies between obtained and expected values, either positive or negative, are flagged and used, together with a written narrative from the institution, to rate each institution as Gold, Silver or Bronze. This beginner’s guide provides more information.
Problem 1: Lack of transparency and reproducibility
When you visit the DfE’s website, the first impression is that it is a model of transparency. On this site, you can download tables of data and even consult interactive workbooks that allow you to see the relevant statistics for a given provider. Track through the maze of links and you can also find an 87-page technical document of astounding complexity that specifies the algorithms used to derive the indicators from the underlying student data, DLHE survey and NSS data.
The problem, however, is that nowhere can you find a script that documents the process of deriving the final set of indicators from the raw data: if you try to work this out from first principles by following the HESA guidance on benchmarking, you run into the sand, because the institutional data is not provided in the right format. When I asked the TEF metrics team about this, I was told: “The full process from the raw data in HESA/ILR returns, NSS etc. cannot be made fully open due to data protection issues, as there is sensitive student information involved in the process.” But this seems disingenuous. I can see that student data files are confidential, but once this information has been extracted and aggregated at institutional level, it should be possible to share it. If that isn’t feasible, then the metrics team should be able to at least generate some dummy data sets, with scripts that would do the computations that convert the raw metrics into the flags that are used in TEF rankings.
As someone interested in reproducibility in science, I’m all too well aware of the problems that can ensue if the pipeline from raw data to results is not clearly documented – this short piece by Florian Markowetz makes the case nicely. In science and beyond, there are some classic scare stories of what can happen when the analysis relies on spreadsheets: there’s even a European Spreadsheet Risks Interest Group. There will always be errors in data – and sometimes also in the analysis scripts: the best way to find and eradicate them is to make everything open.
Problem 2: The logic of benchmarking
The idea of benchmarking is to avoid penalising institutions that take on students from disadvantaged backgrounds:
“Through benchmarking, the TEF metrics take into account the entry qualifications and characteristics of students, and the subjects studied, at each university or college. These can be very different and TEF assessment is based on what each college or university achieves for its particular students within this context. The metrics are also considered alongside further contextual data, about student characteristics at the provider as well as the provider’s location and provision.”
One danger of benchmarking is that it risks entrenching disadvantage. Suppose we have institutions X and Y, which are polar opposites in terms of how well they treat students. X is only interested in getting student fees, does not teach properly, and does not care about drop-outs – we hope such cases are rare, but, as this Panorama exposé showed, they do exist, and we’d hope that TEF would expose them. Y, by contrast, fosters its students and does everything possible to ensure they complete their course. Let us further suppose that X offers a limited range of vocational courses, whereas Y offers a wider range of academic subjects, and that X has a higher proportion of disadvantaged students. Benchmarking ensures that X will be evaluated relative to other institutions offering similar courses to a similar population. This can lead to a situation where, because poor outcomes at X are correlated with its subject and student profile, expectations are low, and poor scores for student satisfaction and completion rates are not penalised.
Benchmarking is well-intentioned – its aim is to give institutions a chance to shine even if they are working with students who may struggle to learn. However, it runs the risk of making low expectations acceptable. It could be argued that, while there are characteristics of students and courses that affect student outcomes, in general, higher education institutions should not be offering courses where there is a high probability of student drop-out. And students would find it more helpful to see raw data on drop-out rates and student satisfaction, than to merely be told that an institution is Bronze, Silver or Gold – a rating that can only be understood in relative terms.
Problem 3: The statistics of benchmarking
The method used to do benchmarking comes from Draper and Gittoes (2005), and is explained here. A more comprehensive statistical treatment and critique can be found here. Essentially, you identify background variables that predict outcomes, assess typical outcomes associated with each combination of these in the whole population under consideration, and then calculate an ‘expected’ score, as a mean of these combinations, weighted by the frequency of each combination at the institution.
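As a minimal sketch of that weighting step (the combinations of background variables, the sector-wide rates, and the student mix below are all invented for illustration):

```python
# Sector-wide outcome rate (e.g. continuation) for each combination of
# background variables, estimated across the whole student population.
sector_rate = {"young/high-tariff/STEM": 0.92, "mature/low-tariff/arts": 0.78}

# The institution's share of students falling in each combination.
institution_mix = {"young/high-tariff/STEM": 0.30, "mature/low-tariff/arts": 0.70}

# Benchmark ('expected') score: sector rates weighted by the institution's mix.
expected = sum(sector_rate[c] * institution_mix[c] for c in sector_rate)
print(expected)  # 0.30*0.92 + 0.70*0.78 = 0.822
```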
The obtained score may be higher or lower than the ‘expected’ value. The question is how you interpret such differences, bearing in mind that some variation is expected just due to random fluctuations. The precision of the estimate of both observed and expected values will increase as the sample size increases: you can compute a standard error around the difference score, and then use statistical criteria to identify cases with difference scores that are likely to be meaningful and not just down to random noise. However, where there is a small number of students, it is hard to distinguish a genuine effect from noise, but where there is a very large number, even tiny differences will be significant. The process used in benchmarking uses statistical criteria to assign ‘flags’ to indicate scores that are extremely good (++), good (+), bad (-) or extremely bad (--) in relation to expectation. To ameliorate the problem of tiny effects being flagged in large samples, departures from expectation are flagged only if they exceed a specific number of percentage points.
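In code, a flagging rule of that shape might look as follows; the z thresholds (±1.96, ±3) and materiality thresholds (2 and 3 percentage points) are illustrative stand-ins for the published TEF values, and a crude binomial standard error stands in for the full benchmarking computation:

```python
import math

def tef_flag(observed: float, expected: float, n: int) -> str:
    """Toy TEF-style flag: requires both statistical and material significance."""
    diff = observed - expected                       # proportions, 0-1
    se = math.sqrt(expected * (1.0 - expected) / n)  # crude binomial SE
    z = diff / se
    pp = diff * 100.0                                # percentage points
    if z >= 3.0 and pp >= 3.0:
        return "++"
    if z >= 1.96 and pp >= 2.0:
        return "+"
    if z <= -3.0 and pp <= -3.0:
        return "--"
    if z <= -1.96 and pp <= -2.0:
        return "-"
    return "(unflagged)"

# The same 3.5-point shortfall vanishes at a small provider and
# becomes a double negative flag at a large one.
print(tef_flag(0.787, 0.822, n=80))      # (unflagged): z is only ~ -0.8
print(tef_flag(0.787, 0.822, n=20000))   # --
```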
This is illustrated for the case of one of the NSS measurements in Figure 1, which shows that the problem of sample size has not been solved: a large institution is far more likely to get a flagged score (either positive or negative) than a small one. Indeed, a small institution is a pretty safe bet for a silver award.
Figure 1. The Indicator (x-axis) is the percentage of students with positive NSS ratings, and the z-score (y-axis) shows how far this value is from expectation based on benchmarks. The plot illustrates several things: (a) the range of indicators becomes narrower as sample size increases; (b) most scores are bunched around 85%; (c) for large institutions, even small changes in indicators can make a big difference to flags, whereas for small institutions, most are unflagged, regardless of the level of indicator; (d) the number of extreme flags (filled circles or asterisks) is far greater for large than small institutions.
Problem 4: Benchmarking won’t work at subject level
From a student perspective, it is crucial to have information about specific courses; institution-wide evaluation is not much use to anyone other than vice-chancellors who wish to brag about their rating. However, the problems I have outlined with small samples are amplified if we move to subject-level evaluation. I raised this issue with the TEF metrics team, and was told:
‘The issue of smaller student numbers ‘defaulting’ to silver is something we are aware of. Paragraph 94 on page 29 of the report on findings from the first subject pilot mentions some OfS analysis on this. The Government consultation response also has a section on this. On page 40, the government response to question 10 refers to assessability, and potential methods that could be used to deal with this in future runs of the TEF.’
So the OfS knows they have a problem, but seems determined to press on, rather than rethinking the exercise.
Problem 5: You’ll never be good enough
The benchmarks used in TEF are based on identifying statistical outliers. Forget for a moment the sample size issue, and suppose we have a set of institutions with broadly the same large number of students, and a spread of scores on a metric, such that the mean percentage meeting criterion is 80%, with a standard deviation of 2% (see Figure 2). We flag the bottom 10% (those with scores below 77.5%) as problematic. In the next iteration of the exercise, those with low scores have either gone out of business, improved their performance, or learned how to game the metric, and so we no longer have anyone scoring below 77.5%. The mean score thus increases and the standard deviation decreases. So now, on statistical grounds, a score below 78.1% gets flagged as problematic. In short, with a statistical criterion for poor performance, even if everyone improves dramatically, or poor performers drop out, there will still be institutions at the bottom of the distribution – unless we get to a point where there is no meaningful variation in scores.
Figure 2: Simulated data showing how improvements in scores can lead to increasing cutoff in the next round if statistical criterion is adopted.
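For anyone who wants to reproduce the effect, here is a small simulation along the same lines, using the parameters stated above (mean 80, SD 2, bottom 10% flagged); the exact cutoffs will vary slightly with the random draw:

```python
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(80.0, 2.0, 100_000)   # round 1: mean 80%, sd 2

cutoff = np.percentile(scores, 10)        # flag the bottom 10%
print(f"round 1 cutoff: {cutoff:.1f}")    # ~77.4

# Round 2: the flagged institutions improve, game the metric, or close,
# so nobody scores below the old cutoff any more...
survivors = scores[scores >= cutoff]

# ...yet a purely statistical criterion still condemns a new bottom tier.
print(f"round 2 cutoff: {np.percentile(survivors, 10):.1f}")  # the bar rises
```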
The bottom line
TEF may be summarised thus:
Take a heterogeneous mix of variables, all of them proxy indicators for ‘teaching excellence’, which vary hugely in their reliability, sensitivity and availability
Transform them into difference scores by comparing them with ‘expected’ scores derived from a questionable benchmarking process
Convert difference scores to ‘flags’, whose reliability varies with the size of the institution
Interpret these in the light of qualitative information provided by institutions
All to end up with a three-point ordinal scale, which does not provide students with the information that they need to select a course.
Time, maybe, to ditch the TEF and encourage students to consult the raw data instead to find out about courses?
from CDBU http://cdbu.org.uk/the-problem-with-tef-a-look-at-the-technical-failings/
0 notes
Text
How Public Is Education?: CPAA Final Analysis
Allison Haubenstricker
Joe Lewis
English 111
13 November, 2017
In a society where advancements are being made across all cultures and topics, a few topics remain a bit more controversial than others. Education is constantly being questioned about whether or not it is fairly distributed to students across the United States. Over the course of this paper I will be analyzing different sources that try to answer that question. I find this topic very interesting and important because I am pursuing a degree that requires me to go to college for eight years and I have two young siblings that will be going through public school education for the next four years.
Nowadays, people all across the world use the internet as their main news outlet. One source on the topic of education is a video by the National Civil Rights Museum entitled "Education Equity: Education is a Civil Right." This is a stance formed by an entire organization, not just by one individual like most pieces of writing, which helps it seem far more organized and concrete. The National Civil Rights Museum created this video in order to help others learn from history, like all museums do. In this instance, however, they are trying to actively bring about a change to the education system by asking people from struggling areas to act as mentors to students who have the ability to do better. The group references multiple facts, and focuses on the low-income families and schools that can't provide for their students as well as other public schools can. Despite the fact that there is no narration, the video is still effective because the viewer is forced to read the captions across the screen, which makes them more connected to what they are reading than if someone were simply lecturing to them about the same subject. The video draws the audience in with a hook by asking the viewer to place themselves in a situation, which pulls them into the conversation before it gets more in-depth. It then appeals to the audience's logos, or logic, by supplying multiple facts that show how truly unfair the education system is. The facts soon turn somber and provide an appeal to pathos, influencing the audience to feel for the large number of people affected by these statistics. This video provides multiple examples of how the education system in the United States is lacking when it comes to supplying fair education to those who can't afford more. It focuses on how the wage gap is also creating an education gap, and how people from high-poverty neighborhoods really need to step up in order to boost the importance of education.
An article with a similar stance attributes different reasons for this phenomenon in schools. Titled “Equity and Equality Are Not Equal,” writer Blair Mann explains that despite the fact that the education system is, in fact, very close to equal, it does not work to help those who need more in order to get the same education. The author worked for this blog called “The Education Trust” before retiring. She has a degree in political science from Philadelphia University and a master’s degree from George Washington University in public policy. After graduating, she worked in Washington D.C. for a U.S. Representative. In this article she addresses not only people who believe that education is equal, but also those that believe that it isn’t. Instead of actively trying to seek her reader’s support, Ms. Mann tries to get her reader to understand the flaw in many people’s thoughts on the education system. The majority of her paper is an appeal to logos, explaining how the education system is not applying basic principles that would be better for those that are struggling. To end the article she uses appeal to ethos by supplying her professional information at the bottom of the page, making it rather easy for the reader to see what an expert she is. Instead of trying to get the reader to see things her way, the writer focuses on trying to level the playing field and simply explaining the difference between two concepts. The writer is able to help the audience understand that because these different parties cannot communicate well, they are fighting for rather useless reasons. Many people debate between whether or not the education system is equal mainly because they don’t view the term ‘equal’ the same way. Ms. Mann introduces a different idea, 'equity,’ which really means that different amounts of help are given to people so that in the end, they are all on the same level. She also clarifies that equality is simply putting in the same amount of effort for everyone while ignoring their circumstances.
Another video that explores this continuing divide in education is entitled "Ensuring Educational Equity for All Students," by The Leadership Conference. The Leadership Conference is a group that focuses on civil rights for all. As a political group that seeks action, they are trying to get their audience, anyone who is willing to help change education standards, to be active in the civil rights process. Throughout the video, a narrator explains the group's ideas about how the education system is unfair while animations describing what she is saying play on the screen. The video relies mainly on an appeal to logos, or logic, listing fact after fact about the state of public education, such as how often students drop out and how much education they are actually being provided, in order to convince the audience that there is a problem. It focuses on how different children have different educational struggles, such as disabilities and unfair disadvantages like poor school systems. Much like the article written by Blair Mann, this video also argues that instead of applying equality to students, we should be using equity so that it is much easier for them to reach the same level of education.
Out of my three sources, two of them, despite coming from different media, touched on the same idea of applying equity instead of equality. All of them agreed that education is not being fairly distributed to children across the United States, and all of them call their audiences to action. These arguments have helped me realize that the debate over education equality is much deeper than I originally thought. However, the problem is that much of the population still does not want to hear the argument. Many continue to point fingers at other causes for this education gap, and others simply argue that there is no problem because the education system is equal. In a narrow sense, education is equal; however, these articles have made me realize that it has no equity, and the only place we can point the blame is at ourselves.
https://www.youtube.com/watch?v=e_feXDXgBvM
https://edtrust.org/the-equity-line/equity-and-equality-are-not-equal/
https://www.youtube.com/watch?v=CjrFnmeGtL8
0 notes
Text
Why Facts Don’t Change Our Minds
In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.
Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.
As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.
In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.
“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”
A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.
Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.
The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?
In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.
Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.
“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.
Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.
The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.
If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”
Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.
A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.
In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.
This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.
Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder, then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the environment changed too quickly for natural selection to catch up.”
Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently, malfunctions. They begin their book, “The Knowledge Illusion: Why We Never Think Alone” (Riverhead), with a look at toilets.
Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?
In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)
Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.
“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.
This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering.
Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on a map. The farther off base they were about the geography, the more likely they were to favor military intervention. (Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the distance from Kiev to Madrid.)
Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.
“This is how a community of knowledge can become dangerous,” Sloman and Fernbach observe. The two have performed their own version of the toilet experiment, substituting public policy for household gadgets. In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.
Sloman and Fernbach see in this result a little candle for a dark world. If we—or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”
One way to look at science is as a system that corrects for people’s natural inclinations. In a well-run laboratory, there’s no room for myside bias; the results have to be reproducible in other laboratories, by researchers who have no motive to confirm them. And this, it could be argued, is why the system has proved so successful. At any given moment, a field may be dominated by squabbles, but, in the end, the methodology prevails. Science moves forward, even as we remain stuck in place.
In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter, Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain unmoved. (They can now count on their side—sort of—Donald Trump, who has said that, although he and his wife had their son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)
The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.
The Gormans don’t just want to catalogue the ways we go wrong; they want to correct for them. There must be some way, they maintain, to convince people that vaccines are good for kids, and handguns are dangerous. (Another widespread but statistically insupportable belief they’d like to discredit is that owning a gun makes you safer.) But here they encounter the very problems they have enumerated. Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science. “The challenge that remains,” they write toward the end of their book, “is to figure out how to address the tendencies that lead to false scientific belief.”
“The Enigma of Reason,” “The Knowledge Illusion,” and “Denying to the Grave” were all written before the November election. And yet they anticipate Kellyanne Conway and the rise of “alternative facts.” These days, it can feel as if the entire country has been given over to a vast psychological experiment being run either by no one or by Steve Bannon. Rational agents would be able to think their way to a solution. But, on this matter, the literature is not reassuring. ♦
0 notes
Text
TEF: an ill-conceived solution to a wrongly-posed problem
The Teaching Excellence and Student Outcomes Framework represents a lamentable failure to engage in critical thinking, argues Professor DV Bishop
If you criticise the Teaching Excellence and Student Outcomes Framework (TEF), you run the risk of being dismissed as an idle, port-quaffing don who dislikes teaching, is out of touch with students, and is resistant to any transparency or scrutiny. It’s important, therefore, to emphasise that CDBU has ‘teaching’ or ‘students’ in four of its nine aims. Without students, our universities would be nothing, and fostering the intellectual skills of the next generation is one of the most satisfying parts of an academic job. The TEF, however, is an ill-conceived, simplistic solution to a wrongly-posed problem, which – as Norman Gowar noted in his blogpost – risks damaging the higher education sector.
The market-driven ethos of TEF, bad though that is, is not the only issue. Even when evaluated against its own objectives, TEF is a miserable apology for a process, which shows no evidence of the critical thinking that we are encouraged to engender in our students.
I’ve written numerous commentaries on different aspects of TEF over the years; my views are summarised in a CDBU blogpost from last November. As I start to draft CDBU’s response to the independent review of the TEF, I’ve found it helpful to summarise the specific ways in which it fails to meet its goals.
Is TEF needed?
When Jo Johnson first introduced the idea of TEF, he pointed to various sources of evidence to support the view that teaching in our universities was in poor shape and needed shaking up:
a) Student dissatisfaction, as evidenced by responses to the National Student Survey (NSS)
b) Student perception of ‘lack of value for money’ of their degree
c) Employer concerns that students had not been suitably trained for the workforce.
Back in 2015, I looked at the evidence he cited for all three claims, and found there was egregious misrepresentation of published data. As far as I am aware, no better evidence has been produced to support these claims since that time.
Another point was that a focus on research had devalued teaching. This was presented by Johnson without any supporting evidence. Nevertheless, most would agree that greater prestige attaches to research than to teaching, and providing strong financial incentives via the Research Excellence Framework (REF) increases the divide between teaching and research. However, it shows a distinct lack of imagination to conclude that the only way to overcome the adverse incentives introduced by the REF is to balance it with another evaluation system that will generate competing incentives linked to teaching. There are, as I propose below, other solutions.
Does the TEF provide a valid measure of teaching quality?
Even the chair of the TEF panel agreed that the answer to this question is No – Chris Husbands noted that student satisfaction is a poor proxy for teaching quality. Other metrics used in the TEF focus on student outcomes. The modification of the name of the exercise to Teaching Excellence and Student Outcomes is a capitulation on that point, though a more accurate rebrand would be Student Outcomes and Dissatisfaction Framework (SODF).
It is clear from the DfE’s own evaluation of TEF that students don’t understand what TEF is. This is not surprising given the misleading acronym. Two-thirds of those polled assumed that the TEF ratings were based on direct assessment of teaching (p. 85), and 96% endorsed the statement that ‘TEF awards are based on the quality of teaching’ (p. 87). Remarkably, the DfE report treated this as a correct response.
Are students helped by TEF to select a course?
To date, TEF rankings have been made at institutional level, so are not helpful for selecting a course. It’s recognised that what students want is course-level information, and moves are afoot to introduce subject-level TEF. However, the methodology of TEF, bad as it is, gets even worse when the units of assessment involve small numbers. Which brings us to….
Are sound statistical methods used in TEF?
The answer is No. This is a topic I have blogged about previously, and with the release of new data, I’ve been re-evaluating the situation. The issues are fairly technical and difficult for non-experts to understand, which may be why the architects of TEF have ignored advice from the Royal Statistical Society. In the Society’s own words:
“The Royal Statistical Society (RSS) was alarmed by the serious and numerous flaws in the last Teaching Excellence Framework (TEF) consultation process, conducted in 2016. Our concerns appeared not to be adequately addressed by the Department for Education (DfE). Indeed, the DfE’s latest TEF consultation exercise, which will shortly close, suggests that few statistical lessons have been learned from 2016’s experience. As we argue, below, there is a real risk that the latest consultation’s statistically inadequate approach will lead to distorted results, misleading rankings and a system which lacks validity and is unnecessarily vulnerable to being ‘gamed’.”
This topic is large enough to merit a separate blogpost (coming soon!), but the bottom line is that NSS scores are clustered within such a narrow range that it is impossible to draw statistically reliable distinctions between the majority of institutions. The problem is exacerbated when the statistics are based on small numbers, so the idea that the TEF methodology will work at subject level, where cohorts are far smaller, is deeply flawed. There are also concerns about the transparency and reproducibility of the analyses behind TEF.
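To make the small-numbers problem concrete, here is a minimal simulation sketch in Python. Every parameter – fifty institutions, cohorts of eighty respondents, true satisfaction rates clustered around 85% – is my own illustrative assumption, not an actual NSS figure:

```python
import random
import statistics

random.seed(1)

N_INST = 50    # hypothetical number of institutions
COHORT = 80    # hypothetical number of survey respondents per unit

# True satisfaction rates, tightly clustered as NSS-style scores tend to be
true_rates = [random.gauss(0.85, 0.03) for _ in range(N_INST)]

def observed_rate(p, n=COHORT):
    """Proportion 'satisfied' in a sample of n respondents with true rate p."""
    return sum(random.random() < p for _ in range(n)) / n

# Two independent 'survey years' drawn from the very same true rates
year1 = [observed_rate(p) for p in true_rates]
year2 = [observed_rate(p) for p in true_rates]

def ranks(scores):
    """Rank position (0 = top of the league table) for each institution."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    pos = [0] * len(scores)
    for rank, i in enumerate(order):
        pos[i] = rank
    return pos

moves = [abs(a - b) for a, b in zip(ranks(year1), ranks(year2))]
print(f"median rank change between two identical 'years': {statistics.median(moves)}")

# Normal-approximation 95% confidence half-width for a proportion
def ci_halfwidth(p, n=COHORT):
    return 1.96 * (p * (1 - p) / n) ** 0.5

print(f"95% CI half-width at p=0.85, n={COHORT}: ±{ci_halfwidth(0.85):.3f}")
```

With these assumptions, the 95% sampling error (roughly ±0.078) is more than double the entire spread of the underlying ‘true’ rates (standard deviation 0.03), so the league table reshuffles substantially between two ‘years’ drawn from identical true rates. Shrink the cohort, as subject-level assessment must, and the intervals widen further.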
Are there alternative approaches to achieving TEF’s aims?
The answer is a clear Yes.
First, it’s absolutely right that students need good information before embarking on a course, and they should be encouraged by their schools to seek this out. Students may have different priorities, which is why condensing a wealth of information about NSS responses, drop-out rates and employment outcomes into a three-point scale (Gold, Silver, Bronze) is unhelpful. Instead, they should be looking at the specific indicators for the course they are considering. Unistats provides that information, in a way that encourages students to compare courses.
Second, the undervaluing of teaching relative to research in our universities is reflected in the rise of the ‘academic precariat’, which includes a swathe of teaching staff on insecure contracts. Students are being taught by sessional staff who come and go and may not even have an office in the institution. There has always been a place for guest lecturers, or for lectures delivered by early-career staff learning on the job, but it seems that nowadays some students are never taught by experienced staff with a long-term commitment to the institution and its students. Rather than engaging in tortuous and logically dubious manipulations of proxy metrics, we should be publishing information on who is doing the teaching, and what proportion of teaching is done by staff on long-term contracts. If this information were added to Unistats, TEF would become obsolete, and universities would be incentivised to create more security for teaching staff.
Source: CDBU, http://cdbu.org.uk/tef-an-ill-conceived-solution-to-a-wrongly-posed-problem/ (syndicated via RSSMix.com and IFTTT)
0 notes