dareread
Dare to Read
162 posts
Let us tenderly and kindly cherish, therefore, the means of knowledge. Let us dare to read, think, speak, and write. - John Adams
dareread · 7 years ago
Link
“We called the police because she wrecked her room and hit her mom… all because we took her phone,” Kelly’s father explained. He said that when the police arrived that evening, Kelly was distraught and told an officer that she wanted to kill herself. So an ambulance was called, and the 15-year-old was strapped to a gurney, taken to a psychiatric hospital, and monitored for safety before being released. Days after being hospitalized, Kelly was brought to my office by her parents who wanted to get help for their troubled girl.
Kelly’s parents spoke first. They said that their daughter’s hospitalization was the culmination of a yearlong downward spiral spurred by her phone obsession. Kelly had been refusing to spend time with her family or focus on school. Instead, she favored living her life on social media. A previously happy girl and strong student, Kelly had grown angry, sullen, and was now bringing home report cards with sinking grades. Kelly’s parents had tried many times in prior months to set limits on their daughter’s phone use, but she had become increasingly defiant and deceitful, even sneaking on her phone at all hours of the night.
When Kelly’s latest report card revealed a number of failing grades, her parents felt compelled to act. They told Kelly early in the afternoon on the day the police were called that she would need to turn in her phone by 9 p.m. But when the time came, Kelly refused, and a pushing match ensued between her and her parents, concluding in the violent tantrum that led the girl to be hospitalized.
I asked Kelly, who was sitting in a corner, to help me understand her perspective on that evening. She didn’t respond and instead glared at her parents. But then, surprising everyone in the room, she cried, “They took my f***ing phone!” Attempting to engage Kelly in conversation, I asked her what she liked about her phone and social media. “They make me happy,” she replied.
The Undoing of Families
As Kelly and her family continued their appointments with me in the coming months, two concerns dominated our meetings. The first was that Kelly’s unhealthy attachment to her phone continued, causing almost constant tension at home. The second concern emerged during my meetings with Kelly’s parents alone. Even though they were loving and involved parents, Kelly’s mom couldn’t help feeling that they’d failed their daughter and must have done something terribly wrong that led to her problems.
My practice as a child and adolescent psychologist is filled with families like Kelly’s. These parents say their kids’ extreme overuse of phones, video games, and social media is the most difficult parenting issue they face — and, in many cases, is tearing the family apart. Preteen and teen girls refuse to get off their phones, even though it’s remarkably clear that the devices are making them miserable. I also see far too many boys whose gaming obsessions lead them to forgo interest in school, extracurricular activities, and anything else productive. Some of these boys, as they reach their later teens, use their large bodies to terrorize parents who attempt to set gaming limits. A common thread running through many of these cases is parent guilt, as so many are certain they did something to put their kids on a destructive path.
What none of these parents understand is that their children’s and teens’ destructive obsession with technology is the predictable consequence of a virtually unrecognized merger between the tech industry and psychology. This alliance pairs the consumer tech industry’s immense wealth with the most sophisticated psychological research, making it possible to develop social media, video games, and phones with drug-like power to seduce young users.
These parents have no idea that lurking behind their kids’ screens and phones are a multitude of psychologists, neuroscientists, and social science experts who use their knowledge of psychological vulnerabilities to devise products that capture kids’ attention for the sake of industry profit. What these parents and most of the world have yet to grasp is that psychology — a discipline that we associate with healing — is now being used as a weapon against children.
“Machines Designed to Change Humans”
Nestled in an unremarkable building on the Stanford University campus in Palo Alto, California, is the Stanford Persuasive Technology Lab, founded in 1998. The lab’s creator, Dr. B.J. Fogg, is a psychologist and the father of persuasive technology, a discipline in which digital machines and apps — including smartphones, social media, and video games — are configured to alter human thoughts and behaviors. As the lab’s website boldly proclaims: “Machines designed to change humans.”
Fogg speaks openly of the ability to use smartphones and other digital devices to change our ideas and actions: “We can now create machines that can change what people think and what people do, and the machines can do that autonomously.” Called “the millionaire maker,” Fogg has groomed former students who have used his methods to develop technologies that now consume kids’ lives. As he recently touted on his personal website, “My students often do groundbreaking projects, and they continue having impact in the real world after they leave Stanford… For example, Instagram has influenced the behavior of over 800 million people. The co-founder was a student of mine.”
Intriguingly, there are signs that Fogg is feeling the heat from recent scrutiny of the use of digital devices to alter behavior. His boast about Instagram, which was present on his website as late as January of 2018, has been removed. Fogg’s website also has lately undergone a substantial makeover, as he now seems to go out of his way to suggest his work has benevolent aims, commenting, “I teach good people how behavior works so they can create products & services that benefit everyday people around the world.” Likewise, the Stanford Persuasive Technology Lab website optimistically claims, “Persuasive technologies can bring about positive changes in many domains, including health, business, safety, and education. We also believe that new advances in technology can help promote world peace in 30 years.”
While Fogg emphasizes persuasive design’s sunny future, he is quite indifferent to the disturbing reality now: that hidden influence techniques are being used by the tech industry to hook and exploit users for profit. His enthusiastic vision also conveniently neglects to include how this generation of children and teens, with their highly malleable minds, is being manipulated and hurt by forces unseen.
Weaponizing Persuasion
If you haven’t heard of persuasive technology, that’s no accident — tech corporations would prefer it to remain in the shadows, as most of us don’t want to be controlled and have a special aversion to kids being manipulated for profit. Persuasive technology (also called persuasive design) works by deliberately creating digital environments that users feel fulfill their basic human drives — to be social or obtain goals — better than real-world alternatives. Kids spend countless hours in social media and video game environments in pursuit of likes, “friends,” game points, and levels — because it’s stimulating, they believe that this makes them happy and successful, and they find it easier than doing the difficult but developmentally important activities of childhood.
While persuasion techniques work well on adults, they are particularly effective at influencing the still-maturing child and teen brain. “Video games, better than anything else in our culture, deliver rewards to people, especially teenage boys,” says Fogg. “Teenage boys are wired to seek competency. To master our world and get better at stuff. Video games, in dishing out rewards, can convey to people that their competency is growing, you can get better at something second by second.” And it’s persuasive design that’s helped convince this generation of boys they are gaining “competency” by spending countless hours on game sites, when the sad reality is they are locked away in their rooms gaming, ignoring school, and not developing the real-world competencies that colleges and employers demand.
Likewise, social media companies use persuasive design to prey on the age-appropriate desire of preteen and teen kids, especially girls, to be socially successful. This drive is built into our DNA, since real-world relational skills have fostered human evolution. The Huffington Post article “What Really Happens On a Teen Girl’s iPhone” describes the life of 14-year-old Casey from Millburn, New Jersey. With 580 friends on Instagram and 1,110 on Facebook, she’s preoccupied with the number of “likes” her Facebook profile picture receives compared with her peers. As she says, “If you don’t get 100 ‘likes,’ you make other people share it so you get 100…. Or else you just get upset. Everyone wants to get the most ‘likes.’ It’s like a popularity contest.”
Article author Bianca Bosker says that there are costs to Casey’s phone obsession, noting that the “girl’s phone, be it Facebook, Instagram or iMessage, is constantly pulling her away from her homework, sleep, or conversations with her family.” Casey says she wishes she could put her phone down. But she can’t. “I’ll wake up in the morning and go on Facebook just… because,” she says. “It’s not like I want to or I don’t. I just go on it. I’m, like, forced to. I don’t know why. I need to. Facebook takes up my whole life.”
Important Questions Are Simply Not Asked
B.J. Fogg may not be a household name, but Fortune Magazine calls him a “New Guru You Should Know,” and his research is driving a worldwide legion of user experience (UX) designers who utilize and expand upon his models of persuasive design. As Forbes Magazine writer Anthony Wing Kosner notes, “No one has perhaps been as influential on the current generation of user experience (UX) designers as Stanford researcher B.J. Fogg.”
UX designers come from many disciplines, including psychology as well as brain and computer sciences. However, the core of some UX research is about using psychology to take advantage of our human vulnerabilities. That’s particularly pernicious when the targets are children. As Fogg is quoted in Kosner’s Forbes article, “Facebook, Twitter, Google, you name it, these companies have been using computers to influence our behavior.” However, the driving force behind behavior change isn’t computers. “The missing link isn’t the technology, it’s psychology,” says Fogg.
Many UX researchers not only follow Fogg’s design model but may also share his apparent tendency to overlook the broader implications of persuasive design. They focus on the task at hand: building digital machines and apps that better capture users’ attention, compel users to return again and again, and grow businesses’ bottom line. Less often considered is how the world’s children are affected by thousands of UX designers working simultaneously to pull them onto a multitude of digital devices and products at the expense of real life.
According to B.J. Fogg, the “Fogg Behavior Model” is a well-tested method to change behavior and, in its simplified form, involves three primary factors: motivation, ability, and triggers. Describing how his formula is effective at getting people to use a social network, the psychologist says in an academic paper that a key motivator is users’ desire for “social acceptance,” although he says an even more powerful motivator is the desire “to avoid being socially rejected.” Regarding ability, Fogg suggests that digital products should be made so that users don’t have to “think hard.” Hence, social networks are designed for ease of use. Finally, Fogg says that potential users need to be triggered to use a site. This is accomplished by a myriad of digital tricks, including the sending of incessant notifications urging users to view friends’ pictures, telling them they are missing out while not on the social network, or suggesting that they check — yet again — to see if anyone liked their post or photo.
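Read as an engineering recipe, the model is simple enough to express in a few lines of code. Here is a minimal sketch in Python; the multiplicative form, the threshold value, and the function names are illustrative assumptions of mine, not constants Fogg has published:

```python
# A toy rendering of the simplified Fogg Behavior Model described above.
# ASSUMPTIONS: the multiplicative form and the 0.5 threshold are invented
# here for illustration; Fogg publishes no such constants.

def behavior_occurs(motivation: float, ability: float, triggered: bool,
                    threshold: float = 0.5) -> bool:
    """A behavior fires only when a trigger arrives while the user sits
    above the motivation-times-ability activation line."""
    if not triggered:
        return False  # no trigger, no behavior, however motivated the user
    # Ease of use compensates for modest motivation, which is why products
    # are engineered so users never have to "think hard."
    return motivation * ability >= threshold

# A notification (trigger) reaching a mildly motivated user of an
# effortless app is enough to produce the behavior:
print(behavior_occurs(motivation=0.6, ability=0.9, triggered=True))   # True
print(behavior_occurs(motivation=0.6, ability=0.9, triggered=False))  # False
```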
Fogg’s formula is the blueprint for building multibillion dollar social media and gaming companies. However, moral questions about the impact of turning persuasive techniques on children and teens are not being asked. For example, should the fear of social rejection be used to compel kids to compulsively use social media? Is it okay to lure kids away from school tasks that demand a strong mental effort so they can spend their lives on social networks or playing video games that don’t make them think much at all? And is it okay to incessantly trigger kids to use revenue-producing digital products at the expense of engaging with family and other important real-life activities?
Brain Hacking
Persuasive technologies work because they appear to trigger the release of dopamine, a powerful neurotransmitter involved in reward, attention, and addiction. In the Venice neighborhood of Los Angeles, now dubbed “Silicon Beach,” the startup Dopamine Labs boasts about its use of persuasive techniques to increase profits: “Connect your app to our Persuasive AI [Artificial Intelligence] and lift your engagement and revenue up to 30% by giving your users our perfect bursts of dopamine,” and “A burst of Dopamine doesn’t just feel good: it’s proven to re-wire user behavior and habits.”
Ramsay Brown, the founder of Dopamine Labs, says in a KQED Science article, “We have now developed a rigorous technology of the human mind, and that is both exciting and terrifying. We have the ability to twiddle some knobs in a machine learning dashboard we build, and around the world hundreds of thousands of people are going to quietly change their behavior in ways that, unbeknownst to them, feel second-nature but are really by design.” Programmers call this “brain hacking,” as it compels users to spend more time on sites even though they mistakenly believe it’s strictly due to their own conscious choices.
Social networks and video games use the trusted brain-manipulation technique of variable reward (think slot machine). Users never know when they will get the next “like” or game reward, and it’s delivered at the perfect time to foster maximal stimulation and keep them on the site. Banks of computers employ AI to “learn” which of a countless number of persuasive design elements will keep users hooked. A persuasion profile of a particular user’s unique vulnerabilities is developed in real time and exploited to keep users on the site and make them return again and again for longer periods of time. This drives up profits for consumer internet companies whose revenue is based on how much their products are used.
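The slot-machine mechanic is easy to demonstrate. The sketch below simulates a variable-ratio reward schedule. The flat 1-in-5 payout odds are an arbitrary assumption for the demonstration; as described above, real platforms tune the odds per user in real time.

```python
import random

# Sketch of a variable-ratio ("slot machine") reward schedule.
# ASSUMPTION: a flat 1-in-5 chance that any given refresh delivers a
# "like"; real systems adjust these odds continuously per user.

def check_feed(hit_probability: float = 0.2) -> bool:
    """Simulate one feed refresh; True means a reward arrived."""
    return random.random() < hit_probability

random.seed(42)  # reproducible demo
gaps = []
for _ in range(10):
    refreshes = 1
    while not check_feed():
        refreshes += 1
    gaps.append(refreshes)

# The unpredictable gaps between payoffs are the point: the user can never
# learn when the next reward is due, so the checking never stops.
print(gaps)
```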
Clandestine techniques that manipulate users to fulfill a profit motive are regarded by programmers as “dark design.” Why would firms resort to such tactics? As former tech executive Bill Davidow says in his Atlantic article “Exploiting the Neuroscience of Internet Addiction,” “The leaders of Internet companies face an interesting, if also morally questionable, imperative: either they hijack neuroscience to gain market share and make large profits, or they let competitors do that and run away with the market.”
There are few industries as cutthroat and unregulated as Silicon Valley. Social media and video game companies believe they are compelled to use persuasive technology in the arms race for attention, profits, and survival. Children’s well-being is not part of the decision calculus.
A Peek Behind the Curtain
While social media and video game companies have been surprisingly successful at hiding their use of persuasive design from the public, one breakthrough occurred in 2017 when Facebook documents were leaked to The Australian. The internal report crafted by Facebook executives showed the social network boasting to advertisers that by monitoring posts, interactions, and photos in real time, the network is able to track when teens feel “insecure,” “worthless,” “stressed,” “useless” and a “failure.” Why would the social network do this? The report also bragged about Facebook’s ability to micro-target ads down to “moments when young people need a confidence boost.”
Persuasive technology’s use of digital media to target children, deploying the weapon of psychological manipulation at just the right moment, is what makes it so powerful. These design techniques provide tech corporations a window into kids’ hearts and minds to measure their particular vulnerabilities, which can then be used to control their behavior as consumers. This isn’t some strange future… this is now. Facebook claimed the leaked report was misrepresented in the press. But when child advocates called on the social network to release it, the company refused to do so, preferring to keep the techniques it uses to influence kids shrouded in secrecy.
Digital Pied Pipers
The official tech industry line is that persuasive technologies are used to make products more engaging and enjoyable. But the revelations of industry insiders can reveal darker motives. Video game developer John Hopson, who has a Ph.D. in behavioral and brain science, wrote the paper “Behavioral Game Design.” He describes the use of design features to alter video game player behavior, sounding much like an experimenter running lab animals through their paces, answering questions such as: “How do we make players maintain a high, consistent rate of activity?” and “How to make players play forever.”
Revealing the hard science behind persuasive technology, Hopson says, “This is not to say that players are the same as rats, but that there are general rules of learning which apply equally to both.” After penning the paper, Hopson was hired by Microsoft, where he helped lead the development of Xbox Live, Microsoft’s online gaming system. He also assisted in the development of Xbox games popular with kids, including those in the Halo series. The parents I work with simply have no idea about the immense amount of financial and psychological firepower aimed at their children to keep them playing video games “forever.”
Another persuasive technology expert is Bill Fulton, a game designer who trained in cognitive and quantitative psychology. He started Microsoft’s Games User-Research group before founding his own consulting agency. Fulton is transparent about the power of persuasive design and the intent of the gaming industry, disclosing in Big Four Accounting Firm PwC’s tech business journal: “If game designers are going to pull a person away from every other voluntary social activity or hobby or pastime, they’re going to have to engage that person at a very deep level in every possible way they can.”
This is a major effect of persuasive design today: building video games and social media products so compelling that they pull users away from the real world to spend their lives in for-profit domains. But to engage in a pursuit at the expense of important real-world activities is a core element of addiction. And there is increasing evidence that persuasive design has now become so potent that it is capable of contributing to video game and internet addictions — diagnoses that are officially recognized in China, South Korea, and Japan, and which are under consideration in the U.S.
Not only does persuasive design appear to drive kids’ addictions to devices, but knowledge of addiction is used to make persuasive design more effective at hijacking the mind. As Dopamine Labs’ Ramsay Brown acknowledges in an episode of CBS’s 60 Minutes, “Since we’ve figured to some extent how these pieces of the brain that handle addiction are working, people have figured out how to juice them further and how to bake that information into apps.”
Stealing from Childhood
The creation of digital products with drug-like effects that are able to “pull a person away” from engaging in real-life activities is the reason why persuasive technology is profoundly destructive. Today, persuasive design is likely distracting adults from driving safely, productive work, and engaging with their own children — all matters which need urgent attention. Still, because the child and adolescent brain is more easily controlled than the adult mind, the use of persuasive design is having a much more hurtful impact on kids.
Persuasive technologies are reshaping childhood, luring kids away from family and schoolwork to spend more and more of their lives sitting before screens and phones. According to a Kaiser Family Foundation report, younger U.S. children now spend 5 ½ hours each day with entertainment technologies, including video games, social media, and online videos. The average teen spends even more: an incredible 8 hours each day playing with screens and phones. Productive uses of technology — where persuasive design is much less a factor — are almost an afterthought, as U.S. kids spend only 16 minutes each day using the computer at home for school.
Quietly, using screens and phones for entertainment has become the dominant activity of childhood. Younger kids spend more time engaging with entertainment screens than they do in school, and teens spend even more time playing with screens and phones than they do sleeping. The result is apparent in restaurants, the car sitting next to you at the stoplight, and even many classrooms: Attesting to the success of persuasive technology, kids are so taken with their phones and other devices that they have turned their backs to the world around them. Hiding in bedrooms on devices, or consumed by their phones in the presence of family, many children are missing out on real-life engagement with family and school — the two cornerstones of childhood that lead them to grow up happy and successful. Even during the few moments kids have away from their devices, they are often preoccupied with one thought: getting back on them.
In addition to the displacement of healthy childhood activities, persuasive technologies are pulling kids into often toxic digital environments. A too frequent experience for many is being cyberbullied, which increases their risk of skipping school and considering suicide. And there is growing recognition of the negative impact of FOMO, or the fear of missing out, as kids spend their social media lives watching a parade of peers who look to be having a great time without them, feeding their feelings of loneliness and being less than.
A Wired Generation Falling Apart
The combined effects of the displacement of vital childhood activities and exposure to unhealthy online environments is wrecking a generation. In her recent Atlantic article, “Have Smartphones Destroyed a Generation?,” Dr. Jean Twenge, a professor of psychology at San Diego State University, describes how long hours spent on smartphones and social media are driving teen girls in the U.S. to experience high rates of depression and suicidal behaviors.
And as the typical age when kids get their first smartphone has fallen to 10, it’s no surprise to see serious psychiatric problems — once the domain of teens — now enveloping young kids. Self-inflicted injuries, such as cutting, that are serious enough to require treatment in an emergency room, have increased dramatically in 10- to 14-year-old girls, up 19% per year since 2009.
While girls are pulled onto smartphones and social media, boys are more likely to be seduced into the world of video gaming, often at the expense of a focus on school. High amounts of gaming are linked to lower grades, so with boys gaming more than girls, it’s no surprise to see this generation of boys struggling to make it to college: a full 57% of college admissions are granted to young women compared with only 43% to young men. And, as boys transition to manhood, they can’t shake their gaming habits. Economists working with the National Bureau of Economic Research recently demonstrated how many young U.S. men are choosing to play video games rather than join the workforce.
As a child and adolescent psychologist myself, I find the inevitable conclusion both embarrassing and heartbreaking. The destructive forces of psychology deployed by the tech industry are making a greater impact on kids than the positive uses of psychology by mental health providers and child advocates. Put plainly, the science of psychology is hurting kids more than helping them.
The Awakening
Hope for this wired generation has seemed dim until recently, when a surprising group has come forward to criticize the tech industry’s use of psychological manipulation: tech executives. Tristan Harris, formerly a design ethicist at Google, has led the way by unmasking the industry’s use of persuasive design. Interviewed in The Economist’s 1843 magazine, he says, “The job of these companies is to hook people, and they do that by hijacking our psychological vulnerabilities.”
Another tech exec raising red flags about his tech industry’s use of mind manipulation is former Facebook president Sean Parker. Interviewed in Axios, he discloses: “The thought process that went into building these applications, Facebook being the first of them… was all about: ‘How do we consume as much of your time and conscious attention as possible?’” He also said that Facebook exploits “vulnerability in human psychology” and remarked, “God only knows what it’s doing to our children’s brains.”
A theme advanced by these tech execs is that the industry is unfairly using persuasive technology to gain a profit advantage. “Consumer internet businesses are about exploiting psychology,” Chamath Palihapitiya, a former Facebook VP says in a talk ironically given at B.J. Fogg’s Stanford University. “We want to psychologically figure out how to manipulate you as fast as possible and then give you back that dopamine hit.”
Having children of their own can change tech execs’ perspective. Tony Fadell, formerly at Apple, is considered the father of the iPod and also of much of the iPhone. He is also the founder of Nest. “A lot of the designers and coders who were in their 20s when we were creating these things didn’t have kids. Now they have kids,” Fadell remarks, while speaking at the Design Museum in London. “And they see what’s going on, and they say, ‘Wait a second.’ And they start to rethink their design decisions.”
Marc Benioff, CEO of the cloud computing company Salesforce, is one of the voices calling for the regulation of social media companies because of their potential to addict children. He says that just as the cigarette industry has been regulated, so too should social media companies. “I think that, for sure, technology has addictive qualities that we have to address, and that product designers are working to make those products more addictive, and we need to rein that back as much as possible,” Benioff told CNBC in January, 2018, while in Davos, Switzerland, site of the World Economic Forum.
Benioff says that parents should do their part to limit their kids’ devices, yet expressed, “If there’s an unfair advantage or things that are out there that are not understood by parents, then the government’s got to come forward and illuminate that.” Since millions of parents, for example the parents of my patient Kelly, have absolutely no idea that devices are used to hijack their children’s minds and lives, regulation of such practices is the right thing to do.
Another improbable group to speak out on behalf of children is tech investors. Major Apple stockholders — the hedge fund Jana Partners and California State Teachers’ Retirement System, which collectively own $2 billion in the firm’s stock — have recently raised concerns that persuasive design is contributing to kids’ suffering. In an open letter to Apple, the investors, teaming up with leading child technology experts, detailed evidence that kids’ overuse of phones and devices raises their risk of depression and of suicide-related risk factors. Specifically calling out the destructive impact of persuasive technology, the letter reads: “It is also no secret that social media sites and applications for which the iPhone and iPad are a primary gateway are usually designed to be as addictive and time-consuming as possible.”
Going Lower
How has the consumer tech industry responded to these calls for change? By going even lower. Facebook recently launched Messenger Kids, a social media app that will reach kids as young as five years old. Suggesting that harmful persuasive design is now homing in on very young children is the declaration of Messenger Kids Art Director Shiu Pei Luu: “We want to help foster communication [on Facebook] and make that the most exciting thing you want to be doing.”
Facebook’s narrow-minded vision of childhood is reflective of how out of touch the social network and other consumer tech companies are with the needs of an increasingly troubled generation. The most “exciting thing” for young children should be spending time with family, playing outside, engaging in creative play, and other vital developmental experiences — not being drawn into the social media vortex on phones or tablets. Moreover, Facebook Messenger Kids is giving an early start to the wired life on social media that we know poses risks of depression and suicide-related behavior for older children.
In response to the release of Facebook’s Messenger Kids, the Campaign for a Commercial-Free Childhood (CCFC) sent Facebook a letter signed by numerous health advocates calling on the company to pull the plug on the app. Facebook has yet to respond to the letter and instead continues to aggressively market Messenger Kids for young children.
The Silence of a Profession
While tech execs and investors are speaking out against the tech industry’s psychological manipulation of children, the American Psychological Association (APA) — which is tasked with protecting children and families from harmful psychological practices — has been essentially silent on the matter. This is not suggestive of malice; instead, the APA leadership — much like parents — is likely unaware of the tech industry’s distorted use of psychology. Nonetheless, there is irony, as psychologists and their powerful tools are guided by ethics, while tech execs and investors are not.
The Ethics Code of the APA, U.S. psychology’s chief professional organization, is quite clear: “Psychologists strive to benefit those with whom they work and take care to do no harm.” Moreover, APA Ethical Standards require the profession to make efforts to correct the “misuse” of the work of psychologists, which would include the application of B.J. Fogg’s persuasive technologies to influence children against their best interests. The code even provides special protection to kids because their developmental “vulnerabilities impair autonomous decision making.”
Manipulating children for profit without their own or parents’ consent, and driving kids to spend more time on devices that contribute to emotional and academic problems is the embodiment of unethical psychological practice. Silicon Valley corporations and the investment firms that support them are heavily populated by highly privileged white men who use concealed mind-bending techniques to control the lives of defenseless kids. Addressing this inequity is Tristan Harris, who says, “Never before in history have basically 50 mostly men, mostly 20–35, mostly white engineer designer types within 50 miles of where we are right now [Silicon Valley], had control of what a billion people think and do.” Harris was recounting an excerpt of a presentation he made while at Google during an interview with journalist Kara Swisher for Recode Decode in February of 2017.
Some may argue that it’s the parents’ responsibility to protect their children from tech industry deception. However, parents have no idea of the powerful forces aligned against them, nor do they know how technologies are developed with drug-like effects to capture kids’ minds. Parents simply can’t protect their children or teens from something that’s concealed and unknown to them.
Others will claim that nothing should be done because the intention behind persuasive design is to build better products, not manipulate kids. In fact, for those working in the user experience and persuasion fields, I’m sure there is no intent to harm children. The negative consequences of persuasive technology have been for the most part accidental, an unfortunate byproduct of an exceptionally competitive design process. However, similar circumstances exist in the cigarette industry, as tobacco companies have as their intention profiting from the sale of their product, not hurting children. Nonetheless, because cigarettes and persuasive design predictably harm children, actions should be taken to protect kids from their effects.
A Conscience in an Age of Machines
Since its inception, the field of persuasive technology has operated in a moral vacuum. The resulting tragedy is not surprising.
In truth, the harmful potential of persuasive design has long been recognized. Fogg himself says in a 1999 journal article, “Persuasive computers can also be used for destructive purposes; the dark side of changing attitudes and behaviors leads toward manipulation and coercion.” And in a 1998 academic paper, Fogg describes what should happen if things go wrong, saying that if persuasive technologies are “deemed harmful or questionable in some regard, a researcher should then either take social action or advocate that others do so.”
More recently, Fogg has actually acknowledged the ill effects of persuasive design. Interviewed by Ian Leslie in 2016 for The Economist’s 1843 Magazine, Fogg says, “I look at some of my former students and I wonder if they’re really trying to make the world better, or just make money.” And in 2017 when Fogg was interviewed by 032c Magazine, he acknowledged, “You look around the restaurants and pretty much everyone has their phone on the table and they’re just being constantly drawn away from the live face-to-face interaction — I do think that’s a bad thing.” Nonetheless, Fogg hasn’t taken meaningful action to help those hurt by the field he fathered. Nor have those in positions of power, with the recent exception of tech execs coming forward, done anything to limit the manipulative and coercive use of digital machines against children and teens.
So, how can children be protected from the tech industry’s use of persuasive design? I suggest turning to President John F. Kennedy’s prescient guidance: He said that technology “has no conscience of its own. Whether it will become a force for good or ill depends on man.” I believe that the psychology profession, with its understanding of the mind and ethics code as guidance, can step forward to become a conscience guiding how tech corporations interact with children and teens.
The APA should begin by demanding that the tech industry’s behavioral manipulation techniques be brought out of the shadows and exposed to the light of public awareness. Changes should be made in the APA’s Ethics Code to specifically prevent psychologists from manipulating children using digital machines, especially if such influence is known to pose risks to their well-being. Moreover, the APA should follow its Ethical Standards by making strong efforts to correct the misuse of psychological persuasion by the tech industry and by user experience designers outside the field of psychology.
There is more the psychology profession can and should do to protect children and rectify the harm being done to kids. It should join with tech executives who are demanding that persuasive design in kids’ tech products be regulated. The APA also should make its powerful voice heard amongst the growing chorus calling out tech companies that intentionally exploit children’s vulnerabilities. And the APA must make stronger and bolder efforts to educate parents, schools, and fellow child advocates about the harms of kids’ overuse of digital devices.
With each passing day, new and more influential persuasive technologies are being deployed to better take advantage of children’s and teens’ inherent limitations. The psychology profession must insist in this new age that its tools be used to improve rather than hinder children’s health and well-being. By making a strong statement against the exploitive use of persuasive design, the APA and the psychology profession can help provide the conscience needed to guide us in this age of dangerously powerful digital machines.
dareread · 7 years ago
Link
Exactly 1,300 years ago, in the year 718, a little-remembered kingdom was born in Spain. It soon led to the liberation of the Iberian Peninsula from Islamic occupation. To appreciate the significance of that development, we must travel back seven years earlier, to 711, when Arabs and Africans, both under the banner of Islam, “godlessly invaded Spain to destroy it,” to quote from the Chronicle of 754. Once on European soil, they “ruined beautiful cities, burning them with fire; condemned lords and powerful men to the cross; and butchered youths and infants with the sword.”
After meeting and beating Spain’s Visigothic nobles at the pivotal Battle of Guadalete — “never was there in the West a more bloody battle than this,” wrote the Muslim chronicler al-Hakam, “for the Muslims did not withdraw their scimitars from them [Christians] for three days” — the invaders continued to penetrate northward into Spain, “not passing a place without reducing it, and getting possession of its wealth, for Allah Almighty had struck with terror the hearts of the infidels.”
Such terrorism was intentionally cultivated, in keeping with the Koran (3:151, 8:12, etc.). For instance, the invaders slaughtered, cooked, and pretended to eat Christian captives, while releasing others who, horrified, fled and “informed the people of Andalus [Spain] that the Muslims feed on human flesh,” thereby “contributing in no small degree to increase the panic of the infidels,” wrote al-Maqqari, another Muslim chronicler.
Contrary to the claim that Spain capitulated easily, that it reasoned that Muslim rule was no worse and possibly more lenient than that of the Visigoths, even Muslim chroniclers note how “the Christians defended themselves with the utmost vigor and resolution, and great was the havoc that they made in the ranks of the faithful.” In Córdoba, for example, a number of leading Visigoths and their people holed themselves up in a church. Although “the besieged had no hopes of deliverance, they were so obstinate that when safety was offered to them on condition either of embracing Islam, or paying jizya, they refused to surrender, and the church being set on fire, they all perished in the flames,” wrote al-Maqqari, adding that the ruins of this church became a place of “great veneration” for later generations of Spaniards because “of the courage and endurance displayed in the cause of their religion by the people who died in it.”
In the end, native Spaniards had two choices: acquiesce to Muslim rule or “flee to the mountains, where they risked hunger and various forms of death,” according to an early Christian chronicler.
Pelagius, better known as Pelayo (685–737), a relative of and “sword-bearer” to King Roderick, who survived Guadalete, followed both strategies. After the battle, he retreated north, where Muslim rule was still tenuous, but eventually consented to become a vassal of Munnuza, a local Muslim chief. Through some “stratagem,” Munnuza “married” Pelayo’s sister — a matter that the sword-bearer “by no means consented to,” according to the Chronicle of Alfonso III. After Pelayo expressed displeasure at the seizure of his sister and ceased paying jizya (tribute), Muslims were sent “to apprehend him treacherously” and bring him back “bound in chains.” Unable to fight the oncoming throng of Arabs and Africans “because they were so numerous,” Pelayo “climbed a mountain” and “joined himself to as many people as he found hastening to assemble.”
There, in the deepest recesses of the Asturian mountains — the only free spot left in the Iberian Peninsula — the assembled Christian fugitives declared Pelayo to be their new king. Thus the Kingdom of Asturias was born in 718.
“Hearing this, the king [the Muslim governor of Córdoba], moved by an insane fury, ordered a very large army from all over Spain to go forth” and bring the infidel rebels to heel. The invaders — 180,000 of them, if the chroniclers are to be believed — surrounded Pelayo’s mountain. They sent Oppa, a bishop or nobleman who had acquiesced to Muslim rule, to reason with him at the mouth of a deep cavern: “If when the entire army of the Goths was assembled, it was unable to sustain the attack of the Ishmaelites [at Guadalete], how much better will you be able to defend yourself on this mountaintop? To me it seems difficult. Rather, heed my warning and recall your soul from this decision, so that you may take advantage of many good things and enjoy the partnership of the Chaldeans [Arabs].”
“I will not associate with the Arabs in friendship nor will I submit to their authority,” Pelayo responded. Then the rebel made a prophecy that would be fulfilled over the course of nearly eight centuries: “Have you not read in the divine scriptures [e.g., Mark 4:30–32] that the church of God is compared to a mustard seed and that it will be raised up again through divine mercy?”
Oppa affirmed that it was so. The fugitive continued: “Christ is our hope that through this little mountain, which you see, the well-being of Spain and the army of the Gothic people will be restored. . . . Now, therefore, trusting in the mercy of Jesus Christ, I despise this multitude and am not afraid of it. As for the battle with which you threaten us, we have for ourselves an advocate in the presence of the Father, that is, the Lord Jesus Christ, who is capable of liberating us from these few.” (Here, in the Chronicle of Alfonso III, we have possibly the oldest record of the two sorts of Christians that developed under Muslim-occupied Spain: those who defied Islam and fled to the Asturian wilds, and those who accepted their lot and maneuvered within the system as subjugated dhimmis — and grumbled against their northern coreligionists for bringing Islam’s ire against them. The two will meet and compete again in centuries to come.)
There, at Covadonga — meaning “Cavern of the Lady” — battle commenced in the summer of 722. A shower of rocks rained down on the Muslims in the narrow passes, where their numbers counted for nothing and only caused confusion. Afterward, Pelayo and his band of rebels rushed forth from their caves and hiding places and made great slaughter among them; those who fled the carnage were tracked and mowed down by other, now emboldened, mountaineers. “A decisive blow was dealt at the Moorish power,” a 19th-century historian wrote. “The advancing tide of conquest was stemmed. The Spaniards gathered heart and hope in their darkest hour; and the dream of Moslem invincibility was broken.”
According to Reconquista historian Joseph O’Callaghan, “Covadonga became the symbol of Christian resistance to Islam and a source of inspiration to those who, in words attributed to Pelayo, would achieve the salus Spanie, the salvation of Spain.”
Several subsequent Muslim attempts, including three major campaigns, were made to conquer the Asturian kingdom, and the “Christians of the North scarcely knew the meaning of repose, security, or any of the amenities of life,” historian Louis Bertrand observed. Constant jihad raids created a wild frontier zone roughly along the Duero River; this became “a territory where one [a Muslim] fights for the faith,” one medieval Muslim wrote. As the great Ibn Khaldun affirmed, every Muslim ruler of Andalusia was obligated “to wage the jihad himself and with his armies at least once a year.”
The Muslims intentionally devastated the region — they later dubbed it “the Great Desert” — between them and Asturias. Bertrand elaborates:
To keep the [northern] Christians in their place it did not suffice to surround them with a zone of famine and destruction. It was necessary also to go and sow terror and massacre among them. Twice a year, in spring and autumn, an army sallied forth from Córdoba to go and raid the Christians, destroy their villages, their fortified posts, their monasteries and their churches, except when it was a question of expeditions of larger scope, involving sieges and pitched battles. In cases of simply punitive expeditions, the soldiers of the Caliph confined themselves to destroying harvests and cutting down trees. . . . If one bears in mind that this brigandage was almost continual, and that this fury of destruction and extermination was regarded as a work of piety — it was a holy war against infidels — it is not surprising that whole regions of Spain should have been made irremediably sterile. This was one of the capital causes of the deforestation from which the Peninsula still suffers. With what savage satisfaction and in what pious accents do the Arab annalists tell us of those at least biennial raids. A typical phrase for praising the devotion of a Caliph is this: “He penetrated into Christian territory, where he wrought devastation, devoted himself to pillage, and took prisoners.” . . . At the same time as they were devastated, whole regions were depopulated. . . . The prolonged presence of the Muslims, therefore, was a calamity for this unhappy country of Spain. By their system of continual raids they kept her for centuries in a condition of brigandage and devastation.
Even so, the mustard seed would not perish. “A vital spark was still alive,” Edward Gibbon wrote; “some invincible fugitives preferred a life of poverty and freedom in the Asturian valleys; the hardy mountaineers repulsed the slaves of the caliph.” Moreover, “all who were dissatisfied with Moorish dominion, all who clung to the hope of a Christian revival, all who detested Mahomet,” were drawn to the life of poverty and freedom, as 19th-century historian Henry Edward Watts put it. By the mid eighth century, the “vital spark” had spread to engulf the entire northwest of the Peninsula.
Over the next three centuries, a number of Christian kingdoms — Galicia, Leon, Castile, Navarre, Aragon, and Catalonia, whose significance and names morphed and changed with the vicissitudes of history — evolved from or alongside the Asturian mustard seed. They made slow but steady progress against the forces of Islam.
Finally, in 1085, and after nearly 400 years of Muslim occupation, the Christians recaptured the ancient Visigothic capital, Toledo. Over the next century, not one but two massive new invasions came from Africa, the first under the Almoravids, the second under the Almohads. Both were committed to the jihad (in ways that would make the Islamic State appear half-hearted). A tug of war between Christians and Muslims ensued until 1212, when the two forces met at the decisive battle of Las Navas de Tolosa. Victory went to the Christians: One by one, long-held Muslim cities were liberated by the victors: Córdoba, for centuries the capital of Muslim Spain, in 1236; Valencia in 1238; and Seville in 1248.
Just as Muslims had for centuries “purified” captured Christian towns and churches “from the filth of idolatry and . . . from the stains of infidelity and polytheism,” so now, tit for tat, Christian conquerors and clergymen engaged in elaborate ceremonies whereby mosques and cities were “cleansed of the filthiness of Muhammad” — a ubiquitous phrase in the chronicles of the conquest of Muslim cities — even as Muslim accounts lament over “dwellings emptied of Islam” and over “mosques . . . wherein only bells and crosses may [now] be found.”
Only the remote Muslim kingdom of Granada, at the very southern tip of the Peninsula, remained. Surrounded by mountainous terrain and with the sea behind it, Granada was well fortified, inaccessible, and isolated from the rest of Iberia. Moreover, Christian infighting habitually flared up, as Castile, Aragon, and Portugal increasingly jockeyed for power.
On Christmas Day in 1481, Granadan Muslims stormed a nearby Christian fortress and slaughtered all present. King Ferdinand and Queen Isabella declared war, so that “Christendom might be delivered from this continued threat at the gates,” as they explained, and “these infidels of the kingdom of Granada [might be] ejected and expelled from Spain” once and for all. After a decade of military campaigns and sieges, Granada finally surrendered, on January 2, 1492.
“After so much labor, expense, death, and shedding of blood,” sang the monarchs, “this kingdom of Granada, which was occupied for over seven hundred and eighty years by the infidels,” had been liberated. And it all came to pass thanks to Pelayo’s Asturian mustard seed, planted 1,300 years ago this year.
dareread · 7 years ago
Link
In The Consolation of Philosophy, the 6th-century Roman philosopher and consul Boethius wrote: “Compare the length of a moment with the period of ten thousand years; the first, however minuscule, does exist as a fraction of a second. But that number of years, or any multiple of it that you may name, cannot even be compared with a limitless extent of time, the reason being that comparisons can be drawn between finite things, but not between finite and infinite.”
Boethius’ insight into the nature of asymmetrical comparison is perennially valid, whether with respect to philosophical and theological speculation, mathematical equations involving infinities, or ideological aspects of political thought. It explains why communist, anarchist or socialist experiments in the life of peoples and nations are bound to fail, for as Boethius might have said, they do not treat of corresponding finite entities. In other words, these adventures in social perfectibility flow from the refusal to ground a vision of the future in historical and political reality.
In order to achieve the possible, it is necessary to acknowledge the real, that is, the limits set by the actual parameters of historical existence and the constraints of human nature. Otherwise we are on the way to creating a dystopian nightmare. One cannot validly compare the imperfect social and political structures of the past and present with a utopian construction that has never come to pass and which exists only in myth, dream and mere desire. No sound conclusion can emerge from such dissonant correlations. To strive, for example, to build an ideal society in which “equality of results” or “outcomes” -- what is called “social justice” -- is guaranteed can only produce a levelled-down caricature of human struggle and accomplishment. We have seen it happen time and again, and the consequences are never pretty.
The infatuation with “outcomes” in the sense of compelled equality persists wherever we may look, significantly in education, where equality of result is enforced under the tired mantra of “diversity and inclusion” -- standards are lowered, everyone is admitted, everyone graduates, everyone gets a trophy or a degree regardless of input, so that no one gets left behind. Mastering the curriculum, however, is a highly competitive venture, meant to sieve winners from losers; we recall the word derives from the Latin for “race course.” The “equality” compulsion is especially paramount in “social justice” legislation which ensures that unmotivated non-contributors to civil order, prosperity and disciplined excellence in any field of endeavor are treated as at least equal to and often favored over successful practitioners and genuine achievers.
There is another, perhaps more clinical, way of regarding the issue, known as the Pareto Principle, deriving from the work of the Italian economist and sociologist Vilfredo Pareto (1848–1923). The “equality” or “outcomes” obsession, as Jordan Peterson has pointed out with reference to Pareto, is a noxious delusion. The Pareto Principle specifies a scalene relationship between causes and effects in human endeavor. Also known as the 80/20 Rule, the principle postulates, as a matter of discernible fact, that 80% of a nation’s wealth is typically controlled by 20% of the population. It has almost always been so. (The Pareto calculus, it should be mentioned, has nothing to do with the urban legend of the greedy “one percent.” The wealthy already contribute disproportionately in terms of employment and taxes to the social leviathan.)
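The 80/20 postulate is easy to reproduce in simulation. In the sketch below, the Pareto tail index (roughly 1.16) is the standard textbook value at which the top fifth of a Pareto-distributed population holds about four fifths of the total; the sampling is purely illustrative, not empirical wealth data:

```python
import random

# Sample synthetic "incomes" from a Pareto distribution.
# ASSUMPTION: tail index alpha ≈ 1.16, the value at which the top 20%
# of a Pareto-distributed population hold roughly 80% of the total.
random.seed(0)
alpha = 1.16
incomes = sorted((random.paretovariate(alpha) for _ in range(100_000)),
                 reverse=True)

top_fifth = sum(incomes[: len(incomes) // 5])
share = top_fifth / sum(incomes)
print(f"Share held by the top 20%: {share:.0%}")  # roughly 80%
```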
In an interesting aside, Peterson acknowledges that Marx was correct in observing that capital tends to accumulate in the hands of the few. But Marx erred in considering this imbalance a flaw in the capitalist system. For such asymmetry, as Pareto and others have shown, “is a feature of every single system of production that we know of.” Disproportion is intrinsic to human life, whether we like it or not. Moreover, the Rule applies not only to economic factors but to distributions inherent in almost all productive human efforts and enterprises. The potential for human achievement is never evenly distributed. True success in any creative endeavor is invariably a function of that small band of individuals who, as Peterson says, exemplify power, competence, authority and direction in their lives. Briefly, IQ and conscientiousness are the biggest predictors of success.
Although the Rule does not enjoy the status of a Law, it is for the most part reliable. In other words, no matter how we may tamper with distributive sequences, life is simply not fair. People are born with different aptitudes and are exposed to a variant range of formative experiences, giving rise to personal “outcomes” that cannot be preordained. At the same time, the sum of such particulars group into predictable aggregates which are statistically definitive.
Distributions of wealth, as Richard Koch explains in The 80/20 Principle, are “predictably unbalanced,” but the “data relating to things other than wealth or income” can be generalized, as noted, over the broad spectrum of human activities, pursuits and behavior: time-management, distance relations, crime distributions, artistic masterpieces and innumerable other phenomena. One-hundred percent of most things amenable to statistical calculation tend to happen, speaking metaphorically, within a 20% radius, including that which we consider best in life. Out of every 100 books published, to take one instance of how the Rule tends to operate, approximately 20 will have marketable success. It is thus to our advantage, Koch continues, to determine and isolate the 20% of time and effort which are most productive; the remaining 80% turns out to be dispensable.
Elaborating on the Rule with a view to furthering proficiency, engineer Joseph Moses Juran, the father of TQM (Total Quality Management), which revolutionized habits of thought in business, manufacturing and engineering, posited his “Rule of the Vital Few” in accounting for the disparity between inputs and outputs. As Koch puts it in his summary of Juran’s thesis: “For everyone and every institution, it is possible to obtain much that is of value and avoid what is of negative value” by understanding that evolving systems are nonlinear, that “equilibrium is illusory and fleeting,” that minorities are responsible for majority payoffs, and that focusing on the 80% at the expense of the 20% in any sphere of human activity will inevitably yield negative consequences. (Needless to say, the term “minorities” in the expository context alludes not to racial or gender minorities, but to a creative minimum.)
We are clearly indebted, as Nassim Nicholas Taleb stresses, Pareto-like, in his new book Skin in the Game: Hidden Asymmetries in Daily Life, to those who really do have skin in the game, who are “imbued with a sense of pride and honor,” who are “endowed with the spirit of risk taking,” and who “put their soul into something [without] leaving that stuff to someone else.” Taleb’s version of the “minority rule” is even more drastic than Pareto’s, reducing the 20% to “3 or 4 percent of the total population.” They are the “heroes” on whom the good of society depends.
This is another way of saying that we must invest in amortizing excellence by acknowledging our benefactors and by focusing on principles inherent in all distributions of effort, expense, and investment. It follows that success is possible only if we trade in what is actually there to work with, whether in the mind or in the world. You cannot bank on fiat currency, so to speak. And this is true of all personal, technical, scientific, professional and social projects.
Here is where Boethius and Pareto meet. In the political domain utopian theory proposes a radical transformation of society purportedly in the interests of the 80% who produce little with respect to innovation, personal risk, entrepreneurial investment of time and resources, scientific breakthroughs and intellectual advancement. And it does so at the expense of the 20% who are the engines of real prosperity, creative accomplishment and the expansion of the frontiers of knowledge. Its modus operandi is to compare what has never been observed except in literary fables and theoretical assumptions with the millennia of actual social practice and the gradual success of what Karl Popper in The Open Society and Its Enemies called “piecemeal social engineering.”  The grand collectivist program is unable to bridge the gap between the there and the not-there, faltering on incommensurables.
In short, socialism in all its forms is doomed to fail because it cannot comprehend that we live within the realm of the finite, as Boethius reminds us, and that excellence is rare, as Pareto and his followers persuasively re-affirm. When the twinned elements of finitude and acumen go unrecognized, mediocrity and failure ensue ineluctably. Individual talent, dedication to one’s work in the world in which we actually live, and intelligence in every department of life are qualities that must be preserved and promoted for their human uniqueness as well as for the benefit of the many. The end result of the veneration of purely notional and immaterial constructs together with the collective fetish of forced equality is, as history has repeatedly proven, economic stagnation, human misery and eventual collapse.
It may sound heartless, but the triumph of the unqualified spells the end of a nation’s -- indeed, of a civilization’s -- historical term. In the real world of ability and performance, skill and attainment, the race is always to the swift and Achilles will always outpace the tortoise -- Ecclesiastes, Aesop and social egalitarians notwithstanding. To rig the race for the advantage of the slow would defeat its purpose, leading to social stasis, personal ennui and lack of meaningful production across the entire sweep of human initiative. If this were the case, there would be no race.
dareread · 8 years ago
Link
In the view of Universal Basic Income (UBI) advocates, substituting robots for human labor will not only free virtually all humans from working, it will also generate endless wealth because the robots will be doing almost all of the work.
To reach a more realistic understanding of the economics of robots, let’s return to author Peter Drucker’s maxim: enterprises don’t have profits, enterprises only have expenses. This sounds backwards, because from the outside it looks as if businesses generate profits as a matter of course.
Yet the maxim captures the core dynamic of all enterprises: the only reliable characteristic of enterprises, whether they are owned by the state, the workers or private investors, is that they have expenses.
Profits--needed to reinvest in the enterprise and build capital--can only be reaped if revenues exceed the costs of production, general overhead and debt service. Robots are complex machines that require substantial quantities of energy and resources to produce, program and maintain. As a result, they will never be super-cheap to manufacture, own or maintain.
Robots, and the ecosystem of software, engineering, spare parts, diagnostics, etc. needed to produce, power and maintain them, are a large capital and operational expense. The greater the complexity of the tasks the robot is designed to complete, the greater the complexity and cost of the robot.
Robots only make financial sense in a very narrow swath of commoditized production, or in situations such as war or hazardous rescue missions where cost is not the primary issue. Compare the following two tasks and the cost and complexity of the robots needed to complete them in a cost-effective manner.
Task one: move boxes around a warehouse with flat concrete floors and fixed shelving mounted with hundreds of sensors to guide robots.
Task two: navigate extremely rough and uneven terrain with no embedded sensors, dig deep holes in rocky soil, and plant a delicate seedling in each hole. Each hole must be selected by contextual criteria; there is no pre-set grid pattern to the planting.
The first task has all the features that make robots cost-effective: easily navigable flat floors, fixed, easily mapped structures embedded with multiple sensors, and a limited, easily programmable repertoire of physical movements: stock boxes on the shelving, retrieve boxes from the shelving. The compact working space makes it practical to reprogram, recharge and repair the robots; spare parts can be kept onsite, and so on.
The second task--one of the steps in restoring a habitat--has none of these features. The terrain is extremely uneven and challenging to navigate; the varied surfaces may be hazardous in non-obvious ways (prone to sliding, etc.); there are no embedded sensors to guide the robot; it’s difficult and costly to service the robots onsite, and the task is extremely contextual, requiring numerous judgments and decisions and a wide variety of physical steps, ranging from the arduous task of digging a hole in rocky ground to delicately handling fragile seedlings.
Exactly what sort of robot would be capable of completing these tasks without human guidance? A drone might be able to ferry the fragile seedlings, but any drone capable of landing and punching a hole in unforgiving ground would be very heavy. Combining these disparate skills in one or even multiple robots—the heavy work of digging a hole in rocky soil on uneven ground, embedding a fragile seedling in just the right amount of compost and then watering the seedling deeply enough to give it a chance to survive—would be technically challenging.
And what profit is there to be earned from restoring a public-land habitat? Since the habitat is a public commons, there is no customer base to sell high-margin products to. If the state is paying for the job, it chooses the vendor by competitive bidding.
Given the conditions, a vendor with human labor will likely be more reliable and cheaper, as this is the sort of work that humans are supremely adapted to perform efficiently. Given that restoring a habitat generates no profit, perhaps the work is done entirely by volunteers.
In any of these cases, a costly array of robots facing a daunting challenge that could cause multiple failures (robots sliding down the slope, seedlings crushed, too little compost, compost over-compressed, water didn’t soak in, etc.) is simply not cost-effective. 
You see the point: humans have few advantages in a concrete floor warehouse with fixed metal shelving. Robots have all the advantages in that carefully controlled environment performing repeatable, easily defined tasks. But in the wilds of a hillside jumble of rocks, fallen trees, etc., handling tasks that require accuracy, strength, judgment, contextual understanding and a delicate touch, humans have all the advantages.
In other words, robots are only cost-effective in the narrow niches of commoditized tasks: repeatable tasks that are easy to break down into programmable steps that can be performed in a controlled environment.
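The amortization logic behind that claim can be made concrete with a few lines of arithmetic. Every figure below is a hypothetical assumption chosen only to show the structure of the comparison, not data from any vendor:

    # All numbers are illustrative assumptions, not real vendor data.

    def cost_per_task(capex: float, lifetime_tasks: int, opex_per_task: float) -> float:
        """Amortized cost of one task: capital spread over the machine's
        working life, plus the operating cost of the task itself."""
        return capex / lifetime_tasks + opex_per_task

    # Warehouse robot: controlled environment, enormous task volume.
    warehouse_robot = cost_per_task(capex=250_000, lifetime_tasks=2_000_000,
                                    opex_per_task=0.05)

    # Habitat-restoration robot: rough terrain, low task volume,
    # costly field service.
    field_robot = cost_per_task(capex=900_000, lifetime_tasks=40_000,
                                opex_per_task=4.00)

    human_planter = 2.50  # hypothetical wage cost per seedling planted

    print(f"Warehouse robot, per box moved: ${warehouse_robot:.2f}")
    print(f"Field robot, per seedling:      ${field_robot:.2f}")
    print(f"Human planter, per seedling:    ${human_planter:.2f}")

Spread across two million boxes, the warehouse robot’s capital cost all but vanishes; spread across forty thousand seedlings, the field robot’s does not, and the human wins easily.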
Those with little experience of actually manufacturing a robot may look at a multi-million dollar prototype performing some task (often under human guidance, which is carefully kept off-camera) and assume that robots will decline in price on the same trajectory as computer components.
But the geometric rise in computing power and the corresponding decline in the cost of processing and memory is not a model for real-world components such as robots, which will continue to be extraordinarily resource and energy-intensive even if microchips decline in cost.
Vehicles might be a more realistic example of the cost consequences of increasing complexity and the consumption of resources: vehicles haven’t declined in cost by 95% like memory chips; they’ve increased in cost.
Self-driving vehicles are another example of how automation can be profitable when performing a commoditized task. First, roadways are smooth, easy to map and easy to embed with sensors. Second, vehicles are intrinsically complex and costly; the average price of a vehicle is around $40,000. The sensors, electronics, software and motors required to make a vehicle autonomous are a relatively modest percentage of the total cost of the vehicle. Third, manufacturing vehicles is a profitable venture with a large base of customers. Fourth, the actual tasks of driving--navigating streets, accelerating, braking, etc.--are relatively limited in number. In other words, driving is a commoditized task that lends itself to automation.
Once again, robots have multiple advantages in this commoditized task: they are not easily distracted, don’t get drunk, and don’t fall asleep. Humans have few advantages in this environment. And as noted, manufacturing autonomous vehicles will likely be a highly profitable business for those who master the processes.
Since so much of the production of goods and services in the advanced economies is based on commoditized tasks, it’s easy to make the mistake of extending these very narrowly defined capabilities in profitable enterprises to the whole of human life. But as my example illustrates, a wide array of work doesn’t lend itself to cost-effective robots, as robots have few if any advantages in these environments, while humans are supremely adapted to doing these kinds of tasks.
In effect, proponents of Universal Basic Income (UBI) assume robots will be able to perform 95% of all human work, ignoring the limitations of cost: robots will only perform work that is profitable, and profitable work is a remarkably modest subset of all human labor.
But this is not the truly crushing limitation of robots; that limit is intrinsic to the economics of automation.
dareread · 8 years ago
Link
Meet Scott Yenor.
Yenor is a mild-mannered, bald, bespectacled professor of political science at Boise State University, a college known more for its blue football field and run-and-gun offense than for its history of philosophical debate. Yenor’s intellectual credentials are spotless: He has never received complaints from students or faculty about his classes or his papers. He’s a teacher and a thinker by trade, fully tenured.
But Yenor, you see, is also the devil.
At least, that’s the new public perception of Yenor at Boise State. That’s because Yenor published a report in 2016 with the Heritage Foundation titled “Sex, Gender, and the Origins of the Culture War.” The central thesis of the piece was simple and rather uncontroversial in conservative circles: that radical feminism’s central argument decrying gender boundaries between the sexes as entirely socially constructed has led directly to transgenderism’s attacks on gender itself as a social construct. As a philosophical matter, this progression is self-evident. Yenor’s report was academically worded and rather abstruse at times, filled with paragraphs such as this one:
For Beauvoir, the common traits of “immanent” women result from pervasive social indoctrination or socialization. Beauvoir identifies how immanence is taught and reinforced in a thousand different ways. Society, for instance, prepares women to be passive and tender and men to take the initiative in sexual relations. Male initiative in sex is “an essential element” in patriarchy’s “general frame.”
Yenor later translated his extensive report into a shorter, less jargon-y article for Heritage’s Daily Signal, titled “Transgender Activists Are Seeking to Undermine Parental Rights.”
Again, his contentions were not merely consistent with mainstream conservative thought—they were self-evident to those human beings with eyes and the capacity to read. (Ontario, to take just one example, has recently passed a bill that could plausibly be read to identify parental dissent from small children seeking transgender treatment as “child abuse.”) Yenor’s rather uncontroversial article was then posted at the Boise State Facebook page.
That’s when the trouble began.
Leftist students took note of Yenor’s perspective. And they seethed.
Actually, they did more than seethe: they complained, they demanded that the piece be taken down, and they insisted that Yenor had personally insulted them. All of this prompted the pusillanimous dean of the school, Corey Cook, to half-heartedly defend Yenor’s right to publish. But then Cook backtracked faster than Bobby Hull defending a breakaway, saying:
Our core values as a School include the statement that “collegiality, caring, tolerance, civility and respect of faculty, staff, students and our external partners are ways of embracing diverse backgrounds, traditions, ideas and experiences.” As has been pointed out by several people in their communications with me, the particular language employed in the piece is inconsistent with that value.
Cook didn’t say exactly why Yenor’s writings had violated this inconsistently-enforced value. In fact, Cook’s attacks on Yenor violated this value far more significantly than Yenor’s original writing. But as shoddy as this statement was, other leftist faculty members thought Cook didn’t go far enough—even though he had pledged to “begin reevaluating our approach to social media.”
And so a knight arose to challenge Yenor’s nefarious, patriarchal dragon: Francisco Salinas, a man with the Orwellian title “Director of Student Diversity and Inclusion.”
Salinas believes that diversity and inclusion do not include perspectives disapproved by Francisco Salinas. Thus, he took up his fiery pen and wrote a post on the school’s website dramatically titled “Connecting The Dots.” Salinas explained that the Yenor controversy had preceded the white supremacist rally and murder in Charlottesville, Virginia, by a day. This was not, Salinas concluded, a coincidence. “Their proximity in my attention,” Salinas wrote, “is no accident.” How so? Let’s let Salinas sally forth:
There is a direct line between these fear fueled conspiratorial theories and the resurrection of a violent ideology which sees the “other” as a direct threat to existence and therefore necessary to obliterate. It is not an absolute succession and it is not a line without potential breaks or interruptions. Not every person who agrees with Yenor’s piece is likely to become an espoused Neo-Nazi, but likely every Neo-Nazi would agree with the substance of Yenor’s piece.
And so Yenor went from mainstream conservative thinker to neo-Nazi in the blink of an eye. Not just in the mind of Salinas, mind you—but in the minds of Yenor’s fellow professors and members of the student body, too.
A flyer suddenly began appearing around campus, reading “YOU HAVE BLOOD ON YOUR HANDS SCOTT YENOR.” The faculty senate took up a measure that would initiate an investigation claiming that Yenor was guilty of some ethereal “misconduct.” Here’s what faculty senator Professor Royce Hutson wrote:
A large majority of the senators feel that the piece espouses deeply homophobic, trans-phobic, and misogynistic ideas. Additionally, some feel that the piece may be academically dubious to the point of misconduct. In response, the senate has created an ad hoc committee to draft a statement that repudiates the ideals expressed by Professor Yenor, without explicitly censuring Dr. Yenor, and reiterates the Senate's endorsement of the BSU's shared values as it relates to his piece.
Yenor was forced to hire an attorney. His fellow professors cast him out like a leper. In Yenor’s words, his colleagues engaged in “ritual condemnation and ostracization.”
If this reads more like a tragicomic Kafka novel than an honest discourse about ideas at one of our nation’s institutions of higher learning, that’s because it is. Except that it’s real: Yenor wanders the halls of an institution to which he has dedicated his life, condemned for a crime nobody will specify.
Unfortunately, Yenor’s experiences aren’t rare. Professors are now routinely hauled up before courts of inquisition in true revolutionary fashion for offenses contrived post facto for the sole purpose of ensnaring anyone who dissents from the current leftist orthodoxy.
Northwestern University’s Laura Kipnis—who isn’t even conservative—has been sucked into the maw of a Title IX case for having the temerity to write about “sexual paranoia” on campus and asking for evidence before condemning professors or students to the wilderness for mere allegations of sexual misconduct.
Professor Keith Fink found himself ousted from his part-time role at University of California at Los Angeles; Fink lectured on free speech and employment law from a conservative perspective. No real reason was given for UCLA’s failure to renew his contract.
Professor Bret Weinstein was forced to quit his position teaching at Evergreen State College after he refused to comply with a racist mob demanding that white professors not teach on a specified date.
Professor Nicholas Christakis resigned his administrative position at Yale’s Silliman College after he was abused by students who didn’t appreciate him telling them that they should get over their fears about diabolical Halloween costumes.
And people wonder why academia is leftist.
The suffocating leftism in American universities has arisen in large part because they are run by a self-perpetuating clique. To be excluded from such cliques can be professional suicide. And the price of admission is ideological conformity. Moreover, public pressure from students and outside media often prompts administrators to join in the chorus—better to be part of the mob baying for heads than to join a controversial thinker on the guillotine. The few conservative professors left tend to keep their heads down and pray for anonymity.
But that’s just the start of the problem. Decade after decade, the treatment of conservative professors has gotten worse as the leftist hegemony has grown stronger. And as older conservative professors have aged out of the population there are no sponsors for up-and-coming conservatives who want to join the professoriate.
As Yenor explains, “The process of getting a Ph.D. either makes conservatives into ‘careerists’—which means that they have to toe the line on sacred cows of the left—or conservatives at the undergraduate level see what academia would be for a career and decline to join.” So the self-perpetuating caste grows ever stronger. And louder. And more virulent. Anti-intellectual bullies like Francisco Salinas—enforcers of the revolution—exist on nearly every campus.
Conservatives tend to think that it can’t get much worse on campus. But it can. And it will.
The purge is on. When even Scott Yenor can’t be left alone in the middle of Idaho to write obvious truths about sexual politics, it’s a warning to every conservative professor in America that if they speak freely on intellectual matters they’re not doing their jobs—they’re risking their careers.
dareread · 8 years ago
Link
Once upon a time, some conservatives used to call for the abolition of the U.S. Department of Education. Lamentably, conservatives today celebrate when a “free-market advocate” like multimillionaire Betsy DeVos is appointed U.S. Secretary of Education, and they get terribly excited when she speaks at conservative conferences.
Meanwhile, even while conservatives continue to pronounce their allegiance to their favorite mantra — “free enterprise, private property, limited government” — they continue to embrace not only public schooling itself but also their favorite public-schooling fix-it program, school vouchers.
Over the years, conservatives have developed various labels for their voucher program: a “free-market approach to education,” “free enterprise in education,” or “school choice.” They have chosen those labels to make themselves and their supporters feel good about supporting vouchers.
But the labeling has always been false and fraudulent. Vouchers are nothing more than a socialist program, no different in principle from public schooling itself.
The term “free enterprise” means a system in which a private enterprise is free of government control or interference. That’s what distinguishes it from a socialist system, which connotes government control and interference with the enterprise.
A voucher system entails the government taxing people and then using the money to provide vouchers to people, which they can then redeem at government-approved private schools.
Does that sound like a system that is free from government control or interference? In reality, it’s no different in principle from food stamps, farm subsidies, Social Security, or any other welfare-state program. The government is using force to take money from Peter and give it to Paul. That’s not “free enterprise.” That’s the opposite of free enterprise.
Conservatives say that their voucher system is based on “choice” because the voucher provides recipients with “choices.” But doesn’t the same principle apply to recipients of food stamps, farm subsidies, Social Security, and other socialist programs? Sure, the recipient of the loot has more choices because he has more money at his disposal. But let’s not forget that the person from whom the money was forcibly taken has been deprived of choices. After all, after a robber commits his dirty deed, he too has more choices with the money he has acquired. His victim, on the other hand, has been deprived of choices.
In FFF’s first year of operation, 1990, I wrote an article entitled “Letting Go of Socialism,” in which I pointed out that school vouchers were just another socialist scheme, one that was intended to make public schooling work more efficiently.
Imagine my surprise to receive a critique from none other than Milton Friedman, the Nobel Prize-winning economist who is the father of the school voucher program. Friedman leveled his critique in a speech he delivered that was entitled “Say No to Intolerance,” in which he took to task such libertarian stalwarts as Ludwig von Mises and Ayn Rand for adhering to principle.
Interestingly enough, Friedman’s speech was recently reprinted in an issue of the Hoover Digest, a premier conservative publication. You can read it here.
Friedman’s critique of my article was nice enough. First pointing out that FFF was doing “good work and making an impact,” he addressed my criticism of vouchers:
But am I a statist, as I have been labeled by a number of libertarians, because some thirty years ago I suggested the use of educational vouchers as a way of easing the transition? Is that, and I quote Hornberger again, “simply a futile attempt to make socialism work more efficiently”? I don’t believe it. I don’t believe that you can simply say what the ideal is. This is what I mean by the utopian strand in libertarianism. You cannot simply describe the utopian solution, and leave it to somebody else how we get from here to there. That’s not only a practical problem. It’s a problem of the responsibilities that we have.
With all due respect to a Nobel Prize winner and a true gentleman, Milton Friedman was wrong on education then, and conservatives who continue to support vouchers are wrong today.
Notice something important, a point that conservatives have long forgotten: Friedman justified vouchers as a way to get rid of public schooling. For him, vouchers were a “transition” device — i.e., a way to get from here to there, with “there” being the end of public schooling.
That’s not what conservatives say today. They justify vouchers by saying that they will improve, not destroy, the public-schooling system. I can’t help but wonder what Friedman would say about that if he were still alive, given that his support of vouchers was based on the notion that it would serve as a way to get rid of public schooling. Would he still support vouchers if he knew that they would save public schooling and make it more efficient?
Why did conservatives end up rejecting Friedman’s justification? They came to the realization that some people would be less likely to support vouchers if they were told that their real purpose was to destroy public schooling. Therefore, to get more people to support vouchers, conservatives shifted Friedman’s justification to the exact opposite of what Friedman was saying. Conservatives began telling people that vouchers, by providing “competition,” would improve the public-schooling system. In fact, voucher proponents today, when pressed, will openly tell people that they are opposed to abolishing public schooling but only want to make it better by providing people with the means (vouchers) to leave the public-schooling system.
Almost 30 years after Friedman leveled his critique at me, there is not one instance where his system of school vouchers has served as a “transition” to educational liberty. Time has confirmed the point I made almost three decades ago — that school vouchers, no matter how they are labeled, are nothing more than a socialistic program designed to make socialism (i.e., public schooling) work more efficiently.
Friedman and conservatives have been proven wrong on education. There is only one solution to the educational morass in which Americans find themselves: Separate school and state, just as our ancestors separated church and state. Repeal all school compulsory-attendance laws and school taxes and sell off the school buildings. End all government involvement in education, including licensing of schools. Establish a total free-market educational system.
dareread · 8 years ago
Link
President Trump’s call for NFL players who take a knee during the national anthem to be fired was a troubling assault on free speech — and it put the league in an impossible position. 
Americans do not and should not worship idols. We do not and should not worship the flag. As a nation we stand in respect for the national anthem and stand in respect for the flag not simply because we were born here or because it’s our flag. We stand in respect because the flag represents a specific set of values and principles: that all men are created equal and that we are endowed by our Creator with certain unalienable rights.
These ideals were articulated in the Declaration of Independence, codified in the Constitution, and defended with the blood of patriots. Central to them is the First Amendment, the guarantee of free expression against government interference and government reprisal that has made the United States unique among the world’s great powers. Arguably, it is the single most important liberty of all, because it enables the defense of all the others: Without the right to speak freely we cannot even begin to point out offenses against the rest of the Constitution. Now, with that as a backdrop, which is the greater danger to the ideals embodied by the American flag, a few football players’ taking a knee at the national anthem or the most powerful man in the world’s demanding that they be fired and their livelihoods destroyed for engaging in speech he doesn’t like? 
As my colleague Jim Geraghty notes this morning, too many in our polarized nation have lately developed a disturbing habit of zealously defending the free speech of people they like while working overtime to find reasons to justify censoring their ideological enemies. How many leftists who were yelling “free speech” yesterday are only too happy to sic the government on the tiny few bakers or florists who don’t want to use their artistic talents to celebrate events they find offensive? How many progressives who celebrated the First Amendment on Sunday sympathize with college students who chant “speech is violence” and seek to block conservatives from college campuses? The hypocrisy runs the other way, too. I was startled to see many conservatives who decried Google’s termination of a young, dissenting software engineer work overtime yesterday to argue that Trump was somehow in the right. Yet Google is a private corporation and Trump is the most powerful government official in the land. The First Amendment applies to Trump, not Google, and his demands for reprisals are ultimately far more ominous, given his job, than even the actions of the largest corporations. Google, after all, has competitors. Google commands no police force. Everything it does is replaceable. 
In the space of less than 24 hours this weekend, the president of the United States did more to politicize sports than ESPN has done in a decade of biased, progressive programming. He singled out free speech he didn’t like, demanded that dissenters be fired, and then — when it became clear that private American citizens weren’t going to do what he demanded — he urged the economic boycott of their entire industry. 
He told his political opponents on the football field — men who have defined their lives and careers by their mental and physical toughness — in essence, “Do what I say or lose your job.” In so doing, he put them in straits far more difficult to navigate than anything Colin Kaepernick has wrought: Stand and they are seen to obey a man who just abused his office, and millions of Americans will view them as sellouts not just to the political cause they love but also to the Constitution itself; kneel and they defy a rogue president, but millions of Americans will view them as disrespecting the nation itself to score political points against a president those Americans happen to like.
At one stroke, thanks to an attempted vulgar display of strength, Trump changed the playing of the anthem and the display of the flag from a moment where all but the most radical Americans could unite to one where millions of well-meaning Americans could and did legitimately believe that the decision to kneel represented a defense of the ideals of the flag, not defiance of the nation they love. So, yes, I understand why they knelt. I understand why men who would never otherwise bring politics onto the playing field — and never had politicized sports before — felt that they could not be seen to comply with a demagogue’s demands. I understand why even owners who gave millions to Trump expressed solidarity with their players. I understand why even Trump supporters like Rex Ryan were appalled at the president’s actions. 
I fear that those who proclaimed yesterday’s events a “win” for the president — after all, many of the players were booed for their stance, and in American politics you generally don’t want to be seen as taking sides against the flag — are missing the forest for the trees. If we lose respect for the First Amendment, then politics becomes purely about power. If we no longer fight to secure the same rights for others that we demand for ourselves, we become more tribal, and America becomes less exceptional. 
I respect Pittsburgh Steelers left tackle (and former Army Ranger) Alejandro Villanueva, who — alone among his teammates — came out of the locker room to stand for the anthem while the rest of his team remained off the field. I also respect players who, reluctantly but out of the conviction that they will not be bullied by the president, chose to kneel when they otherwise never would. I do not, however, respect the actions of Donald Trump. This weekend, he didn’t make America great. He made its politics worse.
When the history of this unfortunate, polarized era of American life is written, whether a man stood or knelt will matter far less than the values we all lived by. Americans who actually defend the letter and spirit of the First Amendment will stand (or kneel) proudly in the history books. Those who seek to punish their political opponents’ speech, on the other hand, can stand or kneel as they wish — so long as they hang their heads in shame.
dareread · 8 years ago
Link
The Western world recently celebrated the 25th anniversary of the collapse of the Soviet Union: the Communist dream turned oppressive totalitarian regime that denied my Jewish parents admission to medical school. Its remnants still plague my Ukrainian relatives to this day. Such a historical reminder comes at a turning point in our own country, with the Republican American Health Care Act (or some future revision of it) on its way to becoming the new law of the land.
President Obama’s health care legacy, which has taken center-stage among the political debates of the past decade, aimed to enhance efficiency through consolidation into Health Maintenance Organizations and Accountable Care Organizations. This, among other things, led to the creation of monopolistic health networks and the demise of private practices.
In the past few months, we have been inundated with the shortcomings of the Republican proposal, which unsuccessfully attempts to straddle the line between maintaining entitlements and subsidies on the one hand and balancing the budget and deregulating health care on the other. The future of American medicine, for practitioners and patients, hangs in the balance.
Medicine Is an Economic Transaction
As the debate becomes bogged down with logistics and finances, and deception of the American public through fear-mongering and smoke-and-mirror promises runs rampant, both sides of the political aisle have stopped short of questioning the recent distortion of health care’s place in society.
Restoring medicine as an economic transaction rather than a right would allow physicians to better serve the individual and preserve the deep-rooted American freedom that makes our lives worth living in the first place.
Health legislators have historically put emphasis on population health metrics, such as life expectancy and obesity rates, as indicators of the quality of health care in our country. Many were outraged when in 2015, for the first time in decades, US life expectancies dropped.
Although potentially indicative of underlying problems, life expectancy statistics are not the be-all and end-all of a nation’s prosperity. By that logic, one would rather live in totalitarian Cuba (life expectancy 79.2 years) than the US (78.8). Or in theocratic Iran (75.5) rather than serene Fiji (69.9).
America has long been attractive to those fleeing oppression, like my parents, not for its life expectancy or even comforts, but for its commitment to the preservation of human freedom and economic opportunity. Life without freedom is slavery, but freedom comes without life expectancy guarantees.
Unshackle American Health Care
The core principles of American democracy clash with most of the current health policies. In declaring themselves independent of an imposing British empire, our founders cemented the fundamental expectation that democratic government must ensure that its citizens’ rights are not alienated.
They did not, however, guarantee positive rights, which, in other words, mean a “freedom from [sickness, loss, failure],” rather than a “freedom to [live life as one sees fit].”
When applied to health care, this approach dictates that the government’s sole purpose is to protect citizens from “deviation from the norm”; to insure against insults (such as prohibiting a factory from polluting a town’s waterways), rather than to provide interventions.
To believe all people are entitled to artificially maintained physical well-being, the core principle of universal health care, would be to make the physician into a serf, the treatment of patients into servitude rather than service, and the government into a despotic Tsar.
The only viable long-term approach to lowering costs and improving health quality would be to disincentivize individuals from making poor choices by charging those who do more for their coverage.
Despite concerns raised by progressive critics, such practices would be no more discriminatory than car insurance policies adding a surcharge for risky driving. Those suffering from genetic conditions or unpreventable cancers would not be punished any more than handicapped drivers are.
Unshackling the market from current regulations (which mandate everything from basic universal coverage to equal premium pricing) would allow insurers to provide plans tailored to individual health care needs and priced according to individual life choices.
Unfortunately, this approach stands at odds with the core tenets of much of the health legislation from both political parties today, which aims to help politicians win votes by distributing the burden of poor health rather than disincentivizing it — by doling out benefits today at our children’s expense tomorrow.
Government-run health care is inherently a redistribution scheme between those who make good choices and those who do not.
While our legislators attempt to appease political interests and costly patients, free-market advocates stand firm, championing the harsh reality that we need to bring costs down and serve individuals more efficiently.
Free Market Health Care
Health insurance was first adopted by employers to keep their employees healthy and productive, which, in turn, reduced turnover and bred efficiency through the free market. Such private-sector health care, the first of its kind in the world, has produced some of the greatest innovations of the 20th century.
From the anesthesia and antibiotics that enabled the first surgeries, to the vaccines that let parents sleep at night; from the million-dollar heart stents to the targeted cancer therapies that took lifetimes to develop—our commitment to improving lives (and, of course, the profits that followed) has produced the strongest specialist care in the world.
However, as insurance was expanded to everyone, and as innovations yielded superbly expensive treatments and longer lifespans, consumers were shielded from the costs of their services. Immune to the price, patients kept running up the tab for insurers, including the largest of them all: the federal government. As the debt balloons to over 20 trillion dollars and our government remains gridlocked, placing the future of American health care in jeopardy, we must remember the long-forgotten triumphs of individualism and entrepreneurship that have brought us this far.
Learn from History's Mistakes
My parents and relatives, all survivors of the failed Soviet Union, can attest to the dangers of placing a “greater public good” above individual freedom.
The USSR believed it could maintain an enormous level of production without any means of rewarding individuals for their contributions, which led to stagnation in all sectors, massive corruption, and a booming black market for those who could afford it. The same fate awaits the American health care system if we do not heed its warnings.
Twenty-five years later, my parents (who were in Moscow at the time) still struggle to comprehend how the mighty Soviet Union could implode literally overnight. If we do not rein in our health care costs and government overreach today, there may not be a health care system left to debate tomorrow.
Over the past decades, the free market has provided consistent and superb specialist care—for the rich or poor—because physicians and hospitals knew their reputations and profits were on the line.
Government-managed medicine, under the guise of helping those most in need, does quite the opposite: it creates a two-tier system, arguably similar to that of Germany today. Physicians, unmotivated to perform their under-compensated public sector obligations, seek to provide superior private care on the side to those who can afford it.
Patients have no option to switch insurance companies or hospitals because such freestanding entities have long been abolished, and with them, any hope of free-market accountability or efficiency. Abandoning the unique principles our revolutionary nation was founded on runs the risk of betraying those very same people—the poor, weak, and sick—we seek to protect.
dareread · 8 years ago
Link
The dismantling of the idea of the West began when medieval philosophers started re-introducing the Sophist notions reduced to ashes by Socrates. This reintroduction came about as a reaction to extreme scholasticism in the Middle Ages. It was a fascinating thought experiment known as nominalism, but it unwittingly wrought massive damage upon the very ways in which Western citizens viewed themselves, disconnecting them not only from other cultures and peoples but also from one another—even within the same communities. Rather than reaching, often in anguish and pain, toward the transcendent and universal, Western citizens were now being told by their own priests to look toward the muck of this world, perhaps even to wallow in its filth as a simple fact of existence.
Despite the great successes of the Scottish (well, the Celts—Adam Smith, David Hume, and Edmund Burke) and the Americans (especially Thomas Jefferson and John Dickinson) in the eighteenth century (and, with less success, the English) and their claims of universalism, the nineteenth century bred the narrowest thought of all—from men such as Georg W.F. Hegel, Charles Darwin, Karl Marx, Herbert Spencer, and Sigmund Freud. As each of these men possessed individual genius, they offered the deepest thought possible on a variety of things: biology, adaptation to the environment, economics, breeding, sexuality, and psychology. Yet, they focused so strongly on individual ideas (the particulars of life) that they displaced their ideas (many of which were true) from the context of many true things. So, while we are certainly economic beings, we are not merely economic beings. We are also emotional, stupid, brilliant, wicked, good, procreative, and a million other things, many of which we barely understand about our individual selves. As the great economist and man of letters Friedrich Hayek noted, to attempt to understand another and not have reservations and humility about such a study is to engage in a “fatal conceit.”
Or as C.S. Lewis’s tragic character William Hingest claimed in That Hideous Strength: “I happen to believe that you can’t study men: you can only get to know them, which is quite different.”
During the 1960s, the New Left—more culturally than economically Marxist—began a concerted effort to destroy the foundations of Western Civilization, claiming that the West was little more than a facade for white, European males to maintain a powerful hegemony over society. The view, as the New Leftists argued it, was that such claims as natural law, natural rights, and opportunity for all were merely smokescreens, allowing for the elites to maintain control over the oppressed and the voiceless. Rather than standing for the Platonic and Socratic notions of the good, the true, and the beautiful, the West really maintained a power relationship of one group solidifying and perpetuating its control through sexism, racism, and imperialism. Given the progressive dismantling of liberal education six decades prior to the rise of the New Left, the New Left was able to gain control of departments and promote an agenda of destruction, though often with the best of intentions.
Certainly, there were those who defended a traditional understanding of Western civilization throughout the previous century. In the United States, Irving Babbitt of Harvard, Paul Elmer More of Princeton, Willa Cather, Russell Kirk, Robert Nisbet, William F. Buckley, Flannery O’Connor, and Sister Madeleva Wolff worked mightily to preserve the western tradition. Abroad, T.E. Hulme, T.S. Eliot, Christopher Dawson, Sigrid Undset, Theodor Haecker, Jacques Maritain, Wilhelm Roepke, and others did the same.
They were each, unfortunately, fighting a rear-guard action, defending the ramparts as the flood swept in and subsumed their efforts, no matter how noble.
Even the few schools that continue to claim to offer a true liberal education—Wyoming Catholic College, St. John’s College, Thomas Aquinas, Faulkner, Hillsdale, Notre Dame, and the University of St. Thomas (St. Paul, Minnesota)—are but bizarre counter-cultural institutions.
The West, for most Americans who even think about it, is simply the study of “dead white males” or a shorthand for “lazy white males.” As I write this, students across the United States have shut down universities and occupied buildings to protest the oppression—supposedly—still lingering from the 1960s. Thomas Jefferson, though liberally educated and the author of the Declaration of Independence, has come under fire, in particular, as a hypocrite, the protesting students seeing his enslaving but not his liberating.
And, yet the West has offered so much good in the world. It was not for nothing that Chinese students protested in Tiananmen Square in 1989 holding signs of Thomas Jefferson. The Chinese communists might have seen this as proof of CIA influence, but the rest of us understood that the symbol of Jefferson stood for something much greater than simply Jefferson himself: He stood for the entire Western tradition of natural law and natural rights, applicable to all persons and all times.
The West, as noted above, not only created philosophy (all significant civilizations have accepted ethics, but, generally, only ethics), but also introduced the ideas of natural law and natural rights.
The West has also understood that freedom and liberty are not ends, in and of themselves, but means by which we make free decisions toward the good, the true and the beautiful.
The West has also, critically, understood that while natural law and natural rights might be inherent in the very being of man himself, the securing of those rights comes with the cost of great sacrifices.
The West has also understood the necessity of myth, symbol, legend, allegory, and poetry as a means by which to pass the most important lessons from one generation to the next.
And, as the men around Socrates so wisely understood, we use the space offered by free choice to pursue not just the good, the true, and the beautiful, but we pursue these things through the four pagan virtues: prudence—the ability to discern good from evil; fortitude—the ability to persevere against all odds; justice—the giving of each man his due; and temperance—the use of earthly goods as a form of plastic, a means to an end. Later, of course, St. Paul would add the Christian virtues of faith—the ability to see beyond ourselves; hope—the confidence that we matter; and charity—the giving of one’s self for another.
dareread · 8 years ago
Link
When Vivek Ranadivé decided to coach his daughter Anjali’s basketball team, he settled on two principles. The first was that he would never raise his voice. This was National Junior Basketball—the Little League of basketball. The team was made up mostly of twelve-year-olds, and twelve-year-olds, he knew from experience, did not respond well to shouting. He would conduct business on the basketball court, he decided, the same way he conducted business at his software firm. He would speak calmly and softly, and convince the girls of the wisdom of his approach with appeals to reason and common sense.
The second principle was more important. Ranadivé was puzzled by the way Americans played basketball. He is from Mumbai. He grew up with cricket and soccer. He would never forget the first time he saw a basketball game. He thought it was mindless. Team A would score and then immediately retreat to its own end of the court. Team B would inbound the ball and dribble it into Team A’s end, where Team A was patiently waiting. Then the process would reverse itself. A basketball court was ninety-four feet long. But most of the time a team defended only about twenty-four feet of that, conceding the other seventy feet. Occasionally, teams would play a full-court press—that is, they would contest their opponent’s attempt to advance the ball up the court. But they would do it for only a few minutes at a time. It was as if there were a kind of conspiracy in the basketball world about the way the game ought to be played, and Ranadivé thought that that conspiracy had the effect of widening the gap between good teams and weak teams. Good teams, after all, had players who were tall and could dribble and shoot well; they could crisply execute their carefully prepared plays in their opponent’s end. Why, then, did weak teams play in a way that made it easy for good teams to do the very things that made them so good?
Ranadivé looked at his girls. Morgan and Julia were serious basketball players. But Nicky, Angela, Dani, Holly, Annika, and his own daughter, Anjali, had never played the game before. They weren’t all that tall. They couldn’t shoot. They weren’t particularly adept at dribbling. They were not the sort who played pickup games at the playground every evening. Most of them were, as Ranadivé says, “little blond girls” from Menlo Park and Redwood City, the heart of Silicon Valley. These were the daughters of computer programmers and people with graduate degrees. They worked on science projects, and read books, and went on ski vacations with their parents, and dreamed about growing up to be marine biologists. Ranadivé knew that if they played the conventional way—if they let their opponents dribble the ball up the court without opposition—they would almost certainly lose to the girls for whom basketball was a passion. Ranadivé came to America as a seventeen-year-old, with fifty dollars in his pocket. He was not one to accept losing easily. His second principle, then, was that his team would play a real full-court press, every game, all the time. The team ended up at the national championships. “It was really random,” Anjali Ranadivé said. “I mean, my father had never played basketball before.”
David’s victory over Goliath, in the Biblical account, is held to be an anomaly. It was not. Davids win all the time. The political scientist Ivan Arreguín-Toft recently looked at every war fought in the past two hundred years between strong and weak combatants. The Goliaths, he found, won in 71.5 per cent of the cases. That is a remarkable fact. Arreguín-Toft was analyzing conflicts in which one side was at least ten times as powerful—in terms of armed might and population—as its opponent, and even in those lopsided contests the underdog won almost a third of the time.
In the Biblical story of David and Goliath, David initially put on a coat of mail and a brass helmet and girded himself with a sword: he prepared to wage a conventional battle of swords against Goliath. But then he stopped. “I cannot walk in these, for I am unused to it,” he said (in Robert Alter’s translation), and picked up those five smooth stones. What happened, Arreguín-Toft wondered, when the underdogs likewise acknowledged their weakness and chose an unconventional strategy? He went back and re-analyzed his data. In those cases, David’s winning percentage went from 28.5 to 63.6. When underdogs choose not to play by Goliath’s rules, they win, Arreguín-Toft concluded, “even when everything we think we know about power says they shouldn’t.”
Consider the way T. E. Lawrence (or, as he is better known, Lawrence of Arabia) led the revolt against the Ottoman Army occupying Arabia near the end of the First World War. The British were helping the Arabs in their uprising, and the initial focus was Medina, the city at the end of a long railroad that the Turks had built, running south from Damascus and down through the Hejaz desert. The Turks had amassed a large force in Medina, and the British leadership wanted Lawrence to gather the Arabs and destroy the Turkish garrison there, before the Turks could threaten the entire region.
But when Lawrence looked at his ragtag band of Bedouin fighters he realized that a direct attack on Medina would never succeed. And why did taking the city matter, anyway? The Turks sat in Medina “on the defensive, immobile.” There were so many of them, consuming so much food and fuel and water, that they could hardly make a major move across the desert. Instead of attacking the Turks at their point of strength, Lawrence reasoned, he ought to attack them where they were weak—along the vast, largely unguarded length of railway line that was their connection to Damascus. Instead of focussing his attention on Medina, he should wage war over the broadest territory possible.
The Bedouins under Lawrence’s command were not, in conventional terms, skilled troops. They were nomads. Sir Reginald Wingate, one of the British commanders in the region, called them “an untrained rabble, most of whom have never fired a rifle.” But they were tough and they were mobile. The typical Bedouin soldier carried no more than a rifle, a hundred rounds of ammunition, forty-five pounds of flour, and a pint of drinking water, which meant that he could travel as much as a hundred and ten miles a day across the desert, even in summer. “Our cards were speed and time, not hitting power,” Lawrence wrote. “Our largest available resources were the tribesmen, men quite unused to formal warfare, whose assets were movement, endurance, individual intelligence, knowledge of the country, courage.” The eighteenth-century general Maurice de Saxe famously said that the art of war was about legs, not arms, and Lawrence’s troops were all legs. In one typical stretch, in the spring of 1917, his men dynamited sixty rails and cut a telegraph line at Buair on March 24th, sabotaged a train and twenty-five rails at Abu al-Naam on March 25th, dynamited fifteen rails and cut a telegraph line at Istabl Antar on March 27th, raided a Turkish garrison and derailed a train on March 29th, returned to Buair and sabotaged the railway line again on March 31st, dynamited eleven rails at Hediah on April 3rd, raided the train line in the area of Wadi Dhaiji on April 4th and 5th, and attacked twice on April 6th.
Lawrence’s masterstroke was an assault on the port town of Aqaba. The Turks expected an attack from British ships patrolling the waters of the Gulf of Aqaba to the west. Lawrence decided to attack from the east instead, coming at the city from the unprotected desert, and to do that he led his men on an audacious, six-hundred-mile loop—up from the Hejaz, north into the Syrian desert, and then back down toward Aqaba. This was in summer, through some of the most inhospitable land in the Middle East, and Lawrence tacked on a side trip to the outskirts of Damascus, in order to mislead the Turks about his intentions. “This year the valley seemed creeping with horned vipers and puff-adders, cobras and black snakes,” Lawrence writes in “The Seven Pillars of Wisdom” of one stage in the journey:
We could not lightly draw water after dark, for there were snakes swimming in the pools or clustering in knots around their brinks. Twice puff-adders came twisting into the alert ring of our debating coffee-circle. Three of our men died of bites; four recovered after great fear and pain, and a swelling of the poisoned limb. Howeitat treatment was to bind up the part with snake-skin plaster and read chapters of the Koran to the sufferer until he died.
When they finally arrived at Aqaba, Lawrence’s band of several hundred warriors killed or captured twelve hundred Turks, and lost only two men. The Turks simply did not think that their opponent would be mad enough to come at them from the desert. This was Lawrence’s great insight. David can beat Goliath by substituting effort for ability—and substituting effort for ability turns out to be a winning formula for underdogs in all walks of life, including little blond-haired girls on the basketball court.
Vivek Ranadivé is an elegant man, slender and fine-boned, with impeccable manners and a languorous walk. His father was a pilot who was jailed by Indira Gandhi, he says, because he wouldn’t stop challenging the safety of India’s planes. Ranadivé went to M.I.T., because he saw a documentary on the school and decided that it was perfect for him. This was in the nineteen-seventies, when going abroad for undergraduate study required the Indian government to authorize the release of foreign currency, and Ranadivé camped outside the office of the governor of the Reserve Bank of India until he got his way. The Ranadivés are relentless.
In 1985, Ranadivé founded a software company in Silicon Valley devoted to what in the computer world is known as “real time” processing. If a businessman waits until the end of the month to collect and count his receipts, he’s “batch processing.” There is a gap between the events in the company—sales—and his understanding of those events. Wall Street used to be the same way. The information on which a trader based his decisions was scattered across a number of databases. The trader would collect information from here and there, collate and analyze it, and then make a trade. What Ranadivé’s company, tibco, did was to consolidate those databases into one stream, so that the trader could collect all the data he wanted instantaneously. Batch processing was replaced by real-time processing. Today, tibco’s software powers most of the trading floors on Wall Street.
Ranadivé views this move from batch to real time as a sort of holy mission. The shift, to his mind, is one of kind, not just of degree. “We’ve been working with some airlines,” he said. “You know, when you get on a plane and your bag doesn’t, they actually know right away that it’s not there. But no one tells you, and a big part of that is that they don’t have all their information in one place. There are passenger systems that know where the passenger is. There are aircraft and maintenance systems that track where the plane is and what kind of shape it’s in. Then, there are baggage systems and ticketing systems—and they’re all separate. So you land, you wait at the baggage terminal, and it doesn’t show up.” Everything bad that happens in that scenario, Ranadivé maintains, happens because of the lag between the event (the luggage doesn’t make it onto the plane) and the response (the airline tells you that your luggage didn’t make the plane). The lag is why you’re angry. The lag is why you had to wait, fruitlessly, at baggage claim. The lag is why you vow never to fly that airline again. Put all the databases together, and there’s no lag. “What we can do is send you a text message the moment we know your bag didn’t make it,” Ranadivé said, “telling you we’ll ship it to your house.”
A few years ago, Ranadivé wrote a paper arguing that even the Federal Reserve ought to make its decisions in real time—not once every month or two. “Everything in the world is now real time,” he said. “So when a certain type of shoe isn’t selling at your corner shop, it’s not six months before the guy in China finds out. It’s almost instantaneous, thanks to my software. The world runs in real time, but government runs in batch. Every few months, it adjusts. Its mission is to keep the temperature comfortable in the economy, and, if you were to do things the government’s way in your house, then every few months you’d turn the heater either on or off, overheating or underheating your house.” Ranadivé argued that we ought to put the economic data that the Fed uses into a big stream, and write a computer program that sifts through those data, the moment they are collected, and make immediate, incremental adjustments to interest rates and the money supply. “It can all be automated,” he said. “Look, we’ve had only one soft landing since the Second World War. Basically, we’ve got it wrong every single time.”
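What Ranadivé is proposing is, in effect, a thermostat for the economy. A toy control loop makes the idea concrete; the gain, target, and inflation readings below are invented for illustration and bear no relation to any actual monetary-policy model:

    TARGET_INFLATION = 2.0   # per cent; an assumed policy target
    GAIN = 0.1               # how strongly each reading nudges the rate

    def real_time_policy(rate, inflation_readings):
        # Adjust a little with every observation, instead of deliberating
        # for weeks and then moving in large steps.
        for inflation in inflation_readings:
            rate += GAIN * (inflation - TARGET_INFLATION)
        return rate

    print(real_time_policy(3.0, [2.4, 2.6, 2.3, 2.1, 2.0]))

Because each correction is small and immediate, the system never has to over-correct for weeks of accumulated drift; that, in essence, is Ranadivé’s complaint about batch-mode government.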
You can imagine what someone like Alan Greenspan or Ben Bernanke might say about that idea. Such people are powerfully invested in the notion of the Fed as a Solomonic body: that pause of five or eight weeks between economic adjustments seems central to the process of deliberation. To Ranadivé, though, “deliberation” just prettifies the difficulties created by lag. The Fed has to deliberate because it’s several weeks behind, the same way the airline has to bow and scrape and apologize because it waited forty-five minutes to tell you something that it could have told you the instant you stepped off the plane.
Is it any wonder that Ranadivé looked at the way basketball was played and found it mindless? A professional basketball game was forty-eight minutes long, divided up into alternating possessions of roughly twenty seconds: back and forth, back and forth. But a good half of each twenty-second increment was typically taken up with preliminaries and formalities. The point guard dribbled the ball up the court. He stood above the top of the key, about twenty-four feet from the opposing team’s basket. He called out a play that the team had choreographed a hundred times in practice. It was only then that the defending team sprang into action, actively contesting each pass and shot. Actual basketball took up only half of that twenty-second interval, so that a game’s real length was not forty-eight minutes but something closer to twenty-four minutes—and that twenty-four minutes of activity took place within a narrowly circumscribed area. It was as formal and as convention-bound as an eighteenth-century quadrille. The supporters of that dance said that the defensive players had to run back to their own end, in order to compose themselves for the arrival of the other team. But the reason they had to compose themselves, surely, was that by retreating they allowed the offense to execute a play that it had practiced to perfection. Basketball was batch!
Insurgents, though, operate in real time. Lawrence hit the Turks, in that stretch in the spring of 1917, nearly every day, because he knew that the more he accelerated the pace of combat the more the war became a battle of endurance—and endurance battles favor the insurgent. “And it happened as the Philistine arose and was drawing near David that David hastened and ran out from the lines toward the Philistine,” the Bible says. “And he reached his hand into the pouch and took from there a stone and slung it and struck the Philistine in his forehead.” The second sentence—the slingshot part—is what made David famous. But the first sentence matters just as much. David broke the rhythm of the encounter. He speeded it up. “The sudden astonishment when David sprints forward must have frozen Goliath, making him a better target,” the poet and critic Robert Pinsky writes in “The Life of David.” Pinsky calls David a “point guard ready to flick the basketball here or there.” David pressed. That’s what Davids do when they want to beat Goliaths.
Ranadivé’s basketball team played in the National Junior Basketball seventh-and-eighth-grade division, representing Redwood City. The girls practiced at Paye’s Place, a gym in nearby San Carlos. Because Ranadivé had never played basketball, he recruited a series of experts to help him. The first was Roger Craig, the former all-pro running back for the San Francisco 49ers, who is also TIBCO’s director of business development. As a football player, Craig was legendary for the off-season hill workouts he put himself through. Most of his N.F.L. teammates are now hobbling around golf courses. He has run seven marathons. After Craig signed on, he recruited his daughter Rometra, who played Division I basketball at Duke and U.S.C. Rometra was the kind of person you assigned to guard your opponent’s best player in order to shut her down. The girls loved Rometra. “She has always been like my big sister,” Anjali Ranadivé said. “It was so awesome to have her along.”
Redwood City’s strategy was built around the two deadlines that all basketball teams must meet in order to advance the ball. The first is the inbounds pass. When one team scores, a player from the other team takes the ball out of bounds and has five seconds to pass it to a teammate on the court. If that deadline is missed, the ball goes to the other team. Usually, that’s not an issue, because teams don’t contest the inbounds pass. They run back to their own end. Redwood City did not. Each girl on the team closely shadowed her counterpart. When some teams play the press, the defender plays behind the offensive player she’s guarding, to impede her once she catches the ball. The Redwood City girls, by contrast, played in front of their opponents, to prevent them from catching the inbounds pass in the first place. And they didn’t guard the player throwing the ball in. Why bother? Ranadivé used that extra player as a floater, who could serve as a second defender against the other team’s best player. “Think about football,” Ranadivé said. “The quarterback can run with the ball. He has the whole field to throw to, and it’s still damned difficult to complete a pass.” Basketball was harder. A smaller court. A five-second deadline. A heavier, bigger ball. As often as not, the teams Redwood City was playing against simply couldn’t make the inbounds pass within the five-second limit. Or the inbounding player, panicked by the thought that her five seconds were about to be up, would throw the ball away. Or her pass would be intercepted by one of the Redwood City players. Ranadivé’s girls were maniacal.
The second deadline requires a team to advance the ball across mid-court, into its opponent’s end, within ten seconds, and if Redwood City’s opponents met the first deadline the girls would turn their attention to the second. They would descend on the girl who caught the inbounds pass and “trap” her. Anjali was the designated trapper. She’d sprint over and double-team the dribbler, stretching her long arms high and wide. Maybe she’d steal the ball. Maybe the other player would throw it away in a panic—or get bottled up and stalled, so that the ref would end up blowing the whistle. “When we first started out, no one knew how to play defense or anything,” Anjali said. “So my dad said the whole game long, ‘Your job is to guard someone and make sure they never get the ball on inbounds plays.’ It’s the best feeling in the world to steal the ball from someone. We would press and steal, and do that over and over again. It made people so nervous. There were teams that were a lot better than us, that had been playing a long time, and we would beat them.”
The Redwood City players would jump ahead 4–0, 6–0, 8–0, 12–0. One time, they led 25–0. Because they typically got the ball underneath their opponent’s basket, they rarely had to take low-percentage, long-range shots that required skill and practice. They shot layups. In one of the few games that Redwood City lost that year, only four of the team’s players showed up. They pressed anyway. Why not? They lost by three points.
“What that defense did for us is that we could hide our weaknesses,” Rometra Craig said. She helped out once Redwood City advanced to the regional championships. “We could hide the fact that we didn’t have good outside shooters. We could hide the fact that we didn’t have the tallest lineup, because as long as we played hard on defense we were getting steals and getting easy layups. I was honest with the girls. I told them, ‘We’re not the best basketball team out there.’ But they understood their roles.” A twelve-year-old girl would go to war for Rometra. “They were awesome,” she said.
Lawrence attacked the Turks where they were weak—the railroad—and not where they were strong, Medina. Redwood City attacked the inbounds pass, the point in a game where a great team is as vulnerable as a weak one. Lawrence extended the battlefield over as large an area as possible. So did the girls of Redwood City. They defended all ninety-four feet. The full-court press is legs, not arms. It supplants ability with effort. It is basketball for those “quite unused to formal warfare, whose assets were movement, endurance, individual intelligence . . . courage.”
“It’s an exhausting strategy,” Roger Craig said. He and Ranadivé were in a TIBCO conference room, reminiscing about their dream season. Ranadivé was at the whiteboard, diagramming the intricacies of the Redwood City press. Craig was sitting at the table.
“My girls had to be more fit than the others,” Ranadivé said.
“He used to make them run,” Craig said, nodding approvingly.
“We followed soccer strategy in practice,” Ranadivé said. “I would make them run and run and run. I couldn’t teach them skills in that short period of time, and so all we did was make sure they were fit and had some basic understanding of the game. That’s why attitude plays such a big role in this, because you’re going to get tired.” He turned to Craig. “What was our cheer again?”
The two men thought for a moment, then shouted out happily, in unison, “One, two, three, attitude!”
That was it! The whole Redwood City philosophy was based on a willingness to try harder than anyone else.
“One time, some new girls joined the team,” Ranadivé said, “and so in the first practice I had I was telling them, ‘Look, this is what we’re going to do,’ and I showed them. I said, ‘It’s all about attitude.’ And there was this one new girl on the team, and I was worried that she wouldn’t get the whole attitude thing. Then we did the cheer and she said, ‘No, no, it’s not One, two, three, attitude. It’s One, two, three, attitude hah’”—at which point Ranadivé and Craig burst out laughing.
In January of 1971, the Fordham University Rams played a basketball game against the University of Massachusetts Redmen. The game was in Amherst, at the legendary arena known as the Cage, where the Redmen hadn’t lost since December of 1969. Their record was 11–1. The Redmen’s star was none other than Julius Erving—Dr. J. The UMass team was very, very good. Fordham, by contrast, was a team of scrappy kids from the Bronx and Brooklyn. Their center had torn up his knee the first week of the season, which meant that their tallest player was six feet five. Their starting forward—and forwards are typically almost as tall as centers—was Charlie Yelverton, who was six feet two. But from the opening buzzer the Rams launched a full-court press, and never let up. “We jumped out to a thirteen-to-six lead, and it was a war the rest of the way,” Digger Phelps, the Fordham coach at the time, recalls. “These were tough city kids. We played you ninety-four feet. We knew that sooner or later we were going to make you crack.” Phelps sent in one indefatigable Irish or Italian kid from the Bronx after another to guard Erving, and, one by one, the indefatigable Irish and Italian kids fouled out. None of them were as good as Erving. It didn’t matter. Fordham won, 87–79.
In the world of basketball, there is one story after another like this about legendary games where David used the full-court press to beat Goliath. Yet the puzzle of the press is that it has never become popular. People look at upsets like Fordham over UMass and call them flukes. Basketball sages point out that the press can be beaten by a well-coached team with adept ball handlers and astute passers—and that is true. Ranadivé readily admitted that all an opposing team had to do to beat Redwood City was press back: the girls were not good enough to handle their own medicine. Playing insurgent basketball did not guarantee victory. It was simply the best chance an underdog had of beating Goliath. If Fordham had played UMass the conventional way, it would have lost by thirty points. And yet somehow that lesson has escaped the basketball establishment.
What did Digger Phelps do, the season after his stunning upset of UMass? He never used the full-court press the same way again. The UMass coach, Jack Leaman, was humbled in his own gym by a bunch of street kids. Did he learn from his defeat and use the press himself the next time he had a team of underdogs? He did not.
The only person who seemed to have absorbed the lessons of that game was a skinny little guard on the UMass freshman team named Rick Pitino. He didn’t play that day. He watched, and his eyes grew wide. Even now, thirty-eight years later, he can name, from memory, nearly every player on the Fordham team: Yelverton, Sullivan, Mainor, Charles, Zambetti. “They came in with the most unbelievable pressing team I’d ever seen,” Pitino said. “Five guys between six feet five and six feet. It was unbelievable how they covered ground. I studied it. There is no way they should have beaten us. Nobody beat us at the Cage.”
Pitino became the head coach at Boston University in 1978, when he was twenty-five years old, and used the press to take the school to its first N.C.A.A. tournament appearance in twenty-four years. At his next head-coaching stop, Providence College, Pitino took over a team that had gone 11–20 the year before. The players were short and almost entirely devoid of talent—a carbon copy of the Fordham Rams. They pressed, and ended up one game away from playing for the national championship. At the University of Kentucky, in the mid-nineteen-nineties, Pitino took his team to the Final Four three times—and won a national championship—with full-court pressure, and then rode the full-court press back to the Final Four in 2005, as the coach at the University of Louisville. This year, his Louisville team entered the N.C.A.A. tournament ranked No. 1 in the land. College coaches of Pitino’s calibre typically have had numerous players who have gone on to be bona-fide all-stars at the professional level. In his many years of coaching, Pitino has had one, Antoine Walker. It doesn’t matter. Every year, he racks up more and more victories.
“The greatest example of the press I’ve ever coached was my Kentucky team in ’96, when we played L.S.U.,” Pitino said. He was at the athletic building at the University of Louisville, in a small room filled with television screens, where he watches tapes of opponents’ games. “Do we have that tape?” Pitino called out to an assistant. He pulled a chair up close to one of the monitors. The game began with Kentucky stealing the ball from L.S.U., deep in L.S.U.’s end. Immediately, the ball was passed to Antoine Walker, who cut to the basket for a layup. L.S.U. got the ball back. Kentucky stole it again. Another easy basket by Walker. “Walker had almost thirty points at halftime,” Pitino said. “He dunked it almost every time. When we steal, he just runs to the basket.” The Kentucky players were lightning quick and long-armed, and swarmed around the L.S.U. players, arms flailing. It was mayhem. Five minutes in, it was clear that L.S.U. was panicking.
Pitino trains his players to look for what he calls the “rush state” in their opponents—that moment when the player with the ball is shaken out of his tempo—and L.S.U. could not find a way to get out of the rush state. “See if you find one play that L.S.U. managed to run,” Pitino said. You couldn’t. The L.S.U. players struggled to get the ball inbounds, and, if they did that, they struggled to get the ball over mid-court, and on those occasions when they managed both those things they were too overwhelmed and exhausted to execute their offense the way they had been trained to. “We had eighty-six points at halftime,” Pitino went on—eighty-six points being, of course, what college basketball teams typically score in an entire game. “And I think we’d forced twenty-three turnovers at halftime,” twenty-three turnovers being what college basketball teams might force in two games. “I love watching this,” Pitino said. He had a faraway look in his eyes. “Every day, you dream about getting a team like this again.” So why are there no more than a handful of college teams who use the full-court press the way Pitino does?
Arreguín-Toft found the same puzzling pattern. When an underdog fought like David, he usually won. But most of the time underdogs didn’t fight like David. Of the two hundred and two lopsided conflicts in Arreguín-Toft’s database, the underdog chose to go toe to toe with Goliath the conventional way a hundred and fifty-two times—and lost a hundred and nineteen times. In 1809, the Peruvians fought the Spanish straight up and lost; in 1816, the Georgians fought the Russians straight up and lost; in 1817, the Pindaris fought the British straight up and lost; in the Kandyan rebellion of 1817, the Sri Lankans fought the British straight up and lost; in 1823, the Burmese chose to fight the British straight up and lost. The list of failures was endless. In the nineteen-forties, the Communist insurgency in Vietnam bedevilled the French until, in 1951, the Viet Minh strategist Vo Nguyen Giap switched to conventional warfare—and promptly suffered a series of defeats. George Washington did the same in the American Revolution, abandoning the guerrilla tactics that had served the colonists so well in the conflict’s early stages. “As quickly as he could,” William Polk writes in “Violent Politics,” a history of unconventional warfare, Washington “devoted his energies to creating a British-type army, the Continental Line. As a result, he was defeated time after time and almost lost the war.”
It makes no sense, unless you think back to that Kentucky-L.S.U. game and to Lawrence’s long march across the desert to Aqaba. It is easier to dress soldiers in bright uniforms and have them march to the sound of a fife-and-drum corps than it is to have them ride six hundred miles through the desert on the back of a camel. It is easier to retreat and compose yourself after every score than swarm about, arms flailing. We tell ourselves that skill is the precious resource and effort is the commodity. It’s the other way around. Effort can trump ability—legs, in Saxe’s formulation, can overpower arms—because relentless effort is in fact something rarer than the ability to engage in some finely tuned act of motor coördination.
“I have so many coaches come in every year to learn the press,” Pitino said. Louisville was the Mecca for all those Davids trying to learn how to beat Goliaths. “Then they e-mail me. They tell me they can’t do it. They don’t know if they have the bench. They don’t know if the players can last.” Pitino shook his head. “We practice every day for two hours straight,” he went on. “The players are moving almost ninety-eight per cent of the practice. We spend very little time talking. When we make our corrections”—that is, when Pitino and his coaches stop play to give instruction—“they are seven-second corrections, so that our heart rate never rests. We are always working.” Seven seconds! The coaches who came to Louisville sat in the stands and watched that ceaseless activity and despaired. The prospect of playing by David’s rules was too daunting. They would rather lose.
In 1981, a computer scientist from Stanford University named Doug Lenat entered the Traveller Trillion Credit Squadron tournament, in San Mateo, California. It was a war game. The contestants had been given several volumes of rules, well beforehand, and had been asked to design their own fleet of warships with a mythical budget of a trillion dollars. The fleets then squared off against one another in the course of a weekend. “Imagine this enormous auditorium area with tables, and at each table people are paired off,” Lenat said. “The winners go on and advance. The losers get eliminated, and the field gets smaller and smaller, and the audience gets larger and larger.”
Lenat had developed an artificial-intelligence program that he called Eurisko, and he decided to feed his program the rules of the tournament. Lenat did not give Eurisko any advice or steer the program in any particular strategic direction. He was not a war-gamer. He simply let Eurisko figure things out for itself. For about a month, for ten hours every night on a hundred computers at Xerox PARC, in Palo Alto, Eurisko ground away at the problem, until it came out with an answer. Most teams fielded some version of a traditional naval fleet—an array of ships of various sizes, each well defended against enemy attack. Eurisko thought differently. “The program came up with a strategy of spending the trillion on an astronomical number of small ships like P.T. boats, with powerful weapons but absolutely no defense and no mobility,” Lenat said. “They just sat there. Basically, if they were hit once they would sink. And what happened is that the enemy would take its shots, and every one of those shots would sink our ships. But it didn’t matter, because we had so many.” Lenat won the tournament in a runaway.
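The arithmetic behind Eurisko’s swarm is simple enough to sketch. The toy duel below, in Python, uses invented ship counts and hit points and a far cruder combat rule than the actual Traveller rules; it only shows why, at a fixed budget, numbers can beat armor:

    # Toy duel: a few durable ships against many cheap, one-hit boats.
    # All numbers are invented for illustration.
    def duel(big_ships, hp_per_big_ship, swarm_ships):
        big_pool = big_ships * hp_per_big_ship   # pooled hit points
        while big_pool > 0 and swarm_ships > 0:
            big_pool -= swarm_ships              # each boat lands one hit
            surviving_big = max(0, -(-big_pool // hp_per_big_ship))  # ceil
            swarm_ships -= surviving_big         # each big ship sinks a boat
        return "swarm" if swarm_ships > 0 else "big fleet"

    # A notionally equal budget: 5 heavily armored ships vs. 120 throwaway boats.
    print(duel(5, 30, 120), "wins")   # the swarm absorbs its losses and wins

Every shot the big fleet fires sinks a boat, just as Lenat says, but the swarm lands so many hits per exchange that it wins anyway.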
The next year, Lenat entered once more, only this time the rules had changed. Fleets could no longer just sit there. Now one of the criteria of success in battle was fleet “agility.” Eurisko went back to work. “What Eurisko did was say that if any of our ships got damaged it would sink itself—and that would raise fleet agility back up again,” Lenat said. Eurisko won again.
Eurisko was an underdog. The other gamers were people steeped in military strategy and history. They were the sort who could tell you how Wellington had outfoxed Napoleon at Waterloo, or what exactly happened at Antietam. They had been raised on Dungeons and Dragons. They were insiders. Eurisko, on the other hand, knew nothing but the rule book. It had no common sense. As Lenat points out, a human being understands the meaning of the sentences “Johnny robbed a bank. He is now serving twenty years in prison,” but Eurisko could not, because as a computer it was perfectly literal; it could not fill in the missing step—“Johnny was caught, tried, and convicted.” Eurisko was an outsider. But it was precisely that outsiderness that led to Eurisko’s victory: not knowing the conventions of the game turned out to be an advantage.
“Eurisko was exposing the fact that any finite set of rules is going to be a very incomplete approximation of reality,” Lenat explained. “What the other entrants were doing was filling in the holes in the rules with real-world, realistic answers. But Eurisko didn’t have that kind of preconception, partly because it didn’t know enough about the world.” So it found solutions that were, as Lenat freely admits, “socially horrifying”: send a thousand defenseless and immobile ships into battle; sink your own ships the moment they get damaged.
This is the second half of the insurgent’s creed. Insurgents work harder than Goliath. But their other advantage is that they will do what is “socially horrifying”—they will challenge the conventions about how battles are supposed to be fought. All the things that distinguish the ideal basketball player are acts of skill and coördination. When the game becomes about effort over ability, it becomes unrecognizable—a shocking mixture of broken plays and flailing limbs and usually competent players panicking and throwing the ball out of bounds. You have to be outside the establishment—a foreigner new to the game or a skinny kid from New York at the end of the bench—to have the audacity to play it that way. George Washington couldn’t do it. His dream, before the war, was to be a British Army officer, finely turned out in a red coat and brass buttons. He found the guerrillas who had served the American Revolution so well to be “an exceeding dirty and nasty people.” He couldn’t fight the establishment, because he was the establishment.
T. E. Lawrence, by contrast, was the farthest thing from a proper British Army officer. He did not graduate with honors from Sandhurst. He was an archeologist by trade, a dreamy poet. He wore sandals and full Bedouin dress when he went to see his military superiors. He spoke Arabic like a native, and handled a camel as if he had been riding one all his life. And David, let’s not forget, was a shepherd. He came at Goliath with a slingshot and staff because those were the tools of his trade. He didn’t know that duels with Philistines were supposed to proceed formally, with the crossing of swords. “When the lion or the bear would come and carry off a sheep from the herd, I would go out after him and strike him down and rescue it from his clutches,” David explained to Saul. He brought a shepherd’s rules to the battlefield.
The price that the outsider pays for being so heedless of custom is, of course, the disapproval of the insider. Why did the Ivy League schools of the nineteen-twenties limit the admission of Jewish immigrants? Because they were the establishment and the Jews were the insurgents, scrambling and pressing and playing by immigrant rules that must have seemed to the Wasp élite of the time to be socially horrifying. “Their accomplishment is well over a hundred per cent of their ability on account of their tremendous energy and ambition,” the dean of Columbia College said of the insurgents from Brooklyn, the Bronx, and the Lower East Side. He wasn’t being complimentary. Goliath does not simply dwarf David. He brings the full force of social convention against him; he has contempt for David.
“In the beginning, everyone laughed at our fleet,” Lenat said. “It was really embarrassing. People felt sorry for us. But somewhere around the third round they stopped laughing, and some time around the fourth round they started complaining to the judges. When we won again, some people got very angry, and the tournament directors basically said that it was not really in the spirit of the tournament to have these weird computer-designed fleets winning. They said that if we entered again they would stop having the tournament. I decided the best thing to do was to graciously bow out.”
It isn’t surprising that the tournament directors found Eurisko’s strategies beyond the pale. It’s wrong to sink your own ships, they believed. And they were right. But let’s remember who made that rule: Goliath. And let’s remember why Goliath made that rule: when the world has to play on Goliath’s terms, Goliath wins.
The trouble for Redwood City started early in the regular season. The opposing coaches began to get angry. There was a sense that Redwood City wasn’t playing fair—that it wasn’t right to use the full-court press against twelve-year-old girls, who were just beginning to grasp the rudiments of the game. The point of basketball, the dissenting chorus said, was to learn basketball skills. Of course, you could as easily argue that in playing the press a twelve-year-old girl learned something much more valuable—that effort can trump ability and that conventions are made to be challenged. But the coaches on the other side of Redwood City’s lopsided scores were disinclined to be so philosophical.
“There was one guy who wanted to have a fight with me in the parking lot,” Ranadivé said. “He was this big guy. He obviously played football and basketball himself, and he saw that skinny, foreign guy beating him at his own game. He wanted to beat me up.”
Roger Craig says that he was sometimes startled by what he saw. “The other coaches would be screaming at their girls, humiliating them, shouting at them. They would say to the refs—‘That’s a foul! That’s a foul!’ But we weren’t fouling. We were just playing aggressive defense.”
“My girls were all blond-haired white girls,” Ranadivé said. “My daughter is the closest we have to a black girl, because she’s half-Indian. One time, we were playing this all-black team from East San Jose. They had been playing for years. These were born-with-a-basketball girls. We were just crushing them. We were up something like twenty to zero. We wouldn’t even let them inbound the ball, and the coach got so mad that he took a chair and threw it. He started screaming at his girls, and of course the more you scream at girls that age the more nervous they get.” Ranadivé shook his head: never, ever raise your voice. “Finally, the ref physically threw him out of the building. I was afraid. I think he couldn’t stand it because here were all these blond-haired girls who were clearly inferior players, and we were killing them.”
At the nationals, the Redwood City girls won their first two games. In the third round, their opponents were from somewhere deep in Orange County. Redwood City had to play them on their own court, and the opponents supplied their own referee as well. The game was at eight o’clock in the morning. The Redwood City players left their hotel at six, to beat the traffic. It was downhill from there. The referee did not believe in “One, two, three, attitude hah.” He didn’t think that playing to deny the inbounds pass was basketball. He began calling one foul after another.
“They were touch fouls,” Craig said. Ticky-tacky stuff. The memory was painful.
“My girls didn’t understand,” Ranadivé said. “The ref called something like four times as many fouls on us as on the other team.”
“People were booing,” Craig said. “It was bad.”
“A two-to-one ratio is understandable, but a ratio of four to one?” Ranadivé shook his head.
“One girl fouled out.”
“We didn’t get blown out. There was still a chance to win. But . . .”
Ranadivé called the press off. He had to. The Redwood City players retreated to their own end, and passively watched as their opponents advanced down the court. They did not run. They paused and deliberated between each possession. They played basketball the way basketball is supposed to be played, and they lost—but not before making Goliath wonder whether he was a giant, after all.
dareread · 8 years ago
Link
By now, most of us have heard about Google's so-called "anti-diversity" manifesto and how James Damore, the engineer who wrote it, has been fired from his job.
In the memo, titled Google's Ideological Echo Chamber, Mr. Damore called out the current PC culture, arguing that the gender gap at Google was due not to discrimination, but to inherent differences in what men and women find interesting. Danielle Brown, Google's newly appointed vice-president for diversity, integrity and governance, said the memo advanced "incorrect assumptions about gender," and Mr. Damore confirmed last night that he was fired for "perpetuating gender stereotypes."
Despite how it's been portrayed, the memo was fair and factually accurate. Scientific studies have confirmed sex differences in the brain that lead to differences in our interests and behaviour.
As mentioned in the memo, gendered interests are predicted by exposure to prenatal testosterone – higher levels are associated with a preference for mechanically interesting things and occupations in adulthood. Lower levels are associated with a preference for people-oriented activities and occupations. This is why STEM (science, technology, engineering and mathematics) fields tend to be dominated by men.
We see evidence for this in girls with a genetic condition called congenital adrenal hyperplasia, who are exposed to unusually high levels of testosterone in the womb. When they are born, these girls prefer male-typical, wheeled toys, such as trucks, even if their parents offer more positive feedback when they play with female-typical toys, such as dolls. Similarly, men who are interested in female-typical activities were likely exposed to lower levels of testosterone.
As well, new research from the field of genetics shows that testosterone alters the programming of neural stem cells, leading to sex differences in the brain even before it's finished developing in utero. This further suggests that our interests are influenced strongly by biology, as opposed to being learned or socially constructed.
Many people, including a former Google employee, have attempted to refute the memo's points, alleging that they contradict the latest research.
I'd love to know what "research done […] for decades" he's referring to, because thousands of studies would suggest otherwise. A single study, published in 2015, did claim that male and female brains existed along a "mosaic" and that it isn't possible to differentiate them by sex, but this has been refuted by four – yes, four – academic studies since.
This includes a study that analyzed the exact same brain data from the original study and found that the sex of a given brain could be correctly identified with 69-per-cent to 77-per-cent accuracy.
Of course, differences exist at the individual level, and this doesn't mean environment plays no role in shaping us. But to claim that there are no differences between the sexes when looking at group averages, or that culture has greater influence than biology, simply isn't true.
In fact, research has shown that cultures with greater gender equity have larger sex differences when it comes to job preferences, because in these societies, people are free to choose their occupations based on what they enjoy.
As the memo suggests, seeking to fulfill a 50-per-cent quota of women in STEM is unrealistic. As gender equity continues to improve in developing societies, we should expect to see this gender gap widen.
This trend continues into the area of personality, as well. Contrary to what detractors would have you believe, women are, on average, higher in neuroticism and agreeableness, and lower in stress tolerance.
Some intentionally deny the science because they are afraid it will be used to justify keeping women out of STEM. But sexism isn't the result of knowing facts; it's the result of what people choose to do with them.
This is exactly what the mob of outrage should be mobilizing for, instead of denying biological reality and being content to spend a weekend doxxing a man so that he would lose his job. At this point, as foreshadowed in Mr. Damore's manifesto, we should be more concerned about viewpoint diversity than diversity revolving around gender.
dareread · 8 years ago
Link
The Trump White House isn’t known as a hot spot for Ivy League intellectuals. But last month, a Harvard academic slipped into the White House complex for an unusual meeting. Graham Allison, an avuncular foreign policy thinker who served under Reagan and Clinton, was paying a visit to the National Security Council, where he briefed a group of staffers on one of history’s most studied conflicts—a brutal war waged nearly 2,500 years ago, one whose lessons still resonate, even in the administration of a president who doesn’t like to read.
The subject was America’s rivalry with China, cast through the lens of ancient Greece. The 77-year-old Allison is the author of a recent book based on the writings of Thucydides, the ancient historian famous for his epic chronicle of the Peloponnesian War between the Greek states of Athens and Sparta. Allison cites the Greek scholar’s summation of why the two powers fought: “What made war inevitable was the growth of Athenian power and the fear which this caused in Sparta.” He warns that the same dynamic could drive this century’s rising empire, China, and the United States into a war neither wants. Allison calls this the “Thucydides Trap,” and it’s a question haunting some very important people in the Trump administration, particularly as Chinese officials arrive Wednesday for “diplomatic and security dialogue” talks between Washington and Beijing designed, in large part, to avoid conflict between the world’s two strongest nations.
It might seem curious that an ancient Greek would cast a shadow over a meeting between a group of diplomats and generals from America and Asia. Most Americans probably don’t know Thucydides from Mephistopheles. But the Greek writer is a kind of demigod to international relations theorists and military historians, revered for his elegant chronicle of one of history’s most consequential wars, and his timeless insights into the nature of politics and warfare. The Yale University historian Donald Kagan calls Thucydides’ account “a source of wisdom about the behavior of human beings under the enormous pressures imposed by war, plague, and civil strife.”
Thucydides is especially beloved by the two most influential figures on Trump’s foreign policy team. National security adviser H.R. McMaster has called Thucydides’ work an “essential” military text, taught it to students and quoted from it in speeches and op-eds. Defense Secretary James Mattis is also fluent in Thucydides’ work: “If you say to him, ‘OK, how about the Melian Dialogue?’ he could tell you exactly what it is,” Allison says—referring to one particularly famous passage. When former Defense Secretary William Cohen introduced him at his confirmation hearing, Cohen said Mattis was likely the only person present “who can hear the words ‘Thucydides Trap’ and not have to go to Wikipedia to find out what it means.”
That’s not true in the Trump White House, where another Peloponnesian War aficionado can be found in the office of chief strategist Steve Bannon. A history buff fascinated with grand conflict, Bannon once even used “Sparta”—one of the most militarized societies history has known—as a computer password. (“He talked a lot about Sparta,” his former Hollywood writing partner, Julia Jones, told The Daily Beast. An unnamed former colleague recalled for the New Yorker Bannon’s “long diatribes” about the Peloponnesian War.)
In an August 2016 article for his former employer, Breitbart News, Bannon likened the conservative media rivalry between Breitbart and Fox News to the Peloponnesian War, casting Breitbart as the disciplined warrior state of Sparta challenging a decadently Athenian Fox. There’s also NSC spokesman Michael Anton, a student of the classics who owns two copies of Thucydides’ fabled work. (“The acid test for me is: Do you read the Hobbes translation?” he says. “If you’ve read that translation, you’ve got my respect.”)
That’s a lot of Greek history for any administration, never mind one led by our current tweeter-in-chief. “Most people in Washington have almost no historical memory or grounding,” Allison says. “Mattis reads a lot of books. McMaster can quote more central lines from more books than anybody I know. And Bannon reads a huge amount of history. So I think this is an unusual configuration.” Allison also left a copy of his new book, Destined for War: Can America and China Escape Thucydides’s Trap? for Anton, in whose West Wing office it now resides. Another copy went to Matthew Pottinger, the NSC’s Asia director, who invited Allison to address his colleagues last month.
As for President Donald Trump himself, there’s no evidence he’s taken any interest in an Athenian historian born almost 500 years before Jesus Christ. (Not that Trump has anything against Greece: “I love the Greeks. Oh, do I love them,” Trump said at a Greek Independence Day event in March. “Don't forget, I come from New York—that’s all I see is Greeks, they are all over the place.”)
But Trump might approve of the ancient Greek scholar’s sway over his senior strategists. Thucydides is considered a father of the “realist” school of international relations, which holds that nations act out of pragmatic self-interest with little regard for ideology, values or morality. “He was the founder of realpolitik,” Allison says. This view is distilled in the famous Melian Dialogue, a set of surrender talks that feature the cold-eyed conclusion that right and wrong means nothing in the face of raw strength. “In the real world, the strong do what they will and the weak suffer what they must,” concludes an Athenian ambassador—a Trumpian statement 2½ millennia before The Donald’s time.
The conservative military historian and Thucydides expert Victor Davis Hanson knows McMaster, Mattis and Bannon to varying degrees, and says they can apply useful lessons about the Peloponnesian War to a fracturing world. “I think their knowledge of Thucydides might remind them that the world works according to perceived self-interest, not necessarily idealism as expressed in the General Assembly of the U.N.,” Hanson says. “That does not mean they are cynical as much as they are not naive.”
In recent months, both Mattis and McMaster have publicly cited Thucydides’ diagnosis of the three factors that drive nations to conflict. “People fight today for the same reasons Thucydides identified 2,500 years ago: fear, honor and interest,” McMaster wrote in a July 2013 New York Times op-ed that argued for bringing historical perspective to military challenges. Mattis also endorsed the universal power of “fear, honor and interest” during his confirmation hearing (prompting Maine Senator Angus King to announce that he had stored the quote in his phone).
Mattis was answering another senator’s somewhat puzzled question about the meaning of the “Thucydides Trap,” raised earlier in the hearing by Cohen. The Marine general wouldn’t endorse the theory that the U.S. and China are on a collision course. But he did say that “we’re going to have to manage that competition between us and China,” and that the U.S. would have to “maintain a very strong military so our diplomats are always engaging from a position of strength when we deal with a rising power.”
Kori Schake, a former George W. Bush State Department official at Stanford University’s Hoover Institution who co-wrote a 2016 book with Mattis, has spoken with Mattis about Thucydides, whose history, she says, gives the Pentagon chief “a rich appreciation for the way democratic societies can talk themselves into folly and destruction, as Athens (the rising power) does. It illustrates for him the danger of action without careful analysis of consequences.”
A U.S. military conflict with China would be a global disaster. But while Allison believes it is entirely possible, he does not call it inevitable. His book identifies 16 historical case studies in which an established power like Sparta (or the United States) was confronted with a fast-rising rival like Athens (or China). Twelve of those cases led to war. Four were resolved peacefully. Allison hopes that readers—including officials in the Trump administration—can draw from the latter examples. “I am writing this history to help people not make mistakes,” he says.
Allison’s theory, which he first promoted in 2015, has caught the attention of the Chinese themselves. During a visit to Seattle that September, Chinese President Xi Jinping addressed the gloomy prospect of a collision course, saying there is “no such thing as the so-called Thucydides Trap in the world,” while adding that if major nations “time and again make the mistakes of strategic miscalculation, they might create such traps for themselves.”
If Trump really is confronted with a historical trap, it remains unclear how he might escape it. Senior Trump officials complain that the U.S. has accommodated China’s rise for decades, hoping that integration into the Western economic system would alter its Communist values. That hasn't happened. But it’s not clear how Trump might try to reverse the trend. He hasn’t followed through on his provocative campaign pledges to declare China a currency manipulator and impose huge tariffs on its exports, instead forging a chummy relationship with Xi that has so far focused on bringing Chinese pressure to bear on North Korea.
Some China experts say Allison’s theory has implications that Trump isn’t likely to countenance. “If you’re worried about the Thucydides Trap, then you try to adopt a set of policies that reduce the threat of confrontation, and ultimately seek to reassure China,” says Evan Medeiros, a former NSC Asia director for the Obama White House. “That’s very different from the hard-core realist view of the world, especially with people like Bannon.” (In his 2016 article on the Fox-Breitbart rivalry, Bannon writes that Thucydides “would warn” Fox executives that their failure to take Breitbart’s rise more seriously “will only accelerate Fox’s fall.”)
Trump’s strategy is still a work in progress. On Tuesday, he tweeted that China’s assistance on the North Korea problem “has not worked out. At least I know China tried!” He did not explain what the consequences might be, though they will presumably be a topic of conversation when Mattis and Secretary of State Rex Tillerson meet senior Chinese diplomatic and military officials in Washington this week.
Thucydides may not be part of that conversation. Allison’s theory is just one application of the Greek historian’s insight. There are others, as Schake notes—including lessons focused more on a nation’s internal threats than on external foes.
“Most of all, Thucydides’ history is a story of the devastation that political disunion brings to a vibrant republic,” she says, “something Secretary Mattis often talks about and every American should worry about.”
dareread · 8 years ago
Link
Recently Eerdmans published The Authority of the Christian Scriptures.[1] It is a rather big book, with about thirty-five contributors, all of them experts in their fields. The hope and prayer that guided the project were that this volume of essays would be used by God to stabilize worldwide evangelicalism—and not only evangelicals, but all who hold to confessional Christianity. More recently, however, I have been pondering the fact that many Christians slide away from full confidence in the trustworthiness of Scripture for reasons that are not so much intellectual as broadly cultural. I am not now thinking of the college student brought up in a confessional home who goes to university and is for the first time confronted with informed and charming intellectuals whose reasoning calls into question the structure and fabric of his or her Christian belief. Clearly that student needs a lot more information; the period of doubt is often a rite of passage. No, in these jottings I’m reflecting on subtle ways in which we may reduce Scripture’s authority in our lives—and the “we” refers to many Christians in the world, especially the Western world, and not least pastors and scholars. If they then introduce intellectual and cognitive objections to the authority of Scripture in order to bolster the move toward skepticism that they have already begun, a focus on such intellectual and cognitive objections, however necessary, is in danger of addressing symptoms without diagnosing the problem.
It might be useful to try to identify some of these subtle factors.
1. An Appeal to Selective Evidence
The most severe forms of this drift are well exemplified in the teaching and preaching of the HWPG—the health, wealth, and prosperity gospel. Link together some verses about God sending prosperity to the land with others that reflect on the significance of being a child of the King, and the case is made—provided, of course, that we ignore the many passages about taking up our cross, about suffering with Christ so that we may reign with him, about rejoicing because we are privileged to suffer for the name, and much more. These breaches are so egregious that they are easy to spot. What I’m thinking of now is something subtler: the simple refusal to talk about disputed matters in order to sidestep controversy in the local church. For the sake of peace, we offer anodyne treatments of hot topics (poverty, racism, homosexual marriage, distinctions between men and women) in the forlorn hope that some of these topics will eventually go away. The sad reality is that if we do not try to shape our thinking on such topics under the authority of Scripture, the result is that many of us will simply pick up the culture’s thinking on them.
The best antidote is systematic expository preaching, for such preaching forces us to deal with texts as they come up. Topical preaching finds it easier to avoid the hard texts. Yet cultural blinders can easily afflict expositors, too. A Christian preacher I know in a major Muslim nation says he loves to preach evangelistically, especially around Christmas, from Matthew 1 and 2, because these chapters include no fewer than five reports of dreams and visions—and dreams and visions in the dominant culture of his country are commonly accorded great respect. When I have preached through Matthew 1 and 2, I have never focused on those five dreams and visions (though I haven’t entirely ignored them), precisely because such dreams and visions are not customarily accorded great credibility in my culture. In other words, ruthless self-examination of one’s motives and biases, so far as we are aware of them, can go a long way to mitigating this problem.
2. Heart Embarrassment before the Text
This is a more acute form of the first failure. Not infrequently preachers avoid certain topics, in part because those topics embarrass them. The embarrassment may arise from the preacher’s awareness that he has not yet sufficiently studied the topic so as to give him the confidence to tackle it (e.g., some elements of eschatology, transgenderism), or because of some general unease at the topic (e.g., predestination), or because the preacher knows his congregation is sharply divided on the topic (any number of possibilities), or because the preacher simply really does not like the subject even though it surfaces pretty often in the Bible (e.g., hell, eternal judgment). In its ugliest form, the preacher says something like this: “Our passage this morning, Luke 16:19–31, like quite a number of other passages drawn from the life of Jesus, depicts hell in some pretty shocking ways. Frankly, I wish I could avoid these passages. They leave me distinctly uncomfortable. But of course, I cannot ignore them entirely, for after all they are right here in the Bible.” The preacher has formally submitted to Scripture’s authority, while presenting himself as someone who is more compassionate or more sensitive than Jesus. This is as deceptive as it is wicked—and it is easy to multiply examples.
Contrast the apostle Paul: “Therefore, since through God’s mercy we have this ministry, we do not lose heart. Rather, we have renounced secret and shameful ways; we do not use deception, nor do we distort the word of God. On the contrary, by setting forth the truth plainly we commend ourselves to everyone’s conscience in the sight of God” (2 Cor 4:1–2).
3. Publishing Ventures That Legitimate What God Condemns
Recently Zondervan published Two Views on Homosexuality, the Bible, and the Church;[2] this book bills these two views as “affirming” and “non-affirming,” and two authors support each side. Both sides, we are told, argue “from Scripture.” If the “affirming” side was once viewed as a stance that could not be held by confessional evangelicals, this book declares that both the non-affirming stance and the affirming stance are represented within the evangelical camp, so the effect of this book is to present alternative evangelical positions, one that thinks the Bible prohibits homosexual marriage, and the other that embraces it.
All who read these lines will of course be aware of the many books that proffer three views or four views (or two, or five) on this or that subject: the millennium, election, hell, baptism, and many more. Surely this new book on homosexuality is no different. To this a couple of things must be said.
(a) The format of such volumes, “x views on y,” is intrinsically slippery. It can be very helpful to students to read, in one volume, diverse stances on complex subjects, yet the format is in danger of suggesting that each option is equally “biblical” because it is argued “from Scripture.” Of course, Jehovah’s Witnesses argue “from Scripture,” but most of us would hasten to add that their exegesis, nominally “from Scripture,” is woefully lacking. The “x views on y” format tilts evaluation away from such considerations, baptizing each option with at least theoretical equivalent legitimacy. In short, the “x views on y” format, as useful as it is for some purposes, is somewhat manipulative. As I have argued elsewhere, not all disputed things are properly disputable.[3]
(b) Otherwise put, it is generally the case that books of the “x views on y” format operate within some implicit confessional framework or other. That’s why no book of this sort has (yet!) been published with a title such as “Three Views on Whether Jesus is God.” We might bring together a liberal committed to philosophical naturalism, a Jehovah’s Witness, and a confessional Christian. But it’s hard to imagine a book like that getting published—or, more precisely, a book like that would be tagged as a volume on comparative religion, not a volume offering options for Christians. Most books of the “x views on y” sort restrict the subject, the y-component, to topics that are currently allowed as evangelical options. To broaden this list to include an option that no evangelical would have allowed ten years ago—say, the denial of the deity of Jesus, or the legitimacy of homosexual practice—is designed simultaneously to assert that Scripture is less clear on the said topic than was once thought, and to re-define, once again, the borders of evangelicalism. On both counts, the voice of Scripture as the norma normans (“the rule that rules”), though theoretically still intact, has in fact been subtly reduced.
Inevitably, there have been some articulate voices that insist that adopting an “affirming” stance on homosexual marriage does not jeopardize one’s salvation and should not place such a person outside the evangelical camp. For example, in his essay “An Evangelical Approach to Sexual Ethics,” Stephen Holmes concludes, “Sola Fide. I have to stand on that. Because the Blood flowed where I walk and where we all walk. One perfect sacrifice complete, once for all offered for all the world, offering renewal to all who will put their faith in Him. And if that means me, in all my failures and confusions, then it also means my friends who affirm same-sex marriage, in all their failures and confusions. If my faithful and affirming friends have no hope of salvation, then nor do I.”4 But this is an abuse of the evangelical insistence on sola fide. I do not know any Christian who thinks that salvation is appropriated by means of faith plus an affirmation of heterosexuality. Faith alone is the means by which sola gratia is appropriated. Nevertheless, that grace is so powerful it transforms. Salvation by grace alone through faith alone issues in a new direction under the lordship of King Jesus. Those who are sold out to the “acts of the flesh ... will not inherit the kingdom of God” (Gal 5:19–21). The apostle Paul makes a similar assertion in 1 Corinthians 6:9–11:
Or do you not know that wrongdoers will not inherit the kingdom of God? Do not be deceived: Neither the sexually immoral nor idolaters nor adulterers nor men who have sex with men nor thieves nor the greedy nor drunkards nor slanderers nor swindlers will inherit the kingdom of God. And that is what some of you were. But you were washed, you were sanctified, you were justified in the name of the Lord Jesus Christ and by the Spirit of our God (emphasis added).
In the context of Paul’s thought, he is not saying that without sinless perfection there is no entrance into the kingdom, but he is saying that such sins—whether greed or adultery or homosexual practice or whatever—no longer characterize the washed, sanctified, and justified. In other words, it is one thing to affirm with joy that sola fide means that we appropriate the merits of Christ and his cross by faith alone, not by our holiness—that holiness is the product of salvation, not its condition—and it is quite another thing to say that someone may self-consciously affirm the non-sinfulness of what God has declared to be sin, of what God insists excludes a person from the kingdom, and say that it doesn’t matter because sola fide will get them in anyway. The Scriptures make a lot of room for believers who slip and slide in “failures and confusions,” as Holmes put it, but who rest in God’s grace and receive it in God-given faith; they do not leave a lot of room for those who deny they are sinning despite what God says. Sola gratia and sola fide are always accompanied by sola Scriptura, by solus Christus, and by soli Deo gloria.
Or again, one really must question the recent argument by Alan Jacobs, from whose books and essays I have gained a great deal over the years. In his essay “On False Teachers: Bleat the Third,”5 however, Jacobs observes that when we think of doctrine so dangerous that it must be labeled and condemned, we naturally think of 2 Peter 2, where Peter warns against false teachers analogous to the false prophets of the old covenant, and 1 Timothy 4, where Paul warns us against doctrines of demons. What is remarkable, Jacobs argues, is that when Paul rebukes Peter in Antioch (Gal 2:11–14), he tells him he is not walking in line with the gospel, but he does not label him a “false teacher.” If Paul can be so restrained in rebuking Peter over conduct that challenged the very heart of the gospel, then should we not allow a very wide swath of what we perceive to be inappropriate conduct before we assert that someone is a false teacher expounding doctrines of demons? As Jacobs summarizes: “So if we can be as wrong as Peter was about something as foundational for the Gospel and still not be denounced as a false teacher, then I think it follows that if people do not ‘walk correctly’ in relation to biblical teaching about sexuality, they likewise should not be treated as pseudodidaskaloi [false teachers] but can be seen as brothers and sisters whom those who hold the traditional views patiently strive to correct, without coming out from among them, speaking with the patience and gentleness commended in 2 Timothy 2:24–25.” Against this, the following must be said.
(a) In Galatians 2:11–14, Paul is building off his earlier argument (2:1–10) that he and Peter enjoy theological agreement. Peter’s problem, Paul thinks, is that Peter’s conduct is inconsistent with his theological commitments. This is all the clearer when we see that Peter’s deference to “those from James” has to do not with any alleged confusion in his mind about justification, but with his concern for the persecution his fellow Jews are enduring back home in Jerusalem at the hands of “the circumcision group.”6 In any case, this is rather different from the current situation in which some voices are insisting that homosexual marriage is not wrong. Paul is not saying that Peter’s theology is wrong, but that his conduct is not in line with his theology. Incidentally, Jacobs assumes, probably correctly, that the Jerusalem Council (Acts 15) occurs after this episode in Antioch, prompting him to comment, “... and of course Paul’s view won out at the Council of Jerusalem (where, I have always thought comically, Peter presents it as his own view, with no reference to Paul having corrected him).”7 But there is nothing comical about Peter’s stance at the Council: Paul himself insists that so far as their theological understanding goes, he and Peter are in agreement, so it is neither surprising nor comical to find Peter saying the same thing.
(b) It is not clear to me why Jacobs rests so much weight on the “false teacher” passage in Peter and the “doctrines of demons” passage in Paul. There are plenty of other passages that deploy quite different terminology and that insist that false doctrine or untransformed behavior keeps one out of the kingdom: Matthew 7:21–23; 11:21–24; Luke 16:19–31; Romans 1:18–3:20; Galatians 1:8–9; Revelation 13–14, to name but a few.
(c) Despite the best efforts of bad exegesis, the Bible makes it clear that treating homosexuality as if it were not a sin, but a practice in which people should feel perfectly free to engage, keeps one out of the kingdom (as we have seen: e.g., 1 Corinthians 6:9–11). There is nothing more serious than that, and the seriousness is present whether or not a particular term, such as pseudodidaskalos (“false teacher”), is used.
From time to time, expansion of the frame of reference of what has traditionally been called evangelicalism has been influenced by William J. Webb’s trajectory hermeneutic, which argues that sometimes it is not what Scripture actually says that is authoritative but rather the direction to which it points.8 His favorite example is slavery; his favorite application of that example is the role of women. This trajectory hermeneutic has been adequately discussed elsewhere;9 it would be inappropriate to rehearse the hermeneutic here. What cannot be denied, however, is that this way of reading the Bible diminishes the authority of what the Bible actually says in favor of what the interpreter judges to be the end goal of the Bible’s trajectory after the Bible has been written and circulated. One of the latest examples is the defense mounted by Pete Briscoe and his elders as the Bent Tree Bible Fellowship in Dallas embraces egalitarianism, a defense that specifically references Webb’s work.10 Further, Briscoe says he has moved the debate over egalitarianism and complementarianism into the “agree to disagree” category, which may function well enough within evangelicalism as a movement, but can only function practically at the level of the local church if one side or the other is actually being followed at the expense of the other. In any case, the “agree to disagree” argument nicely brings us to my fourth point:
4. “The Art of Imperious Ignorance”
The words are in quotation marks because they are borrowed from Mike Ovey’s column in a recent issue of Themelios.11 This is the stance that insists that all the relevant biblical passages on a stated subject are exegetically confusing and unclear, and therefore we cannot know (hence “imperious”) the mind of God on that subject. The historical example that Ovey adduces comes from a patristic-era church council whose decisions have mostly been forgotten by non-specialists. At a time of great controversy over Christology—specifically, over the deity of Christ—the Council of Sirmium (357), which sided with the pro-Arians, pronounced a prohibition against using terms like homoousios (signaling “one and the same substance”) and homoiousios (signaling “of a similar substance”). In other words, Sirmium prohibited using the technical terms espoused by both sides, on the ground that the issues are so difficult and the evidence so obscure that we cannot know the truth. Sirmium even adduced a biblical proof-text: “Who shall declare his generation?” they asked: i.e., it is all too mysterious.
Nevertheless, the orthodox fathers Hilary of Poitiers and Athanasius of Alexandria assessed the stance of Sirmium as worse than error: it was, they said, blasphemy. They decried the element of compulsion in Sirmium’s decree, and insisted that it was absurd: how is it possible to legislate the knowledge of other people? But the blasphemous element surfaces in the fact that the decree tries to put an end to the confession of true propositions (e.g., the eternal generation of the Son). Practically speaking, the claim of dogmatic ignorance, ostensibly arising from Scripture’s lack of clarity, criticizes Scripture while allowing people to adopt the positions they want.
This art of imperious ignorance is not unknown or unpracticed today. For example, both in a recent book and in an article,12 David Gushee argues that homosexual marriage should be placed among the things over which we agree to disagree, what used to be called adiaphora, indifferent things. He predicts that “conservatives” and “progressives” are heading for an unfortunate divorce over this and a handful of other issues, precisely because they cannot agree to disagree. He may be right. In all fairness, however, in addition to the question of whether one’s behavior in the domain of sexuality has eternal consequences, it must be said, gently but firmly, that the unified voice of both Scripture and tradition on homosexuality has not been on the side of the “progressives”: see especially the book by S. Donald Fortson III and Rollin G. Grams, Unchanging Witness: The Consistent Christian Teaching on Homosexuality in Scripture and Tradition.13 As Trevin Wax has pointed out, on this subject the “progressives” innovate on teaching and conduct and thus start the schism, and then accuse the “conservatives” of drawing lines and promoting schism instead of agreeing to disagree.14
A somewhat similar pattern can be found in the arguments of Jen and Brandon Hatmaker. Most of their posts are winsome and compassionate, full of admirable concern for the downtrodden and oppressed. Their recent move in support of monogamous homosexual marriage has drawn a lot of attention: after devoting time to studying the subject, they say, they have come to the conclusion that the biblical texts do not clearly forbid homosexual conduct if it is a monogamous commitment, but condemn only conduct that is promiscuous (whether heterosexual or homosexual), rape, and other grievous offenses.15 In his explanation of their move, Brandon testifies that after seeing so much pain in the homosexual community, the Hatmakers set themselves “a season of study and prayer,” and arrived at this conclusion: “Bottom line, we don’t believe a committed life-long monogamous same-sex marriage violates anything seen in scripture about God’s hopes for the marriage relationship.”16 Quite apart from the oddity of the expression “God’s hopes for the marriage relationship,” Brandon’s essay extravagantly praises ethicist David Gushee, and ends by citing John 13:34–35 (Jesus’s “new command” to his disciples to “love one another”).
Among the excellent responses, three deserve mention here.17
(a) Speaking out of her own remarkable conversion, Rosaria Butterfield counsels her readers to love their neighbors enough to speak the truth.18 “Love” that does not care enough to speak the truth and warn against judgment to come easily reduces to sentimentality.
(b) With his inimitable style, Kevin DeYoung briefly but decisively challenges what he calls “the Hatmaker hermeneutic.”19 To pick up on just one of his points:
I fail to see how the logic for monogamy and against fornication is obvious according to Hatmaker’s hermeneutic. I appreciate that they don’t want to completely jettison orthodox Christian teaching when it comes to sex and marriage. But the flimsiness of the hermeneutic cannot support the weight of the tradition. Once you’ve concluded that the creation of Adam and Eve has nothing to do with a procreative telos (Mal. 2:15), or the fittedness of male with female (Gen. 2:18), or the joining of two complementary sexes into one organic union (Gen. 2:23–24), what’s left to insist that marriage must be limited to two persons, or that the two persons must be faithful to each other? Sure, both partners may agree that they want fidelity, but there is no longer anything inherent to the ontology and the telos of marriage to insist that sexual fidelity is a must. Likewise, why is it obvious that sex outside of marriage is wrong? Perhaps those verses were only dealing with oppressive situations too. Most foundationally, once stripped of the biological orientation toward children, by what internal logic can we say that consensual sex between two adults is wrong? And on that score, by what measure can we condemn a biological brother and sister getting married if they truly love each other (and use contraceptives, just to take the possibility of genetic abnormalities out of the equation)? When marriage is redefined to include persons of the same sex, we may think we are expanding the institution to make it more inclusive, but in fact we are diminishing it to the point where it is something other than marriage.
(c) And finally, I should mention another piece by Kevin DeYoung, presented as a “Breakout” session at T4G on 13 April 2016, titled “Drawing Boundaries in an Inclusive Age: Are Some Doctrines More Fundamental Than Others and How Do We Know What They Are?” I have not yet seen that piece online, but one hopes its appearance will not be long delayed, and he has given me permission to mention it here.
I have devoted rather extended discussion to this topic because nowhere does “the art of imperious ignorance” make a stronger appeal, in our age, than in issues of sexuality. By the same token, there are few topics where contemporary believers are more strongly tempted to slip away from whole-hearted submission to Scripture’s authority in our own lives.
The rest of my points, although they deserve equal attention, I shall outline more briefly.
5. Allowing the Categories of Systematic Theology to Domesticate What Scripture Says
Most emphatically, this point neither belittles systematic theology nor attempts to sideline the discipline. When I warn against the danger of systematic theology domesticating what Scripture says, I nevertheless gladly insist that, properly deployed, systematic theology enriches, deepens, and safeguards our exegesis. The old affirmation that theology is the queen of the sciences has much to commend it. The best systematic theology not only attempts to bring together all of Scripture in faithful ways, but also serves a pedagogical function, steering exegesis away from irresponsible options that depend on mere linguistic manipulation by consciously taking into account the witness of the entire canon. Such theology-disciplined exegesis is much more likely to learn from the past than exegesis that shucks off everything except the faddish.
So there are ways in which exegesis shapes systematic theology and ways in which systematic theology shapes exegesis. That is not only as it should be; it is inevitable. Yet the authority of Scripture in our lives is properly unique. Systematic theology is corrigible; Scripture is not (although our exegesis of Scripture certainly is).
Failure to think through the implications of this truth makes it easy for us to allow the categories of systematic theology to domesticate what Scripture says. The categories we inherit or develop in our systematic theology may so constrain our thinking about what the Bible says that the Bible’s own voice is scarcely heard. Thus diminished, the authority of the Bible is insufficient to reform our systematic theology. Recently I was re-reading Exodus 7–11. After each of the first nine “plagues” with which God chastened the Egyptians, we read variations of “Pharaoh hardened his heart” and “God hardened Pharaoh’s heart” and “Pharaoh’s heart was hardened.” I could not help but remember with shame and regret some of the exegetical sins of my youth. Barely twenty years old at the time, I was invited to speak to a group of young people, and carefully explained the three stages: first, Pharaoh hardens his own heart; second, as a result, Pharaoh’s heart is hardened; and finally, God imposes his own final sanction: he judicially hardens Pharaoh’s heart. Of course, I was aware that the narrative did not display those three expressions in this convenient psychological order, but the homiletical point seemed to me, at the time, too good to pass up—it simply is the way these things develop, isn’t it? So my theology of the time, shallow as it was, domesticated the text. Only later did I learn how commonly the Bible juxtaposes human responsibility and divine sovereignty without the smallest discomfort, without allowing the slightest hint that the affirmation of the one dilutes belief in the other (e.g., Gen 50:19–20; Isa 10:5–19; Acts 4:27–28). It is the part of humility and wisdom not to allow our theological categories to domesticate what Scripture says.
6. Too Little Reading, Especially the Reading of Older Commentaries and Theological Works
The more general failure of too little reading contributes, of course, to some of the paths that tend with time to hobble the authority of Scripture, paths already articulated. The obvious one is that we do not grow out of youthful errors and reductionisms; we prove unable to self-correct; our shallow theology becomes ossified. Thus too little reading is partly to blame for my irresponsible exegesis of Exodus 7–11 (#5, above), or for downplaying the cultural importance of dreams and visions in other parts of the world (cf. #1, above). But a more focused problem frequently surfaces, one that requires separate notice. Too little reading, especially the reading of older confessional material, not infrequently leads to an infatuation with current agendas, to intoxication from over-imbibing the merely faddish.
Of course, the opposite failure is not unknown. Many of us are acquainted with ministers who read deeply from the wells of Puritan resources, but who have not tried to read much contemporary work. Their language, thought-categories, illustrations, and agendas tend to sound almost four centuries old. But that is not the problem I am addressing here, mostly because, as far as I can see, it is far less common than the failure to read older confessional materials, not least commentaries and theological works.
The problem with reading only contemporary work is that we all sound so contemporary that our talks and sermons soon descend to the level of kitsch. We talk fluently about the importance of self-identity, ecological responsibility, tolerance, becoming a follower of Jesus (but rarely becoming a Christian), how the Bible helps us in our pain and suffering, and conduct seminars on money management and divorce recovery. Not for a moment would I suggest that the Bible fails to address such topics—but the Bible is not primarily about such topics. If we integrate more reading of, say, John Chrysostom, John Calvin, and John Flavel (to pick on three Johns), we might be inclined to devote more attention in our addresses to what it means to be made in the image of God, to the dreadfulness of sin, to the nature of the gospel, to the blessed Trinity, to truth, to discipleship, to the Bible’s insistence that Christians will suffer, to learning how to die well, to the prospect of the new heaven and the new earth, to the glories of the new covenant, to the sheer beauty of Jesus Christ, to confidence in a God who is both sovereign and good, to the non-negotiability of repentance and faith, to the importance of endurance and perseverance, to the beauty of holiness and the importance of the local church. Is the Bible truly authoritative in our lives and ministries when we skirt these and other truly important themes that other generations of Christians rightly found in the Bible?
7. The Failure to be Bound by Both the Formal Principle and the Material Principle
The distinction between these two principles was well known among an earlier generation of evangelicals. The formal principle that constrains us is the authority of Scripture; the material principle that constrains us is the substance of Scripture, the gospel itself. And we need both.
That the formal principle by itself is inadequate is obvious as soon as we recall that groups such as Jehovah’s Witnesses and Mormons happily and unreservedly affirm the Bible’s truthfulness, reliability, and authority, but their understanding of what the Bible says (the material principle) is so aberrant that (we insist) they do not in reality let the Bible’s authoritative message transform their thinking. On the other hand, today it is not uncommon to find Christians saying that they refuse to talk about biblical authority or biblical inerrancy or the like, but simply get on with preaching what the Bible says. History shows that such groups tend rather quickly to drift away from what the Bible says.
In other words, to be bound by only one of these two principles tends toward a drifting away from hearty submission to the Bible’s authority. If one begins with adherence to the Bible’s authority, thus nominally espousing the formal principle, and adds penetrating understanding of and submission to what the Bible actually says, the result is much stronger, much more stable. Conversely, if one begins with an honest effort to grasp and teach what the Bible says, thus nominally espousing the material principle, and adds resolute adherence to the formal principle, one is much more likely to keep doing the honest exegesis that will enrich, revitalize, and correct what one thinks the Bible is saying.
8. Undisciplined Passion for the Merely Technical, or Unhealthy Suspicion of the Technical
By the “technical” I am referring to biblical study that deploys the full panoply of literary tools that begin with the original languages and pay attention to syntax, literary genre, text criticism and literary criticism, parallel sources, interaction with recent scholarship, and much more. An exclusive focus on technical study of the Bible may, surprisingly, dilute the “listening” element: manipulation of the tools and interaction with the scholars of the guild are more important than trembling before the Word of God. Conversely, some so disdain careful and informed study that they seduce themselves into thinking that pious reading absolves one from careful and accurate exegesis. In both cases, the Bible’s real authority is diminished.
A variation of this concern surfaces when students arrive at the seminary and begin their course of study. Often they enter with boundless love for the Bible and a hunger to read it and think about it. Soon, however, they are enmeshed in memorizing Greek paradigms and struggling to work their way through short passages of the Greek New Testament. What they are doing now, they feel, is not so much reading the Word of God as homework, and it is hard. Instead of simply reading the Bible and being blessed, they are required to make decisions as to whether a verb should be taken this way or that, and whether an inherited interpretation really can withstand accurate exegesis. Confused and not a little discouraged by the demand to memorize defective Hebrew verbs, they talk to sympathetic professors and ask what is wrong, and what they can do about the coldness they feel stealing over their hearts.
It is not helpful to tell such students that, on the one hand, they simply need to get on with the discipline of study, and, on the other, they need to preserve time for devotional reading of the Bible. That bifurcation of tasks suggests there is no need to be devotional when using technical tools, and no need for rigor when reading the Bible devotionally. Far better to insist that even when they are wrestling with difficult verbal forms and challenging syntax, what they are working on is the Word of God—and it is always imperative to cherish that fact, and treat the biblical text with reverence. And similarly, if when reading the Bible for private edification and without reference to any assignment, one stumbles across a passage one really does not understand, one is not sinning against God if one pulls a commentary or two off the shelf and tries to obtain some technical help.
In short, one should not be seduced by merely technical disciplines, nor should one eschew them. In every case, the Bible remains the authoritative Word of God regardless of the “tools” one deploys to understand it better, and it functions authoritatively in our lives when we manage a better integration of technical study and devotional reading.
9. Undisciplined Confidence in Contemporary Philosophical Agendas
Many examples could be provided. For example, some of the choices offered by analytic philosophy wrongfully exclude structures of thought the Bible maintains.20 Or again, the most recent book by Charles Taylor,21 written in the heritage of some forms of deconstruction and, like all his work, inevitably stimulating, argues that language is in some measure cut off from reality: it is not so much something that designates as the medium in which we exist. There is no fixed “meaning” to texts (which is very hard to square with the conviction expressed in Jude 3). One form of this approach to texts, often dubbed American Pragmatism,22 thinks of readers as “users” of the text. A “good” reading, for example, is one that meets specific needs in the reader or the community. There is much to be said in favor of this stance, but it becomes self-defeating when it says, in effect, that a “good” reading meets particular needs on the part of the reader or community and must not be thought of as conveying timeless truth. Occasionally entire commentaries are today written out of this philosophical commitment. Yet as many have pointed out, the stance is self-defeating: American Pragmatism defends itself with an ostensibly timeless truth about the virtues of American Pragmatism. Pretty soon the commentaries that work out of this tradition do not so much help us think about God and his character and work as about what we think we need and how the biblical texts meet those needs. The door is opened to interpretations that are exploitative, merely current, sometimes cutesy, and invariably agenda-driven, but only accidentally grounded in responsible exegesis.
Not for a moment should we imagine that this is the first generation to make such mistakes. Every generation in this sin-addled world experiments with a variety of philosophical stances that can easily (sometimes unwittingly) be used to subvert what Scripture says—and thus the authority of Scripture is again domesticated. Students of history have learned to appreciate the contributions of, say, Aristotelianism, Platonism, Gnosticism, Thomism, Cartesianism, Rationalism—but also to allow Scripture’s voice to stand over them. It is more challenging to avoid the pitfalls lurking in the “isms” that are current.
10. Anything That Reduces Our Trembling before the Word of God
“These are the ones I look on with favor: those who are humble and contrite in spirit, and who tremble at my word” (Isa 66:2). “‘All people are like grass, and all their glory is like the flowers of the field; the grass withers and the flowers fall, but the word of the Lord endures forever.’ And this is the word that was preached to you” (1 Pet 1:24–25; cf. Isa 40:6–8).
The things that may sap our ability to tremble before God’s word are many. Common to all of them is arrogance, arrogance that blinds us to our need to keep reading and re-reading and meditating upon the Bible if we are truly to think God’s thoughts after him, for otherwise the endless hours of data input from the world around us swamp our minds, hearts, and imaginations. Moral decay will drive us away from the Bible: it is hard to imagine those who are awash in porn, or those who are nurturing sexual affairs, or those who are feeding bitter rivalry, spending much time reading the Bible, much less trembling before it. Moreover, our uncharitable conduct may undermine the practical authority of the Bible in the lives of those who observe us. Failure to press through in our studies until we have happily resolved some of the intellectual doubts that sometimes afflict us will also reduce the fear of the Lord in us, a subset of which, of course, is trembling before his Word.
11. Concluding Reflections
So that concludes our list of subtle ways to abandon the authority of Scripture in our lives. I’m sure these ten points could be grouped in other ways, and other points could usefully be added.
But I would be making a serious mistake if I did not draw attention to the fact that this list of warnings and dangers, an essentially negative list, implicitly invites us to a list of positive correlatives. For example, the first instance of subtle ways to trim the authority of Scripture was “an appeal to selective evidence”—which implicitly calls us to be as comprehensive as possible when we draw our theological and pastoral conclusions about what the Bible is saying on this or that point. If “heart embarrassment” before this or that text (the second example) reduces the authority of Scripture in my life, a hearty resolve to align my empathies and will with the lines of Scripture until I see more clearly how God looks at things from his sovereign and omniscient angle will mean I offer fewer apologies for the Bible, while spending more time making its stances coherent to a generation that finds the Bible profoundly foreign to contemporary axioms. It would be a godly exercise to work through all ten of the points so as to make clear what the positive correlative is in each case.
dareread · 8 years ago
Link
The essence of capitalism is not strictly capital. In the modern sense, the word capital has taken on other meanings, often where money is given as a substitute for it. When speaking about things like “hot money”, for instance, you wouldn’t normally correct someone referencing it in terms of “capital flows.” Someone that “commits capital” to a project is missing some words, for in the proper sense they are “committing funds to capital.”
Just as the focus has been removed from actual capital (a distortion of capitalism), one of the effects has been to devalue the other component that actually makes it all work. Rising living standards were never the fruit of capital alone; the real strength was in the combination of capital with labor. Over the last few decades, the real capital flow, funded by eurodollar finance, has been toward the offshoring of productive capacity.
By simple mathematics, businesses are no longer willing to afford labor. Before getting to that math, however, we need to be mindful that the “experts” are almost uniformly suggesting the opposite is true. Instead, we hear constantly of a labor shortage, often a serious one, whether due to Baby Boomers retiring, lazy Americans addicted to heroin, or the politics of immigration. The problem with all of these explanations is wages: if there were a shortage, wages would be rising, and rising rapidly.
The New York Times on Sunday published yet another of this type of account (they are becoming more frequent), blatantly headlining the piece, Lack of Workers, Not Work, Weighs on the Nation’s Economy. Focusing on anecdotes from Utah, you get all the familiar but unbacked tropes about the travails of employers who have things to do but can’t do them because they can’t find anyone.
To Todd Bingham, the president of the Utah Manufacturers Association, “3.1 percent unemployment is fabulous unless you’re looking to hire people.”
“Our companies are saying, ‘We could grow faster, we could produce more product, if we had the workers,’” he said. “Is it holding the economy back? I think it definitely is.”
Apologies to Mr. Bingham, but that’s just stupid. What we find out instead, buried within the article, is a separate story that actually contradicts the whole thing for the umpteenth time.
Companies in Utah, as in the rest of the country, were slow to raise wages in recent years. At first there were plenty of available workers. But by the end of 2015, a report by Utah’s Department of Workforce Services concluded that inadequate wages had become a key reason companies were struggling to find employees.
“It was as if employers hadn’t adjusted their approach to the labor market” as the economy recovered, said Carrie Mayne, the department’s chief economist.
Maybe for employers the economy hasn’t actually recovered. What the Times article contends instead is that somehow, starting in 2016 of all times, this has maybe changed. There are “signs” of wage acceleration out there, because why wouldn’t there be if the unemployment rate nationally has fallen to 4.5%? The mainstream is always seeing signs of wage growth, a condition people long ago learned to ignore because you can’t claim to be seeing “signs” of wage acceleration for now a fourth year running. Those aren’t signs, then, but the delusions of clear bias. Instead, it is far more plausible that employers don’t really have any ability to adjust wages.
It’s as if the people who write these articles and even the economists who stand them up have no conception of revenue and expenses. Anyone will take all the cheapest possible labor they can get, but it takes an actually growing economy with widespread, plausibly sustainable gains to give employers an impetus to actually pay market-clearing rates. That is the simple, small “e” economics left off the pageviews. “We could produce more product” has never in the past stopped businesses from paying up for labor, but it does now because there is no actual economic growth.
GDP is positive, to be sure, but that is linear thinking that leads to all these misconceptions. The newspapers all say the economy is growing, and yet all these contradictions exist (including social and political unrest) as if it was not. Growth is not merely the positive sign in front of some statistic, even GDP. As stated above, the math is incredibly simple:
Businesses have been maintaining, nationally, at best, steady cash flow by not paying more for wages or workers; they simply cannot. When in 2008 they initially responded to the Great “Recession” by laying off Americans at the most devastating pace since the 1930s, they did so to retain as much cash flow as possible. That part was true of every recession in economic history. Where it all went off course was in what followed, which by any objective standard was not a recovery.
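The chart that originally accompanied this passage has not survived, so here is a minimal sketch of the cash-flow arithmetic the author calls simple, written in Python with entirely made-up figures; none of the numbers or names below come from the article itself.

# A minimal sketch, on invented assumptions: cash flow is what remains of
# revenue after the wage bill and all other costs. With revenue flat (no
# real growth, on the author's telling), every extra dollar of wages
# comes straight out of cash flow.

def cash_flow(revenue, wage_bill, other_costs):
    return revenue - wage_bill - other_costs

revenue = 1_000_000        # flat revenue, period after period
other_costs = 550_000

flat   = cash_flow(revenue, wage_bill=400_000, other_costs=other_costs)
raised = cash_flow(revenue, wage_bill=430_000, other_costs=other_costs)  # a 7.5% raise

print(flat)    # 50000
print(raised)  # 20000 -- the 7.5% wage raise cut cash flow by 60%

On these invented numbers, even a modest raise in the wage bill devastates cash flow unless revenue actually grows, which is one way to make concrete the claim that firms facing no real growth will not pay market-clearing wages.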
Opportunity is the lifeblood of recovery, and following even the most devastating recession there is usually widespread opportunity. The two, in fact, go hand in hand, the economic equivalent of the old market adage to buy when there is blood in the streets. What really happened in 2009 and after?
It may seem like a chicken-and-egg problem, but it really isn’t. By that I mean: do businesses hire more workers so that those workers can buy more things (including services), or must consumers, who are workers, buy more things first so that businesses hire more workers who are consumers? What if neither does either? That’s what this “recovery” has been, a grave reluctance on every side, because at its most basic level the Great “Recession” was not a recession. Economists may finally know it now after ten years, but on an individual level that is how both businesses and consumers had been behaving all along. Simply calling it a recovery, as every official has done, does not make it so.
Thus, the lack of wage growth is simple mathematics, the economic equivalent of low interest rates. It is the “tight money” of the real economy, where lack of wage growth, tremendous labor slack, and this unassailable business reluctance are all varying degrees of liquidity preferences. Why bump up wages to a market-clearing level when economic growth, meaning opportunity, is so conspicuously missing, no matter how hard (particularly before 2016) the mainstream emphasizes that it isn’t?
There is, of course, more nuance to this aggregate picture, broken down across several cross sections like business size (there was something like a recovery for large businesses, but absolutely nothing like one for small firms). In whatever case, the idea that “things are better” requires both gradation as well as qualification. Where you could in the past just say things are better, you cannot today make that statement. Better than what, 2009? That’s a meaningless comparison now, just as it would have been in 2011. 2007? Even that is debatable when after ten years the economy today should be nothing like the one then, and yet in many ways it has still to catch back up (think about what it was like in 1997 as compared to 1987).
In the end, all this confusion exists because the wrong measurements are employed starting with the wrong linear perspective. GDP was never meant to quantify shrinking; in fact, nothing is, which is why we have such difficulty measuring just how badly the economy has performed during a decade otherwise clearly lost. Some people see the 4.5% unemployment rate and expect it to mean 4.5% unemployment. And yet, there are only signs that wages maybe at some point might want to possibly accelerate. Such convolution just isn’t necessary when simple math will do.
dareread · 8 years ago
Link
If you ever wondered what life was really like in a post-collapse society, look no further than Venezuela. Today, I’d like to share a first-hand report of everyday life there.
The country has been on the way down since a socialist government destroyed the economy. Here’s a quick timeline:
Private ownership of guns was banned in 2012. Then things began to go downhill in a hurry.
In 2013, preppers were relabeled “hoarders” and the act of stocking up became illegal.
In 2014, the government instituted a fingerprint registry for those who wished to buy food to ensure they didn’t take more than their “share.”
In 2015, things began to devolve more quickly as electricity began to be rationed and farmers were forced to turn over their harvests to the government.
2016 brought the announcement that folks were on their own – there was simply not enough food. As well, despite the rationing, an electricity shortage was announced.
2016 also brought the news that the country was out of everything: food, medicine, and nearly all basic necessities. People were dying of starvation, and malnourishment made other illnesses even worse. Hyperinflation brought exorbitant prices, like $150 for a dozen eggs.
Now, civil war is near (if not already happening). They’re calling it “protests,” but violence between the people and the government is ongoing. This rage is stoked by wealthy Venezuelans who enjoy luxurious meals, fabulous parties, and lush accommodations while the rest of the country struggles to find a bag of rice they can afford. Let them eat cake?
It appears there is no end in sight to the tribulations of the Venezuelans.
So, what is day-to-day life like for the average Venezuelan?
A reader from Venezuela took the time to comment and tell us what life is really like there. You can find her story below. (I’ve edited for spacing to make her comments easier to read, but please keep in mind that English is a second language.)
Daisy, thank you so much for this content.
I’m a venezuelan mom of a 1 year old baby. And we are living a war here.
You can’t go outside to buy food or supplies or medication because each activity is a high risk and more with a baby. So I stay home as much as I can. There are a lot people outside trying to live normaly, trying to go to work and buy foods and continue there lives. But when you are working or whatever thing your doing you dont know is you will be able to come back home safe… people continues to work to get whatever miserable pay to buy some food.
Everything is so expensive. Perhaps the beans and rice are affordable but still not cheap and is so hard to find food. options are limited because of the price… you can only buy one item or two of pasta, rice, like I say the less expensive food, and you have to wait in long long lines at your own risk because there are a lot of fights in this store. imagine tones of people wanting to buy the same product. this have being worst since perhaps about 5 years now… because of scarcity.
Malls and big stores are basicly alone because there are places where “colectivos” use to attack, with bombs and there is a group of about 40 men in motorcicles that have been creating chaos in the whole city, every day the take the city they have plenty of arms and the just go through the city shooting building houses, stealing stores, people on the streets, batteries of cars, everything the want… this situation is far worst that we ever imagine. they kill people every day and they are pay mercenaries from the goverment because no one does nothing.
the goverment people is also killing inocents. kids… teenagers, the youth. we are panicking. We bough a land 3 years ago. I got pregnant and we decided to wait. the land is a safe zone but is 5 hours from here. is a very small town 11,000 people. I live in a city with about 3 million people. at least for now the town are peaceful but there isnt electricity and the isnt an asfalt road so this doesnt sound lovely for thief that are now looking for biggest fish to cash. according to our neighbours the town is in calm nothing has happen. We need to go as soon as possible. I am scared because what you say is also true. But in the city there are no options at least not now. what do you think we should do? I realize appreciate what you recommend.
I’m sorry for my bad english I’m trying to write this while playing with my baby.
Venezuelan houses are already bunkers. This has being like this ever since I can remember. Perhaps 20 years… Every regular house is made like bunkers. Pure concrete from the botton to the walls to everything. Every house has also 2 to 3 security doors really big and heavy ones and on top of that we have fences and electric fences on top Of BIG GATES.
And trust me is not enough to be safe.
The army has damaged gates and has entry to different houses looking for students, or rebelds… and also innocents people has died because they were sadly in the middle of this events. I don’t doubt that country land might have problems too. But so far cities are pure anarky and maddness.
There are several groups creating chaos… the army, the mercenaries, the thiefs, and the rebels that want to kill chavismo and politics and whatever on their way. A few days they put a bomb into a propane gas distribution cargo and it blew away and this have being affecting every single thing, they steal cargo transporting food or gas and even fuel… so there is no much to do now.
Communities in the city are not organize since they really need to find food and basic resources so each indivual is waiting in long lines to buy a bread or a medicine or whatever they need… and people doesnt want to organize they dont see this could go like this for years… I assume that we got used to live in some sort of chaos and violence.
But the true is most people is praying and just wishing this will pass soon. that this will pass as the moment a new presindent arrive.
And what if it doesnt? Let’s be clear that we have so many resources, and Canadá and USA are pulling all of the gold and all minerals now so no country really matters what we are suffering as long as they can get they way.
We really feel hopeless.
Dear Ale,
Thank you for sharing your story. Here’s my advice:
If you can safely get out of the city, the time to do it is now. When I said I was moving from the country into town, the move was not to a major city with millions of people, but a much smaller one. I went from living a mile from my neighbors to a suburban neighborhood where we chat over the fence and share fresh vegetables and barbecues.
In a small town like you describe, you will have neighbors, hopefully some unity, and be able to be more self-reliant without as much risk as the place where you live now. Being on your own with a small child in a situation like this is hard and dangerous. Try to make friends so that you have some support. If you have extended family, consider that relocating to them might be another option.
The journey will be difficult, but I sincerely hope that you will be able to get there with your baby and find some peace and safety. Please know that you and your child will be in many hearts and prayers after this.
Keep us posted if you can.
Love,
Daisy
dareread · 8 years ago
Link
Skin in the Game is necessary to reduce the effects of the following divergences that arose mainly as a side effect of civilization: action and cheap talk (tawk), consequence and intention, practice and theory, honor and reputation, expertise and pseudoexpertise, concrete and abstract, ethical and legal, genuine and cosmetic, entrepreneur and bureaucrat, entrepreneur and chief executive, strength and display, love and gold-digging, Coventry and Brussels, Omaha and Washington, D.C., economists and human beings, authors and editors, scholarship and academia, democracy and governance, science and scientism, politics and politicians, love and money, the spirit and the letter, Cato the Elder and Barack Obama, quality and advertising, commitment and signaling, and, centrally, collective and individual.
But, to this author, it is mostly about justice, honor, and sacrifice as something existential for humans.
Let us first connect a few of the dots among the items in the list above.
Antaeus Whacked
Antaeus was a giant, or rather a semi-giant of sorts, the literal son of Mother Earth, Gaea, and Poseidon the god of the sea. He had a strange occupation, which consisted of forcing passersby in his country, (Greek) Libya, to wrestle; his trick was to pin his victims to the ground and crush them. This macabre hobby was apparently the expression of filial devotion; Antaeus aimed at building a temple for his father Poseidon, using for material the skulls of his victims.
Antaeus was deemed to be invincible; but there was a trick. He derived his strength from contact with his mother, earth. Physically separated from contact with earth, he lost all his powers. Hercules, as part of his twelve labors (actually in one, not all variations), had for homework to whack Antaeus. He managed to lift him off the ground and terminated him by crushing him as his feet remained out of contact with his mamma.
What we retain from this first vignette is that, like Antaeus, you cannot separate knowledge from contact with the ground. Actually, you cannot separate anything from contact with the ground. And the contact with the real world is done via skin in the game –have an exposure to the real world, and pay a price for its consequences, good or bad. The abrasions of your skin guide your learning and discovery, a mechanism of organic signaling, what the Greeks called pathemata mathemata (guide your learning through pain, something mothers of young children know rather well). Most things that we believe were “invented” by universities were actually discovered by tinkering and later legitimized by some type of formalization. I have shown in Antifragile how the knowledge we get by tinkering, via trial and error, experience, and the workings of time, in other words, contact with the earth, is vastly superior to that obtained through reasoning, something universities have been very busy hiding from us.
Libya After Antaeus
Second vignette. As I am writing these lines, a few thousand years later, Libya, the putative land of Antaeus, now has a slave market, the result of a failed attempt at what is called “regime change” in order to “remove a dictator”.
A collection of people classified as interventionists (to name names: Bill Kristol, Thomas Friedman, and others) who promoted the Iraq invasion of 2003, as well as the removal of the Libyan leader, are advocating the imposition of additional such regime change on another batch of countries, including Syria, because “it has a dictator”.
These interventionistas and their friends in the U.S. State Department helped create, train, and support Islamist rebels, then “moderates,” who eventually evolved to become part of Al-Qaeda, the same Al-Qaeda that blew up the New York City towers during the events of September 11, 2001. They mysteriously failed to remember that Al-Qaeda itself was composed of “moderate rebels” created (or reared) by the U.S. to help fight Soviet Russia because, as we will see, these educated people’s reasoning doesn’t entail such recursions.
So we tried that thing called regime change in Iraq, and failed miserably. We tried it in Libya, and there are now active slave markets in the place. But we satisfied the objective of “removing a dictator.” By the exact same reasoning, a doctor would inject a patient with “moderate” cancer cells “to improve his cholesterol numbers,” and claim victory after the patient is dead, particularly if the post-mortem shows remarkable cholesterol readings. But we know that doctors don’t do that, or don’t do it in such a crude format, and that there is a clear reason for it. Doctors usually have some skin in the game.
And don’t give up on logic, intellect, and education, because tight, higher-order logical reasoning would show that the logic of advocating regime changes implies also advocating slavery. So these interventionistas not only lack practical sense, and never learn from history, but they even make mistakes at the pure reasoning level, which they drown in some form of semi-abstract discourse.
Their three flaws: 1) They think in statics not dynamics, 2) they think in low, not high dimensions, 3) they think in actions, never interactions.
The first flaw is that they are incapable of thinking in second steps and unaware of the need for them –and about every peasant in Mongolia, every waiter in Madrid, and every car service operator in San Francisco knows that real life happens to have second, third, fourth, nth steps. The second flaw is that they are also incapable of distinguishing between multidimensional problems and their single-dimensional representations –like multidimensional health and its stripped, cholesterol-reading reduced representation. They can’t get the idea that, empirically, complex systems do not have obvious one-dimensional cause-and-effect mechanisms, and that under opacity, you do not mess with such a system. An extension of this defect: they compare the actions of the “dictator” to those of the prime minister of Norway or Sweden, not to those of the local alternative. The third flaw is that they can’t forecast the evolution of those one helps by attacking.
And when a blow up happens, they invoke uncertainty, something called a Black Swan, after some book by a (very) stubborn fellow, not realizing that one should not mess with a system if the results are fraught with uncertainty, or, more generally, avoid engaging in an action if you have no idea of the outcomes. Imagine people with similar mental handicaps, who don’t understand asymmetry, piloting planes. Incompetent pilots, those who cannot learn from experience, or don’t mind taking risks they don’t understand, may kill many, but they will themselves end up at the bottom of, say, the Atlantic, and cease to represent a threat to others and mankind.
So we end up populating what we call the intelligentsia with people who are delusional, literally mentally deranged, simply because they never have to pay for the consequences of their actions, repeating modernist slogans stripped of all depth. In general, when you hear someone invoking abstract modernistic notions, you can assume that they got some education (but not enough, or in the wrong discipline) and too little accountability.
Now some innocent people, Yazidis, Christian minorities, Syrians, Iraqis, and Libyans, had to pay a price for the mistakes of these interventionistas currently sitting in their comfortable air-conditioned offices. This, we will see, violates the very notion of justice from its pre-biblical, Babylonian inception, as well as the ethical structure of humanity.
Not only is the principle of healers first do no harm (primum non nocere), but, we will argue: those who don’t take risks should never be involved in making decisions.
This idea is woven into history: all warlords and warmongers were warriors themselves and, with few exceptions, societies were run by risk takers, not risk transferors. They took risks –more risks than ordinary citizens. Julian the Apostate, the hero of many, died on the battlefield fighting in the never-ending war on the Persian frontier. One of his predecessors, Valerian, after he was captured, was said to have been used as a human footstool by the Persian Shahpur when mounting his horse. Fewer than a third of Roman emperors died in their beds –and one can argue that, had they lived longer, they would have fallen prey to either a coup or a battlefield.
And, one may ask, what can we do, since a centralized system will necessarily need people who are not directly exposed to the cost of errors? Well, we have no choice but to decentralize, to have fewer of these. But not to worry: if we don’t do it, it will be done by itself, the hard way. A system that doesn’t have a mechanism of skin in the game will eventually blow up and fix itself that way. We will see numerous such examples.
For instance, bank blowups came in 2008 because of the hidden risks in the system: bankers could make steady bonuses from a certain class of concealed explosive risks, use academic risk models that don’t work (because academics know practically nothing about risk), then invoke uncertainty after a blowup, some unseen and unforecastable Black Swan, and keep past bonuses, what I have called the Bob Rubin trade. Robert Rubin collected one hundred million dollars in bonuses from Citibank, but when the latter was rescued by the taxpayer, he didn’t write any check. The good news is that, in spite of the efforts of a complicit Obama administration that wanted to protect the game and the rent-seeking of bankers, the risk-taking business moved away to hedge funds. The move took place because of the overbureaucratization of the system. In the hedge fund space, owners have at least half of their net worth in the funds, making them more exposed than any of their customers, and they personally go down with the ship.
The interventionistas case is central to our story because it shows how the absence of skin in the game has both ethical and epistemological effects (i.e., effects related to knowledge). Interventionistas don’t learn because they are not the victims of their mistakes, and, as we saw with pathemata mathemata:
The same mechanism of transferring risk also impedes learning.
dareread · 8 years ago
Link
A twentieth-century repetition of the mistakes of ancient Rome would be inexcusable. Rome was eight and a half centuries old when the poet, Juvenal, penned his famous tirade against his degenerate countrymen. About 100 A.D. he wrote: “Now that no one buys our votes, the public has long since cast off its cares; the people that once bestowed commands, consulships, legions and all else, now meddles no more and longs eagerly for just two things, bread and circuses.” (Carcopino, Daily Life in Roman Times [New Haven: Yale University Press, 1940], p. 202.) Forty years later, the Roman historian, Fronto, echoed the charge in more prosaic language: “The Roman people is absorbed by two things above all others, its food supplies and its shows.” (Ibid.)
Here was a once-proud people, whose government had been their servant, who had finally succumbed to the blandishments of clever political adventurers. They had gradually relinquished their sovereignty to government administrators to whom they had granted absolute powers, in return for food and entertainment. And the surprising thing about this insidious progression is that, at the time, few realized that they were witnessing the slow destruction of a people by a corruption that would eventually transmute a nation of self-reliant, courageous, sovereign individuals into a mob, dependent upon their government for the means of sustaining life.
There are no precise records that describe the feelings of those for whom the poet, Juvenal, felt such scorn. But using the clues we have, and judging by our own experience, we can make a good guess as to what the prevailing sentiments of the Roman populace were. If we were able to take a poll of public opinion of first and second century Rome, the overwhelming response would probably have been—“We never had it so good.” Those who lived on “public assistance” and in subsidized rent-free or low-rent dwellings would certainly have assured us that now, at last, they had “security.” Those in the rapidly expanding bureaucracy—one of the most efficient civil services the world has ever seen—would have told us that now government had a “conscience” and was using its vast resources to guarantee the “welfare” of all of its citizens; that the civil service gave them job security and retirement benefits; and that the best job was a government job! Progressive members of the business community would have said that business had never been so good, that the government was their largest customer, which assured them a dependable market, and that the government was inflating currency at about 2 per cent a year, which instilled confidence and gave everyone a sense of well-being and prosperity.
And no doubt the farmers were well pleased too. They supplied the grain, the pork and the olive oil, at or above parity prices, for the government’s doles.
The government had a continuous program of large-scale public works which were said to stimulate the economy, provide jobs and promote the general welfare, and which appealed to the national pride.
The high tax rates required by the subsidies discouraged the entrepreneur with risk capital, which, in turn, favored the well-established, complacently prosperous businessman. It appears that there was no serious objection to this by any of the groups affected. An economic historian, writing of business conditions at this period, says, “The chief object of economic activity was to assure the individual, or his family, a placid and inactive life on a safe, if moderate, income . . . . There were no technical improvements in industry after the early part of the second century.” There was no incentive to venture. Inventions began to dry up because no one could reasonably expect to make a profit from them.
Rome was sacked by Alaric and his Goths in 410 A.D. But long before the barbarian invasions, Rome was a hollow shell of the once noble Republic. Its real grandeur was gone and its people were demoralized. Most of the old forms and institutions remained. But a people whose horizons were limited by bread and circuses had destroyed the spirit while paying lip service to the letter of their once hallowed traditions.
The fall of Rome affords a pertinent illustration of the observation by the late President Lowell of Harvard University that “no society is ever murdered—it commits suicide.”
I do not imply that bread and circuses are evil things in themselves. Man needs material sustenance and he needs recreation. These needs are so basic that they come within the purview of every religion. In every religion there is a harvest festival of thanksgiving for good crops. And as for recreation, we need only recall that our word “holiday” was originally “holy day,” a day of religious observance. In fact, the circuses and games of old Rome were religious in origin. The evil was not in bread and circuses, per se, but in the willingness of the people to sell their rights as free men for full bellies and the excitement of the games which would serve to distract them from the other human hungers which bread and circuses can never appease. The moral decay of the people was not caused by the doles and the games. These merely provided a measure of their degradation. Things that were originally good had become perverted and, as Shakespeare reminds us, “Lilies that fester smell far worse than weeds.”
More than fifty years ago, the great historian of Rome, Theodore Mommsen, came to our country on a visit. At a reception in his honor, someone asked him, “Mr. Mommsen, what do you think of our country?” The great scholar replied, “With two thousand years of European experience before your eyes, you have repeated every one of Europe’s mistakes. I have no further interest in you.”
One wonders what Mommsen would say today in the light of the increasingly rapid destruction of our traditional values during the past 25 years.
Many of our people have been converted to the idea that liberty has been tried and found wanting, just as many believe that Christianity has been tried and found wanting. They do not know that what has been found wanting is not the true values of liberty and religion but only perversions, worthless counterfeits. So when we urge upon them those true values, they shy away. They have been fooled before, so they want to try something which they think is “new.”
How far have we departed from our traditional values? There is no mystery here. It is well known that the basic policies of the two major political parties with respect to the intrusion of the State into the economic and social lives of the people differ only in degree and method. There is no discernible difference in fundamental principle. Prominent political figures of both parties pay lip service to the letter of our Declaration of Independence and Constitution, while they violate the spirit.
The proponents of an all-powerful centralized government have erected a bureaucratic colossus which imposes upon our people controls, regimentation, punitive taxation and subsidies to pressure groups, thus paralleling the “organized mendicancy, subvention, bureaucracy and centralization” which played so great a part in the downfall of Rome!
We are demoralized by an indecent competition. Each one denounces government handouts and privileges for the other fellow—but maintains that his special privilege is for the “general welfare.” The slogan of many of us seems to be, “Beat the other fellow to the draw”—i.e., “draw out of the public treasury more than you put in, before someone else gets it.”
I am no prophet of inevitable doom. On the contrary, I am sounding an alarm that disaster lies ahead unless present danger signals are heeded.
What specific steps should we take? I believe that neither I nor anyone else, no matter how exalted his position, can determine for 165 million people their day-to-day economic and social decisions concerning such matters as wages, prices, production, associations and others. So I propose that these decisions, and the problems connected therewith, be returned to the people themselves. This could be done in four steps, as follows:
First—Let us stop this headlong rush toward collectivism. Let there be no more special privileges for employers, employees, farmers, businessmen or any other groups. This is the easiest step of all. We need only refrain from passing more socialistic laws.
Second—Let us undertake at once an orderly demobilization of many of the existing powers of government by the progressive repeal of those socialistic laws which we already have. This will be a very difficult step because every pressure group in the nation will fight to retain its subsidies, monopoly privileges and protection. But if freedom is to live, all special privileges must go.
Third—Of the powers that remain in government, let us return as many as possible to the states. For on the local level, the people will be able to apply more critical scrutiny to the acts of their government agents.
Fourth—Above all, let us resolve that never again will we yield to the seduction of the government panderer who comes among us offering “bread and circuses,” paid for with our own money, in return for our sovereign rights!
Admiral Ben Moreell (1892 – 1978) was the chief of the U.S. Navy’s Bureau of Yards and Docks and of the Civil Engineer Corps. Best known to the American public as the Father of the Navy’s Seabees, Moreell lived through eight decades, two world wars, the Great Depression, and the evolution of the United States into a superpower. He was a distinguished naval officer, a brilliant engineer, an industrial giant, and an articulate national spokesman.