Niall Shanks et al., Are animal models predictive for humans?, 4 Philos Ethics Humanit Med (2009)
Abstract
It is one of the central aims of the philosophy of science to elucidate the meanings of scientific terms and also to think critically about their application. The focus of this essay is the scientific term predict and whether there is credible evidence that animal models, especially in toxicology and pathophysiology, can be used to predict human outcomes. Whether animals can be used to predict human response to drugs and other chemicals is apparently a contentious issue. However, when one empirically analyzes animal models using scientific tools, they fall far short of being able to predict human responses. This is not surprising considering what we have learned from fields such as evolutionary and developmental biology, gene regulation and expression, epigenetics, complexity theory, and comparative genomics.
Review
"When I use a word,' Humpty Dumpty said in rather a scornful tone, 'it means just what I choose it to mean – neither more nor less." "The question is," said Alice, "whether you can make words mean so many different things." Lewis Carroll in Through the Looking Glass 1871.
There is a serious scientific controversy concerning the predictive power of animal models. In this article we will use the phrase animal model (or the word animal in this context) to mean the use of a nonhuman animal, usually a mammal or other vertebrate, to predict human response to drugs and disease. We enthusiastically acknowledge that animals can be successfully used in many areas of science, such as in basic research, as a source of replacement parts for humans, as bioreactors, and so forth. These uses are not included in our definition or critique, as there is no claim made for their predictive power. This article focuses solely on using animals/animal models to predict human responses in light of what the word predict means in science.
Philosophy of science
The meaning of words is very important in all areas of study, but especially in science.
Philosophers of science, including Quine, Hempel, and others, have argued that words must have precise meanings in science and that, in fact, these meanings help separate science from pseudoscience. Take for example the word prediction. A research method need not be predictive to be used, but if one claims predictive ability for a test or project, then one means something very specific.
This paper addresses the use of the word predict as applied to animal models. It is our position that the meaning of the word has been corrupted, and hence the concept behind the word, along with everything the concept implies, is in danger as well. Predict is not so much a word as a concept and is closely related to hypothesis. A hypothesis can be defined as a proposed explanation for a phenomenon, either observed or thought, that needs to be tested for validity. According to Sarewitz and Pielke:
In modern society, prediction serves two important goals. First, prediction is a test of scientific understanding, and as such has come to occupy a position of authority and legitimacy. Scientific hypotheses are tested by comparing what is expected to occur with what actually occurs. When expectations coincide with events, it lends support to the power of scientific understanding to explain how things work. " [Being] predictive of unknown facts is essential to the process of empirical testing of hypotheses, the most distinctive feature of the scientific enterprise," observes biologist Francisco Ayala (Ayala, F. 1996. The candle and the darkness Science 273:442.)[1] (Emphasis added.)
In the case of animal models what actually occurs is what happens in humans. If the purpose of the test, be it a test on an animal or in silico, is to predict human response then the tests must be evaluated by how well it conforms to human response. We again acknowledge that not all tests and studies involving animals are done with prediction in mind. Nevertheless, those tests promoted as being predictive must be judged by how well they predict human response.
Sarewitz and Pielke continue:
Second, prediction is also a potential guide for decision making. We may seek to know the future in the belief that such knowledge will stimulate and enable beneficial action in the present [1].
We will return to decision making.
The philosopher W.V.O. Quine has remarked:
A prediction may turn out true or false, but either way it is diction: it has to be spoken or, to stretch a point, written. Etymology and the dictionary agree on this point. The nearest synonyms "foresight," "foreknowledge," and "precognition" are free of that limitation, but subject to others. Foreknowledge has to be true, indeed infallible. Foresight is limited to the visual when taken etymologically, and is vague otherwise. "Precognition" connotes clairvoyance ... Prediction is rooted in a general tendency among higher vertebrates to expect that similar experiences will have sequels similar to each other [2, p. 159] ... (Emphasis added.)
Predictions, generated from hypotheses, are not always correct. But if a modality or test or method is said to be predictive then it should get the right answer a very high percentage of the time in the biological sciences and all the time in the physical sciences. (We will discuss this as it applies to the biological sciences momentarily.)
If a modality consistently fails to make accurate predictions then the modality cannot be said to be predictive simply because it occasionally forecasts a correct answer. The above separates the scientific use of the word predict from the layperson's use of the word, which more closely resembles the words forecast, guess, conjecture, project and so forth. We will return to these points shortly.
Many philosophers of science think a theory (and, we add, a modality) can be confirmed or disconfirmed by testing the predictions it makes. Unlike the physical sciences, the biological sciences, which study complex systems, must rely on statistics and probability when assessing what the response to a stimulus will be or in discussing the likelihood that a phenomenon will occur. There is an assumption in the animal model community, one that has taken on the trappings of a theory or perhaps an overarching hypothesis, that results from experiments on animals can be directly applied to humans; that animal models are predictive. This has resulted in an unquestioned methodological approach: using animals as surrogate humans. Ironically, this hypothesis has not been questioned as hypotheses should be questioned in science, hence our calling it an overarching hypothesis. Whether or not the animal model per se can be used to predict human response can be tested, and if the results have a high enough sensitivity, specificity, and positive and negative predictive values, then the hypothesis that animals can predict human response would be verified. If verified, then one could say that animal models are predictive for humans, and if refuted then one could say animal models are not predictive for humans.
There are two very different ways animals and hypotheses are used in science. Hypotheses result in predictions that have to be tested thereby confirming or falsifying the hypothesis. Let us assume that a scientist is using animals to perform basic research. At the end of the series of animal experiments the investigator has, at most, a hypothesis about a likely human response to the same stimulus or substance, when allowances have been made for differences in body weight, exposure, and so on. The prediction that the hypothesis entails must then be tested, and this will require the gathering of human data. The prediction may be verified or it may be falsified in the light of such human data, but the evidential burden here cannot be evaded from the standpoint of basic scientific methodology. Nowhere in this use of animals to generate a hypothesis have animals been assumed predictive. LaFollette and Shanks have referred to the practice of using animals in this fashion as heuristic or hypothetical animal models (HAMs) [3,4].
This is in contrast to the hypothesis that some scientists start with, namely that animals are predictive for humans. (See Table 1.) By assuming this, these scientists make the claim that drugs and chemicals that would have harmed humans have been kept off the market secondary to results from animal tests. This is disingenuous unless we have a priori reason to assume animal models are predictive. The hypothesis was, in these cases, never tested. It would in many cases be unethical to conduct on humans the sorts of carefully controlled laboratory studies that are regularly conducted on, say, rodents. However, there are other, ethical ways to gain human data in the context of epidemiology (for example retrospective and prospective epidemiological studies), in vitro research using human tissue, in silico research, and the recent technological breakthrough of microdosing [5]. Further, it must never be forgotten that when industrial chemicals find their way into the environment, or drugs are marketed to the general population, human experiments have already taken place. Moreover, as Altman [6] has observed, there are many examples, both ancient and modern, where researchers, doubting the applicability or relevance of animal models to the human situation, have experimented on themselves – a practice that Altman points out continues to the present (recent Nobel laureate Barry Marshall being but one example). In any event, at the very least a track record of success (vis-à-vis positive and negative predictive values) using specific animal models should be evident if society is to accept hypotheses from animal testing as predictive for humans.
Table 1. Hypothesis
Therefore, we must emphasize that when discussing animals as predictive models we are discussing the overarching hypothesis that animals are predictive, not the use of animals to generate a hypothesis such as occurs in basic research.
Now is an appropriate time to discuss the concept of proof and where the burden of proof lies in science. As in law, it lies on the claimant. The null hypothesis demands that we assume there is no connection between events until such causation is proven. Thus, those claiming animal models are predictive of human responses in the context of biomedical research must show that what they are claiming is true. The burden is not on us to prove that animal models of, say, carcinogenesis or toxicity, are not predictive. It is the job of those advocating animal models as predictive to demonstrate that they are. This will require a consideration of what the evidence actually shows.
While physics deals with simple systems upon which reductionism can be practiced, biology does not always have that luxury. There are areas in biology – for example comparative anatomy – where the use of scaling principles has had genuine applicability. But biology is not physics, and there are other places – for example in some branches of predictive toxicology – where the use of such scaling factors (such as body weight^2/3) has been less than useful, for reasons we will explore below. Biological systems are certainly consistent with the laws of physics, but they have properties consequent upon internal organization, ultimately rooted in evolutionary history, that are not found in physics. This means that even when the same stimulus is applied, end results can differ markedly. The response of different humans to the same drug or disease is a well-known example of this phenomenon [7-13]. There are, however, ways to judge the predictive nature of tests even in biological complex systems. Values such as positive predictive value, sensitivity, specificity, and negative predictive value (we will discuss these values momentarily) can be calculated to confirm or refute hypotheses. Values from tests seeking to predict a response that approach what would be expected from random chance would obviously not fall into the category predictive.
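To make the kind of scaling factor referred to above concrete, the sketch below shows a conventional body-surface-area (body weight^2/3) dose conversion. It is illustrative only: the species weights and doses are hypothetical, and the point of the surrounding argument is precisely that such conversions often fail to predict human responses.

# Minimal sketch (Python) of interspecies dose scaling by body surface area,
# i.e. assuming total dose scales with (body weight)^(2/3).
# All numbers are hypothetical, for illustration only.
def scale_dose_mg_per_kg(animal_dose_mg_per_kg, animal_weight_kg, human_weight_kg):
    # If total dose ~ BW^(2/3), then dose per kg ~ BW^(-1/3),
    # so the per-kg dose is multiplied by (BW_animal / BW_human)^(1/3).
    return animal_dose_mg_per_kg * (animal_weight_kg / human_weight_kg) ** (1 / 3)

# e.g. 10 mg/kg in a 0.25 kg rat "scaled" to a 70 kg human
print(round(scale_dose_mg_per_kg(10, 0.25, 70), 2))  # -> approximately 1.53 mg/kg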
Claims about the predictive nature of animal models
According to Salmon, there are at least three reasons for making predictions:
1. because we want to know what will happen in the future;
2. to test a theory;
3. an action is required and the best way to choose which action is to predict the future [14].
In the case of carcinogenesis we want to know: (1) what will happen in the future (will the chemical cause cancer in humans?); and (3) an action is required (allow the chemical on the market or not?) and the best way to choose which action is to be able to predict the future. Neither (1) nor (3) is subtle. We want a correct answer to the question, "Is this chemical carcinogenic to humans?" or to similar questions such as, "What will this drug do to humans?" and "Is this drug a teratogen?" and "Is this the receptor used by HIV to enter the human cell?" But guessing correctly or finding correlations is not, as we have seen, the same as predicting the answer. Neither is a high degree of sensitivity alone, as we shall see, the same as prediction.
The following will help the reader gain a feel for the contours of this scientific debate.
Butcher [15], Horrobin [16], Pound et al. [17] and others [3,4,18-24] have questioned the value of using animals to predict human response. Regardless of where one stands in this debate, prediction is clearly a problem. U.S. Secretary of Health and Human Services Mike Leavitt stated in 2007:
Currently, nine out of ten experimental drugs fail in clinical studies because we cannot accurately predict how they will behave in people based on laboratory and animal studies [24].
This is a very damaging statement for those who assert that animals are predictive. For some, the facts behind this statement would, without further support, answer the prediction question. But we will continue.
This debate has recently expanded to Philosophy, Ethics, and Humanities in Medicine. Knight [25] recently questioned the use of chimpanzees in biomedical research, citing among other reasons their lack of predictability. Shanks and Pyles [26] questioned the ability of animals to predict human response, prompting Vineis and Melnick [27] to respond that animals can be used to predict human response to chemicals with reference to carcinogenesis, and that epidemics of cancer could have been prevented if animal data had been used to reduce human exposure or to ban the chemicals entirely. This claim, of animals predicting human response, is not unique [28,29].
Gad wrote in Animal Models in Toxicology (2007):
Biomedical sciences' use of animals as models to help understand and predict responses in humans, in toxicology and pharmacology in particular, remains both the major tool for biomedical advances and a source of significant controversy ...
At the same time, although there are elements of poor practice that are real, by and large animals have worked exceptionally well as predictive models for humans – when properly used ...
Whether serving as a source of isolated organelles, cells or tissues, a disease model, or as a prediction for drug or other xenobiotic action or transformation in man, experiments in animals have provided the necessary building blocks that have permitted the explosive growth of medical and biological knowledge in the later half of the 20th century and into the 21st century ...
Animals have been used as models for centuries to predict what chemicals and environmental factors would do to humans ... The use of animals as predictors of potential ill effects has grown since that time [the year 1792].
Current testing procedures (or even those at the time in the United States, where the drug [thalidomide] was never approved for human use) would have identified the hazard and prevented this tragedy [29]. (Emphasis added.)
Fomchenko and Holland observe:
GEMs [genetically engineered mice] closely recapitulate the human disease and are used to predict human response to a therapy, treatment or radiation schedule [30]. (Emphasis added.)
Hau, editor of an influential handbook on animal-based research notes:
A third important group of animal models is employed as predictive models. These models are used with the aim of discovering and quantifying the impact of a treatment, whether this is to cure a disease or to assess toxicity of a chemical compound [31].
Clearly, Hau offers the use of animals as predictive models just as we are describing.
The prediction claim is also strong when the word prediction is not actually used but is implied or linked to causality. Fomchenko and Holland continue:
Using in vitro systems and in vivo xenograft brain tumor modeling provides a quick and efficient way of testing novel therapeutic agents and targets, knowledge from which can be translated and tested in more sophisticated GEMs that faithfully recapitulate human brain tumors and will likely result in high-quality clinical trials with satisfactory treatment outcomes and reduced drug toxicities. Additional use of GEMs to establish causal links between the presence of various genetic alterations and brain tumor initiation or determining their necessity for tumor maintenance and/or progression provide us with a glimpse into other important aspects of brain tumor biology [30]. (Emphasis added.)
Fomchenko and Holland are here clearly saying what happens in animals will happen in humans; that animals are predictive. Akkina is saying the same:
A major advantage with this in vivo system [genetically modified SCID mice] is that any data you get from SCID-hu mice is directly applicable to a human situation [32].
This use of prediction is not confined to the scientific literature. It is, if anything, even more widespread when scientists are speaking to the nonscientist public.
The above examples could be multiplied without effort. Due to the ubiquitous nature of comments like the above, we can safely deduce that many in the scientific community use the word predict to mean that what happens in animal models will translate directly to humans. But is this a factual interpretation of reality?
Prediction in biological complex systems
What does constitute prediction in biological complex systems? Many justify the use of animals as predictive models by stating that animals are predictive but may not be reliably predictive. This seems to be oxymoronic. Reliably predictive would be a tautology, and a method cannot be predictive, in science, if it is not reliably so. However, we acknowledge that biology is not physics, so perhaps some leniency is needed when discussing prediction in biological complex systems. How then should we think of prediction in the context of toxicology, pathophysiology, and pharmacology? The 2 × 2 table for calculating sensitivity, specificity, positive predictive value and negative predictive value is how predictability is assessed in these contexts (see Table 2).
Table 2. Statistics used in analysis of prediction.
In biology many concepts are best evaluated by using simple statistical methods involving probability. For example, in medicine we can use a blood test to determine whether someone has liver disease. In order to ascertain how well this test actually determines the health of the liver, we calculate the sensitivity and specificity of the test along with the positive predictive value (PPV) and negative predictive value (NPV). The sensitivity of a test is the probability (measured on a scale from 0.0 to 1.0) of a positive test among people whose test should be positive – those who do in fact suffer from liver disease. Specificity is the probability of a negative test among people whose test should be negative – those without liver disease. The positive predictive value of a test is the proportion of people with positive test results who are actually positive. The negative predictive value is the proportion of people with negative test results who are actually negative. This is all quite straightforward. Very few tests have a sensitivity or specificity of 1.0 or a PPV and NPV of 1.0, but in order for a test to be useful given the demanding standards of medical practice – in this case, to tell us whether the patient actually has liver disease – it needs to have a PPV and NPV in at least the .95 to 1.0 range.
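The standard formulas follow directly from the 2 × 2 table: sensitivity = TP/(TP + FN), specificity = TN/(TN + FP), PPV = TP/(TP + FP), and NPV = TN/(TN + FN). As a minimal sketch (the liver-test counts below are hypothetical, chosen only for illustration):

# Minimal sketch (Python) of the 2 x 2 table calculations described above.
# The counts are hypothetical, for illustration only.
def predictive_stats(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # positives detected among those truly positive
    specificity = tn / (tn + fp)   # negatives detected among those truly negative
    ppv = tp / (tp + fp)           # true positives among all positive results
    npv = tn / (tn + fn)           # true negatives among all negative results
    return sensitivity, specificity, ppv, npv

# e.g. a hypothetical test applied to 1,000 patients, 100 of whom have liver disease
print(predictive_stats(tp=90, fp=40, fn=10, tn=860))
# -> approximately (0.90, 0.96, 0.69, 0.99): respectable sensitivity and specificity,
#    yet the PPV still falls well below the .95 to 1.0 range demanded above.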
By definition, when we speak of animals predicting human response in drug testing and disease research we are addressing the risks of wrong predictions and how much risk society is willing to tolerate. Troglitazone (Rezulin™) is a good example of the margin of error tolerated in medical practice today. Troglitazone was taken by well over 1 million people, with less than 1% suffering liver failure, yet the drug was withdrawn because of this side effect [33]. (Interestingly, animal studies failed to reproduce liver failure from troglitazone [34].) Rofecoxib (Vioxx™) is another example of the small percentage of morbidity or mortality tolerated in the practice of medicine vis-à-vis introducing a new drug. Figures vary, and are controversial, but it now appears that less than 1% of people who took rofecoxib experienced a heart attack or stroke as a result, yet it was also withdrawn [35]. This means that even if a test with a PPV of .99 had assured industry that rofecoxib and troglitazone were safe, the test would not have been accurate enough for society's standards. This is an important point. Medical practice does not tolerate risks (probabilities of being wrong) that are acceptable in some experiments conducted in labs. In basic research we might proceed with a study based on the outcome being more likely than not. For basic research this is acceptable. However, getting the answer wrong in medical practice has consequences; people die. Societal standards for medical practice today demand very high sensitivity, specificity, PPV and NPV from its tests. We will apply the above to animal models shortly.
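A back-of-envelope calculation, using round, hypothetical numbers of the same order as the troglitazone figures above, shows why an error rate that looks small in the laboratory is intolerable at market scale:

# Hypothetical round numbers, for illustration only.
users = 1_000_000              # roughly the scale of exposure described above
harm_rate = 0.01               # "less than 1%" taken at its upper bound
print(int(users * harm_rate))  # -> 10000 patients potentially harmed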
These standards of prediction, described above, should not be confused with those of other activities in society such as gambling in Las Vegas. If we worked out a method to be correct 51% of the time, we would gladly take that predictive ability to the blackjack table, the roulette wheel, and the craps table and leave with a fortune. Sometimes being correct 51% of the time is great!
In light of the above, it is common to use multiple tests when attempting to determine a patient's condition or evaluate a drug. If someone suggests that an animal, say a mouse, can predict human response to chemicals vis-à-vis carcinogenesis, he would need to provide data consistent with that needed for Table 2. Perhaps not one animal alone is capable of predicting human response, but when the same result occurs in two species, say a mouse and a monkey, then perhaps the results are predictive. Or perhaps animal data combined with other data translates into a high predictive value. Again, if this were the case the person making the claim should be able to provide data amenable to evaluation by the gold standard laid out in Table 2. To the best of our knowledge no such data exist.
Predicting human response
We will now discuss the actual data that does exist. The data from testing six drugs on animals was compared with the data from humans [36]. The animal tests were shown to have a sensitivity of 0.52 and the positive predictive value was 0.31. The sensitivity is about what one would expect from a coin toss and the PPV less. Not what is considered predictive in the scientific sense of the word. Values of this nature are more appropriately referred to as guesses. Because of data like this, animal modelers will occasionally use the phrase concordance rate or true positive concordance rate when judging animal tests. These terms are not in the normal prediction-relevant lexicon and are usually used to mean correlation, which has nothing to do with prediction, as we will see.
Two studies from the 1990s revealed that: (1) only 4 of 24 toxicities were found in animal data first [36]; and (2) in only 6 of 114 cases did clinical toxicities have animal correlates [37]. The sensitivity, specificity, PPV and NPV of animal models based on these studies are obviously suboptimal.
A 1994 study of 64 marketed drugs conducted by the Japanese Pharmaceutical Manufacturers Association found that 39 of 91 (43%) clinical toxicities were not forecast from animal studies [38]. (This study, as many others do, counted a prediction as positive whenever any animal species correlated with the human response. This is disingenuous, as it amounts to cherry picking the data.) Without knowing the raw data it is impossible to calculate a true PPV and NPV, but even taken at face value, 43% wrong/57% correct is not predictive.
Figures 1 and 2 illustrate graphically our contention that animals are not predictive. Both figures chart bioavailability data from three species of animals and compare it to data from humans. (Bioavailability is usually defined as the fraction of a drug that reaches the systemic circulation and reflects a number of different variables. Regardless of the correlation or lack thereof of the variables, the bioavailability of the drug is the final determinant of how much drug presents to the receptor or active site.) Figure 1 was compiled by Harris from a paper by Grass and Sinko in Advanced Drug Delivery Reviews. As the reader can see, the bioavailability of various drugs was measured in humans and three species of animals (representing primates, rodents and dogs) and the results plotted. Some of the drugs that showed high levels of bioavailability in dogs had very low levels in humans and vice versa. This was true regardless of drug or species. Some levels did correlate between species, but as a whole there was no correlation between what a drug did in humans and what it did in any given animal species or any combination thereof.
Figure 1. Human vs animal bioavailability 1. Graph generously provided by James Harris PhD, who presented it at the Center for Business Intelligence conference titled 6th Forum on Predictive ADME/Tox held in Washington, DC September 27–29, 2006 and is adapted from data that appeared in Grass GM, Sinko PJ. Physiologically-based pharmacokinetic simulation modelling. Adv Drug Deliv Rev. 2002 Mar 31;54(3):433–5.
Figure 2. Human vs animal bioavailability 2. Graph generously provided by James Harris PhD, who presented it at the Center for Business Intelligence conference titled 6th Forum on Predictive ADME/Tox held in Washington, DC, September 27–29, 2006, and is adapted from data that appeared in Arun K Mandagere and Barry Jones, Prediction of Bioavailability, in (Eds) Han van de Waterbeemd, Hans Lennernäs, Per Artursson, and Raimund Mannhold, Drug Bioavailability: Estimation of Solubility, Permeability, Absorption and Bioavailability (Methods and Principles in Medicinal Chemistry), Wiley-VCH, 2003, pp. 444–60.
Figure 2 was compiled by Harris from a chapter by Mandagere and Jones in Drug Bioavailability: Estimation of Solubility, Permeability, Absorption and Bioavailability (Methods and Principles in Medicinal Chemistry); it charts the same kind of measurements and supports the same conclusions as Grass and Sinko.
As you can see, there is little correlation between animal and human data. In some cases human bioavailability is high when bioavailability in dogs is high, but in other cases dogs and humans vary considerably. The patterns exhibited by both figures are what is frequently referred to as a shotgun pattern, meaning that if one fired a shotgun full of bird shot at a target one would see the same pattern: no precision and no accuracy. The pattern is also referred to as a scattergram, meaning that it is what one would expect from random associations.
The above illustrates why eliminating drugs in development based on animal tests presents problems. Sankar wrote in The Scientist in 2005:
The typical compound entering a Phase I clinical trial has been through roughly a decade of rigorous pre-clinical testing, but still only has an 8% chance of reaching the market. Some of this high attrition rate is due to toxicity that shows up only in late-stage clinical trials, or worse, after a drug is approved. Part of the problem is that the toxicity is assessed in the later stages of drug development, after large numbers of compounds have been screened for activity and solubility, and the best produced in sufficient quantities for animal studies.
Howard Jacob notes that rats and humans are 90% identical at the genetic level. However, the majority of the drugs shown to be safe in animals end up failing in clinical trials. "There is only 10% predictive power, since 90% of drugs fail in the human trials" in the traditional toxicology tests involving rats. Conversely, some lead compounds may be eliminated due to their toxicity in rats or dogs, but might actually have an acceptable risk profile in humans [39]. (Emphasis added.)
Again, for some this alone would settle the prediction question. But we continue.
Sensitivity is not the same as prediction. While it is true that all known human carcinogens that have been adequately studied have been shown to be carcinogenic in at least one animal species [40-42], it is also true that an irreverent aphorism in biology known as Morton's Law states: "If rats are experimented upon, they will develop cancer." Morton's Law is similar to Karnofsky's law in teratology, which states that any compound can be teratogenic if given to the right species at the right dosage at the right time in the pregnancy. The point is that it is very easy to find positive results for carcinogenicity and teratogenicity – a high sensitivity. Nonetheless, this is meaningless without also knowing specificity, positive predictive value, and negative predictive value.
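To see why high sensitivity by itself is uninformative, consider a minimal sketch in the spirit of Morton's Law: a "test" that simply labels every substance a carcinogen. The counts below are hypothetical, for illustration only.

# Illustrative only (Python): a "test" that flags every substance as a carcinogen.
def always_positive_test(true_carcinogens, non_carcinogens):
    tp, fn = true_carcinogens, 0       # every true carcinogen is flagged...
    fp, tn = non_carcinogens, 0        # ...but so is everything else
    sensitivity = tp / (tp + fn)       # always 1.0
    specificity = tn / (tn + fp)       # always 0.0
    ppv = tp / (tp + fp)               # collapses to the background prevalence
    return sensitivity, specificity, ppv

print(always_positive_test(10, 990))   # -> (1.0, 0.0, 0.01)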
Carcinogenesis
How well do animal models predict carcinogenesis? Possible carcinogens are listed in the Integrated Risk Information System (IRIS) chemicals database managed by the Environmental Protection Agency (EPA). According to Knight et al. [43] as of 1 January 2004, IRIS was unable to classify the carcinogenic status of 93 out of 160 chemicals that had been evaluated only by animal tests. The World Health Organisation also classifies chemicals according to carcinogenicity via the International Agency for Research on Cancer (IARC).
Knight et al. wrote in 2006:
For the 128 chemicals with human or animal data also assessed by the IARC, human carcinogenicity classifications were compatible with EPA classifications only for those 17 having at least limited human data (p = 0.5896). For those 111 primarily reliant on animal data, the EPA was much more likely than the IARC to assign carcinogenicity classifications indicative of greater human risk (p < 0.0001) [43].
This discrepancy is troublesome. Knight et al. discussed a study in 1993 by Tomatis and Wilbourn [44]. Tomatis and Wilbourn surveyed the 780 chemical agents or exposure circumstances listed within Volumes 1–55 of the IARC monograph series [45]. They found that "502 (64.4%) were classified as having definite or limited evidence of animal carcinogenicity, and 104 (13.3%) as definite or probable human carcinogens ... around 398 animal carcinogens were considered not to be definite or probable human carcinogens."
Knight et al. continue:
... based on these IARC figures, the positive predictivity of the animal bioassay for definite or probable human carcinogens was only around 20.7% (104/502), while the false positive rate was a disturbing 79.3% (398/502) [43].
More recent IARC classifications indicate little movement in the positive predictivity of the animal bioassay for human carcinogens. By January 2004, a decade later, only 105 additional agents had been added to the 1993 figure, yielding a total of 885 agents or exposure circumstances listed in the IARC Monographs [46]. Not surprisingly, the proportion of definite or probable human carcinogens resembled the 1993 figure of 13.3%. By 2004, only 9.9% of these 885 were classified as definite human carcinogens, and only 7.2% as probable human carcinogens, yielding a total of 17.1%.
Haseman [47] published a study in 2000 in which he revealed that 250 (53.1%) of the chemicals in the NTP carcinogenicity database were carcinogenic in at least one sex-species group. He concluded that the actual number posing a significant carcinogenic risk to humans was probably far lower. Approximately half of all chemicals tested on animals and included in the comprehensive Berkeley-based Carcinogenic Potency Database (CPDB) were carcinogenic [48].
Knight et al. conclude:
If a risk-avoidance interpretation is used, in which any positive result in male or female mice or rats is considered positive, then nine of the 10 known human carcinogens among the hundreds of chemicals tested by the NTP are positive, but so are an implausible 22% of all chemicals tested. If a less risk-sensitive interpretation is used, whereby only chemicals positive in both mice and rats are considered positive, then only three of the six known human carcinogens tested in both species are positive. The former interpretation could result in the needless denial of potentially useful chemicals to society, while the latter could result in widespread human exposure to undetected human carcinogens [43].
At this point in the debate, some will state that animal models can be useful in science and scientific research and attempt to conflate the word predict with the word useful. This is disingenuous for many reasons. First, useful is too ambiguous to mean anything. Useful to whom? Useful how? Almost anything can be useful in some sense of the word. If someone gets paid to engage in fortune telling, then fortune telling is very useful to that person. Whether it can be used to predict the future is an entirely different question. We do not deny animal models can be quite useful in certain circumstances, but this has nothing to do with whether they are predictive. Second, this is an example of bait and switch: sell animal models as predictive for humans, then, since they are not predictive, justify their use because they are useful. Freeman and St Johnston illustrate this type of disingenuousness when they state:
Many scientists who work on model organisms, including both of us, have been known to contrive a connection to human disease to boost a grant or paper. It's fair: after all, the parallels are genuine, but the connection is often rather indirect. More examples will be discussed in later chapters [49].
Third, predict has a very specific meaning in science; indeed, the concept of prediction is one thing that separates science from pseudoscience. By conflating useful and predict we diminish the respectability of science in general, putting it more on the level of selling used cars. Finally, we again acknowledge that studying animals can lead to new knowledge. This point is not in dispute.
Thalidomide
Let us take an in-depth look at one drug and the animal tests that could have been performed, and evaluate what we would have learned from them. There are many examples of animal models giving results at extreme variance from humans and even from each other, thalidomide being but one; but thalidomide occupies a special place in history, so we will use it. Thalidomide was a sedative prescribed to pregnant women in the late 1950s and early 1960s. The children of some of these women were born without limbs, a condition known as phocomelia. Could the thalidomide tragedy have been predicted and prevented on the basis of animal experimentation, as Gad [29] and others have claimed? Consider the evidence. Schardein, who has studied this tragedy, has observed:
In approximately 10 strains of rats, 15 strains of mice, 11 breeds of rabbits, 2 breeds of dogs, 3 strains of hamsters, 8 species of primates and in other such varied species as cats, armadillos, guinea pigs, swine and ferrets in which thalidomide has been tested, teratogenic effects have been induced only occasionally [50].
We remind the reader that these results, and those below, were from tests performed after thalidomide's effects had been observed in humans. Schardein also observes:
It is the actual results of teratogenicity testing in primates which have been most disappointing in consideration of these animals' possible use as a predictive model. While some nine subhuman primates (all but the bushbaby) have demonstrated the characteristic limb defects observed in humans when administered thalidomide, the results with 83 other agents with which primates have been tested are less than perfect. Of the 15 listed putative human teratogens tested in nonhuman primates, only eight were also teratogenic in one or more of the various species [51].
Manson and Wise summarized the thalidomide testing as follows:
An unexpected finding was that the mouse and rat were resistant, the rabbit and hamster variably responsive, and certain strains of primates were sensitive to thalidomide developmental toxicity. Different strains of the same species of animals were also found to have highly variable sensitivity to thalidomide. Factors such as differences in absorption, distribution, biotransformation, and placental transfer have been ruled out as causes of the variability in species and strain sensitivity [52].
Could the use of animal models have predicted thalidomide's adverse effects? Even if all the animals mentioned above had been studied, the answer is no. Different species showed a wide variety of responses to thalidomide. Once again, if you bet on enough horses you will probably find a winner, or if you cherry pick the data you will find a winner. In the present case of thalidomide, human effects were already known, so cherry picking is easy. The animal models for thalidomide discussed above were aimed at retroactively simulating known human effects. Even then, not many animal models succeeded. If the human effects were unknown, what would the case have looked like from the standpoint of prediction? In this case, to pursue the horse racing analogy, we would have numerous horses to bet on without any idea which one would win. Certainly one will win (which is not a given when testing on animals in hopes of reproducing or guessing human response), but which one? We cannot know that until after the fact, so how do we judge prospectively which horse to wager on or which animal model to choose? Which model species were relevant to the human case in advance of the gathering of human data? This is by no means a trivial question, as evolutionary closeness does not increase the predictive value of the model. Caldwell points out that relatively small biological differences between test subjects can lead to very different outcomes:
It has been obvious for some time that there is generally no evolutionary basis behind the particular drug metabolizing ability of a particular species. Indeed, among rodents and primates, zoologically closely related species exhibit markedly different patterns of metabolism [53].
The thalidomide case illustrates why the overarching hypothesis that animals are predictive for humans is wrong. Again, this overarching hypothesis is in contrast to using animals as heuristic devices where the hypotheses drawn from them must be tested.
Even if we retrospectively picked all the animals that reacted to thalidomide as humans did, we still could not say these animals predicted human response as their history of agreeing with human response to other drugs varied considerably. Prediction vis-à-vis drug testing and disease research implies a track record. Single correct guesses are not predictions. Nonhuman primates are a good example of this. They more or less reacted to thalidomide as humans did (so we will give them the benefit of the doubt as corresponding to humans in this case). However, when tested with other drugs they predicted human response about as well as a coin toss. Add to all this the fact that all the animals whose offspring exhibited phocomelia consequent to the administration of thalidomide did so only after being given doses 25–150 times the human dose [54-56] and it does not appear that any animal, group of animals, or the animal model per se could have been used to predict thalidomide's teratogenicity in humans. (Ironically, it was the thalidomide tragedy that ushered in many of the regulatory requirements for using animals.)
Thalidomide's controversial history should not interfere with our analysis, as the history in question does not overlap with our issue. The controversy revolves around which animals were tested, whether pregnant animals were tested, what the drug company knew and when they knew it, and so forth. This is immaterial, as we are analyzing the data as if it had been available before the drug was released. We are giving the animal model the maximum benefit of the doubt, and what we find is that even if all the data available today had been available then, the decision whether to release the drug would not have been informed by animal tests. Karnofsky's law is relevant here. Any drug is teratogenic if given to the right animal at the right time. Given thalidomide's profile today, physicians would advise pregnant women not to take the drug, which is what physicians advise every pregnant woman about almost every non-life-saving drug anyway, regardless of the results of animal tests.
The claim that thalidomide's effects were or could have been predicted by animals is an example of cherry picking the data.
The quantitative/qualitative controversy
We now move on to the quantitative/qualitative controversy. There is a tendency on the part of some researchers to see all differences between species as mere quantitative differences – presumably differences that can be compensated for in the context of prediction. Vineis and Melnick write:
However, we disagree with Shanks and Pyles about the usefulness of animal experiments in predicting human hazards. Based on the darwinian observation of inter-species and inter-individual variation in all biological functions, Shanks and Pyles suggest that animal experiments cannot be used to identify hazards to human health. We claim that while the activity of enzymes may vary among individuals and among species, this does not indicate that critical events in disease processes occurring after exposure to hazardous agents differ qualitatively between animal models and humans.... For the most part, differences in how laboratory animals and humans metabolize environmental agents, or in the interactions of these agents with molecular targets (e.g., DNA, enzymes, or nuclear receptors), are quantitative in nature [27].
This is very much a Newtonian way of thinking and it ignores the effects of biological evolution and the fact that animals are complex systems.
Toxicologists have known for a long time that species differences may be quantitative or qualitative [53,57]. Consider a model substrate such as phenol. Humans and rats excrete phenol through two pathways, sulfate conjugation and glucuronic acid conjugation. There is a quantitative difference between humans and rats, since the ratios of sulfate conjugation to glucuronic acid conjugation are different in each species. But there are qualitative differences too. Cats are incapable of glucuronic acid conjugation and must excrete phenol via the sulfate pathway. For pigs the reverse is true: they cannot use the sulfate pathway and must rely on glucuronic acid conjugation. (It is worth noting that there are at least seven metabolic pathways that are unique to primates – for example the aromatization of quinic acid [57].)
One lesson to be drawn from this example is that even if the same function is achieved by two species (e.g., excretion of phenol), it does not follow that they are doing so by the exact same underlying causal mechanisms. In the context of toxicology or pharmacology, these mechanistic differences can be important in assessing safety as well as pharmacological utility.
Other voices
We are not the only ones concerned about the predictive power of animal models. The scientific community itself is not marching in lock step when it comes to the predictive utility of animal models. We will take a moment to examine what some of these scientists actually say about the power of animal models to predict human responses. While the following quotes from scientists (and the above quotes from Leavitt and Sankar) of course prove nothing in the sense of mathematical proof, they nevertheless provide a window into the thinking of people well versed in the field, and as such a reasonable person should take them seriously. They should give pause to those who think that the prediction issue is one where there is no reasonable controversy.
R.J. Wall and M. Shani observe:
The vast majority of animals used as models are used in biomedical preclinical trials. The predictive value of those animal studies is carefully monitored, thus providing an ideal dataset for evaluating the efficacy of animal models. On average, the extrapolated results from studies using tens of millions of animals fail to accurately predict human responses ... We conclude that it is probably safer to use animal models to develop speculations, rather than using them to extrapolate [58].
Curry points out:
The failure, in the clinic, of at least fourteen potential neuroprotective agents expected to aid in recovery from stroke, after studies in animal models had predicted that they would be successful, is examined in relation to principles of extrapolation of data from animals to humans [59].
The above proves two things. 1. At least some members of the animal experimentation community do know what the word predict means. 2. They also know animal models are not predictive. Their analyses and conclusions, which revealed the failure of animal models, were neither new nor surprising. History reveals the same.
Discrepancies between animal-human studies and even animal-animal studies date back centuries. Percival Pott showed coal tar was carcinogenic to humans in 1776. Yamagiwa and Ichikawa showed it was carcinogenic in some animals in 1915. But even then, rabbits did not respond as mice did [60]. In 1980 there were roughly sixteen hundred chemicals known to cause cancer in rodents, but only approximately fifteen known to cause cancer in humans [61]. The Council on Scientific Affairs, publishing in the Journal of the American Medical Association in 1981, stated:
The Council's consultants agree that to identify carcinogenicity in animal tests does not per se predict either risk or outcome in human experience ... the Council is concerned about the hundreds of millions of dollars that are spent each year (both in the public and private sectors) for the carcinogenicity testing of chemical substances. The concern is particularly grave in view of the questionable scientific value of the tests when used to predict human experience [62]. (Emphasis added.)
David Salsburg of Pfizer wrote in 1983 that a report by the National Cancer Institute that examined 170 chemicals concluded that lifetime feeding studies using rodents lacked sensitivity and specificity. He stated:
If we restrict attention to long term feeding studies with mice or rats, only seven of the 19 human non-inhalation carcinogens (36.8%) have been shown to cause cancer. If we consider long term feeding or inhalation studies and examine all 26, only 12 (46.2%) have been shown to cause cancer in rats or mice after chronic exposure by feeding or inhalation. Thus the lifetime feeding study in mice and rats appears to have less than a 50% probability of finding known human carcinogens. On the basis of probability theory, we would have been better off to toss a coin [63]. (Emphasis added.)
Should we discard every drug that causes cancer in animals? Acetaminophen, chloramphenicol, and metronidazole are known carcinogens in some animal species [64,65]. Phenobarbital and isoniazid are carcinogens in rodents [60,66,67]. Does this mean they never should have been released to the market? Diphenylhydantoin (phenytoin) is carcinogenic to humans but not rats and mice [68-70]. Occupational exposure to 2-naphthylamine appears to cause bladder cancer in humans. Dogs and monkeys also suffer bladder cancer if exposed to 2-naphthylamine orally, and mice suffer from hepatomas. It does not appear to have carcinogenic properties in rats and rabbits. These are qualitative differences due to differences in the metabolism of aromatic amines [71]. It also appears that fewer genetic, epigenetic, or gene expression events are needed to induce cancer in rodents than are needed to induce cancer in humans [72-74]. (A good review of species differences in relation to carcinogenesis and why they exist is Anisimov et al. [72].)
Intraspecies differences also exist. Clofibrate, nafenopin, phenobarbital, and reserpine cause cancer in old but not young rats [68,75].
Should the above drugs, which caused cancer in some animals, have been banned? If the null hypothesis is that there is no association between animal carcinogens and human carcinogens strong enough for the animal model to be said to be predictive, then we see much evidence to support the null hypothesis but very little, if any, to refute it.
The point to be made here is that there are scientists (rather more than we have space to mention) who question the predictive and/or clinical value of animal-based research, and history is on their side. As noted above, the opinions of scientists prove nothing in and of themselves. Further, some of what we have presented could be dismissed as anecdotes, but this would be a mistake. First, the studies referenced in the previous section are just that: scientific studies, not anecdotes. Second, the examples presented are referenced; anecdotes are not (unless they are case reports, and we must remember that thalidomide's downfall started as a series of case reports). But again we must ask where the burden of proof lies. We believe the second law of thermodynamics because there has never been a case in which it was wrong. Just one such example would falsify the law. If the animal model community claims the animal model is predictive, then they must explain the examples and studies that reveal it was not. Examples such as case reports count when disproving an assertion, especially when they are then supported with studies, but cannot be used by those making the claim as proof of their overarching hypothesis. That is how science works. We did not make the rules. In summary, there is ample evidence to question, if not disprove entirely, the overarching hypothesis that animal models are predictive for humans.
To take the argument one step further, we must ask what conditions ought to be satisfied if animals are to serve as predictors of human biomedical phenomena. This is a question concerning theory and comes under the heading of philosophy of science.
Theory
The requirements that must be met in order to obtain genuine causal predictions (as opposed to mere correlations) about members of one species on the basis of test results on members of another species are very difficult to satisfy (and may at best only be approximated in a few cases).
A model or modality claiming predictability assumes identical causal properties. As researchers Carroll and Overmier explain in their recent book Animal Research and Human Health [76], and as LaFollette and Shanks also do in Brute Science [3], animals in biomedical research are frequently used as causal analogical models (CAMs). If the heart pumps blood in a chimpanzee, then we reason by analogy that it will pump blood in humans also. If fen-phen is safe for the hearts of animals, we reason by analogy that it will be safe for human hearts as well [77]. Carroll and Overmier state:
When the experimenter devises challenges to the animal and studies a causal chain that, through analogy, can be seen to parallel the challenges to humans, the experimenter is using an animal model [76].
These are examples of using animals as CAMs, or predictive models according to the traditionally used definition of the word prediction and as used by us in this article. We now discuss CAMs more fully.
Animal models in this sense involve causal analogical reasoning. First, what is a causal analogical model (CAM) and how does it involve causal analogical reasoning? The first condition that must be met in order for a thing to be considered a CAM is this: X (the model) is similar to Y (the object being modelled) in respects {a...e}. If X has an additional property f, then while f has not been observed directly in Y, Y likely also has property f [3]. This latter claim is something that needs to be tested. In the present case it means the gathering of human data.
This first condition is not enough. For instance, chimpanzees and humans (a) both have an immune system, (b) share approximately 99% of their DNA, (c) contract viruses, and so on. HIV reproduces very slowly in chimpanzees, so animal experimenters reason by analogy that it will do the same in humans [3]. This turns out to be false.
CAMs must satisfy two further conditions: (1) the common properties (a, ..., e) must be causal properties which (2) are causally connected with the property (f) we wish to project – specifically, (f) should stand as the cause(s) or effect(s) of the features (a, ..., e) in the model. When animals are used as causal analogical models the reasoning process taking us from results in the model to the system modelled is called causal analogical reasoning [3].
But it is not enough simply to point to similarities to justify cross-species extrapolation in the context of causal analogical reasoning. In complex, interactive systems such as organisms, we need to know whether there are relevant causal differences, i.e., causal disanalogies (with respect to mechanisms and pathways) that compromise the usefulness of the analogical reasoning. In other words, for a CAM to be predictive, there should be no causally-relevant disanalogies between the model and the thing being modeled. For example, there must be no properties {g, h, i} unique to either the model or the object modelled that causally interact with the common properties {a...e}, since such properties will likely compromise the predictive utility of the model.
The idea here is an old one. It concerns causal determinism – a concept that has played a fundamental role in the development of modern science. Causal determinism rests on two basic principles: (1) the Principle of Causality, according to which all events have causes; and (2) the Principle of Uniformity, according to which, for qualitatively identical systems, all other things being equal, the same cause is always followed by the same effect.
These ideas played a role in our earlier discussion of Newtonian mechanics at the beginning of this essay. In a way, the whole issue of prediction comes down to the principle of uniformity. Are the animals used to make predictions about humans qualitatively identical to humans once allowances have been made for difference in body weight or surface area? No reasonable person who understands evolutionary biology, and who knows, for example, that rodents and humans have taken very different evolutionary trajectories since the lineages leading to modern humans and rodents, respectively, diverged over seventy million years ago, will expect qualitative identity. But perhaps qualitative identity is an ideal that can only be approximated. Are humans and their animal models sufficiently similar for approximate predictions to be made? The numerous studies referenced above, say no. Why is this the case?
Vertebrates are evolved complex systems. Such systems may manifest different responses to the same stimuli due to: (1) differences with respect to genes/alleles present; (2) differences with respect to mutations in the same gene (where one species has an ortholog of a gene found in another); (3) differences with respect to proteins and protein activity; (4) differences with respect to gene regulation; (5) differences in gene expression; (6) differences in protein-protein interactions; (7) differences in genetic networks; (8) differences with respect to organismal organization (humans and rats may be intact systems, but may be differently intact); (9) differences in environmental exposures; and, last but not least, (10) differences with respect to evolutionary histories. These are some of the important reasons why members of one species often respond differently to drugs and toxins, and experience different diseases, than members of another. These ten facts alone would be sufficient for some to conclude that animal models cannot be predictive for humans; that transspecies extrapolation is impossible vis-à-vis drug response and disease research, especially when analyzed in light of the standards society today demands. (And the standards are not set too high. If you think they are, ask yourself whether, had you taken rofecoxib and been harmed, you would have accepted a PPV of .99 as acceptable.)
In biomedicine we do not have the mathematician's luxury of modeling humans and rodents by beginning, "let humans and rodents be spheres." If only it were that simple. Instead, what we do have are a lot of theoretical grounds for questioning the predictive utility of animal models. But of course, such theoretical reasoning may be dismissed as being just that. The real question is one of evidence. We have examined the evidence against the predictive utility of animal models and found it compelling, but we should now examine the evidence cited as supporting it. We will now turn our attention to the famous Olson study, which many suppose to have settled the matters at hand firmly on the side of animal models being predictive for humans.
The Famous Olson Study
The Olson study [78] purports (and has certainly been cited in this regard) to provide evidence of the vast predictive utility of animal models in assessing human toxicity. In response to an article by Shanks et al. [79], Conn and Parker quoted the Olson study, stating:
The authors have simply overlooked the classic study (Olson, Harry, et al., 2000. "Concordance of the Toxicity of Pharmaceuticals in Humans and in Animals." Regul Toxicol Pharmacol 32, 56–67) that summarizes the results from 12 international pharmaceutical companies on the predictivity of animal tests in human toxicity. While the study is not perfect, the overall conclusion from 150 compounds and 221 human toxicity events was that animal testing has significant predictive power to detect most – but not all – areas of human toxicity [80]. (Emphasis theirs.)
We encourage the reader to examine the Olson Study in its entirety. Here we include some important representative paragraphs from the Olson study and our commentary will follow. We apologize for the length of the quote but due to the importance many place on the paper, we believe a thorough examination is justified.
This report summarizes the results of a multinational pharmaceutical company survey and the outcome of an International Life Sciences Institute (ILSI) Workshop (April 1999), which served to better understand concordance of the toxicity of pharmaceuticals observed in humans with that observed in experimental animals. The Workshop included representatives from academia, the multinational pharmaceutical industry, and international regulatory scientists. The main aim of this project was to examine the strengths and weaknesses of animal studies to predict human toxicity (HT). The database was developed from a survey which covered only those compounds where HTs were identified during clinical development of new pharmaceuticals, determining whether animal toxicity studies identified concordant target organ toxicities in humans ...
The results showed the true positive HT concordance rate of 71% for rodent and nonrodent species, with nonrodents alone being predictive for 63% of HTs and rodents alone for 43%. The highest incidence of overall concordance was seen in hematological, gastrointestinal, and cardiovascular HTs, and the least was seen in cutaneous HT. Where animal models, in one or more species, identified concordant HT, 94% were first observed in studies of 1 month or less in duration. These survey results support the value of in vivo toxicology studies to predict for many significant HTs associated with pharmaceuticals and have helped to identify HT categories that may benefit from improved methods ...
The primary objective was to examine how well toxicities seen in preclinical animal studies would predict actual human toxicities for a number of specific target organs using a database of existing information ...
Although a considerable effort was made to collect data that would enable a direct comparison of animal and human toxicity, it was recognized from the outset that the data could not answer completely the question of how well animal studies predict overall the responses of humans. To achieve this would require information on all four boxes in Fig. 1, and this was not practicable at this stage. The magnitude of the data collection effort that this would require was considered impractical at this stage. The present analysis is a first step, in which data have been collected pertaining only to the left column of Fig. 1: true positives and false negatives. [See figure 3.] By definition, therefore the database only contains compounds studied in humans (and not on those that never reached humans because they were considered too toxic in animals or were withdrawn for reasons unrelated to toxicity). Despite this limitation, it was deemed useful to proceed in the expectation that any conclusions that emerged would address some of the key questions and focus attention on some of the strengths and weaknesses of animal studies ...
Figure 3. Olson figure 1.
A working party of clinicians from participating companies developed criteria for "significant" HTs to be included in the analysis. For inclusion a HT (a) had to be responsible for termination of development, (b) had to have resulted in a limitation of the dosage, (c) had to have required drug level monitoring and perhaps dose adjustment, or (d) had to have restricted the target patient population. The HT threshold of severity could be modulated by the compound's therapeutic class (e.g., anticancer vs anti-inflammatory drugs). In this way, the myriad of lesser "side effects" that always accompany new drug development but are not sufficient to restrict development were excluded. The judgments of the contributing industrial clinicians were final as to the validity of including a compound. The clinical trial phase when the HT was first detected and whether HT was considered to be pharmacology-related was recorded. HTs were categorized by organ system and detailed symptoms according to standard nomenclature (COSTART, National Technical Information Service, 1999) ...
Concordance by one or more species: Overall and by HT. Overall, the true positive concordance rate (sensitivity) was 70% for one or more preclinical animal model species (either in safety pharmacology or in safety toxicology) showing target organ toxicity in the same organ system as the HT [Fig. 4].
This study did not attempt to assess the predictability of preclinical experimental data to humans. What it evaluated was the concordance between adverse findings in clinical data with data which had been generated in experimental animals (preclinical toxicology) [78]. (Emphasis added.)
Figure 4. Olson figure 3.
Figure 5. Google Results.
The Olson Study, as noted above, has been employed by researchers to justify claims about the predictive utility of animal models. However we think there is much less here than meets the eye. Here's why:
The study was primarily conducted and published by the pharmaceutical industry. This does not, in and of itself, invalidate the study. However, one should never lose sight of the fact that the study was put together by parties with a vested interest in the outcome. If this were the only concern, perhaps it could be ignored; however, as we will now show, there are some rather more serious flaws.
The study says at the outset that it is aimed at measuring the predictive reliability of animal models. Later the authors concede that their methods are not, as a matter of fact, up to this task. This makes us wonder how many of those who cite the study have actually read it in its entirety.
The authors of the study invented new statistical terminology to describe the results. The crucial term here is "true positive concordance rate" which sounds similar to "true predictive value" (which is what should have been measured, but was not). A Google search on "true positive concordance rate" yielded twelve results (counting repeats), all of which referred to the Olson Study (see figure 5). At least seven of the twelve Google hits qualified the term "true positive concordance rate" with the term "sensitivity" – a well-known statistical concept. In effect, these two terms are synonyms. Presumably the authors of the study must have known that "sensitivity" does not measure "true predictive value." In addition you would need information on "specificity" and so on, to nail down this latter quantity. If all the Olson Study measured was sensitivity, its conclusions are largely irrelevant to the great prediction debate.
Any animal giving the same response as a human was counted as a positive result. So if six species were tested and one of the six mimicked humans, that was counted as a positive. The Olson Study was concerned primarily not with prediction, but with retroactive simulation of antecedently known human results.
Only drugs in clinical trials were studied. Many drugs tested do not actually get that far because they fail in animal studies.
"...the myriad of lesser "side effects" that always accompany new drug development but are not sufficient to restrict development were excluded." A lesser side effect is one that affects someone else. While hepatotoxicity is a major side effect, lesser side effects (which actually matter to patients) concern profound nausea, tinnitus, pleuritis, headaches and so forth. We are also left wondering whether there was any independent scientific validity for the criteria used to divide side effects into major side effects and lesser side effects.
Even if all the data are good – and they may well be – a sensitivity (i.e., true positive concordance rate) of 70% does not settle the prediction question. Sensitivity is not synonymous with prediction, and even if a 70% positive predictive value is assumed, 70% is inadequate when predicting human response. In carcinogenicity studies, the sensitivity using rodents may well be 100%; the specificity, however, is another story. That is the reason rodents cannot be said to predict human outcomes in that particular biomedical context.
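To make the distinction concrete, here is a minimal worked example (the numbers are hypothetical and chosen purely for illustration; they are not taken from the Olson data). Even a test with perfect sensitivity can have a poor positive predictive value once specificity and the prevalence of true positives are taken into account:

```python
# Hypothetical illustration: sensitivity vs. positive predictive value (PPV).
# Suppose 1,000 compounds are screened in an animal test and 100 of them
# are truly toxic to humans (all figures invented for illustration).
true_toxic = 100
non_toxic = 900

sensitivity = 1.00   # assume the test flags every truly toxic compound
specificity = 0.40   # but it also flags 60% of the harmless ones

true_positives = sensitivity * true_toxic            # 100
false_positives = (1 - specificity) * non_toxic      # 540

ppv = true_positives / (true_positives + false_positives)
print(f"PPV = {ppv:.2f}")   # ~0.16: most positive animal results are false alarms
```

Sensitivity alone, in other words, tells us how often the test catches real toxicity; it says nothing about how often a positive animal result can be trusted.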
The Olson Study is certainly interesting, but even in its own terms it does not support the notion that animal models are predictive for humans. We think it should be cited with caution. A citation search (also performed with Google on 7/23/08) led us to 114 citations for the Olson paper. We question whether caution is being used in all these citations.
Conclusion
Mark Kac stated, "A proof is that which convinces a reasonable man." Even though the burden of proof is not on us to prove animal models are not predictive, we believe we have presented a proof that would convince a reasonable man.
There are both quantitative and qualitative differences between species. This is not surprising considering our current level of knowledge vis-à-vis evo devo, gene regulation and expression, epigenetics, complexity theory, and comparative genomics. Hypotheses generate predictions, which can then be proven true or false. Predict has a very distinct meaning in science and according to some is the foundation of science itself. Prediction does not mean retrospectively finding one animal that responded to stimuli like humans and therefore saying that the animal predicted the human response, nor does it mean cherry-picking data, nor does it mean occasionally getting the right answer.
When a concept such as "Animal models can predict human response" is accepted as true, it is not functioning as a hypothesis. We have referred to this as an overarching hypothesis but could have easily referred to it as an unfounded assumption. An assumption or overarching hypothesis might in fact be true but its truth must be proven. If a modality such as animal testing or using animals to predict pathophysiology in human disease is said to be a predictive modality, then any data generated from said modality should have a very high probability of being true in humans. Animal models of disease and drug response fail this criterion.
In medicine, even positive predictive values of .99 may be inadequate for some tests, and animal models do not even roughly approximate that. Therefore, animal models are not predictors of human response. Some animals do occasionally respond to stimuli as humans do. However, how are we to know prospectively which animal will mimic humans? Advocates who maintain animals are predictive confuse sensitivity with prediction. Animal tests as a group are extremely sensitive for carcinogenicity and other biological phenomena: test one hundred different strains or species and one is very likely to react like humans. But the specificity is very low, and likewise the positive and negative predictive values. (Even if science did decide to abandon the historically correct use of the word predict, every time an animal-model advocate said animal species X predicted human response Y, she would also have to admit that animal species A, B, C, D, E and so forth predicted incorrectly. Thus the claim that animal models per se make our drug supply safer or predict facts about human disease would not be justified.)
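The point about testing many species can also be illustrated with a simple sketch (again, the figures are invented for illustration only): if each species agreed with the human outcome only by chance, testing enough species would still virtually guarantee that some species "got it right" – retrospectively.

```python
# Hypothetical illustration: with enough species, at least one will "agree"
# with the human outcome even if each agrees only by chance.
p_match = 0.5   # assumed chance that any single species matches the human result
for n_species in (1, 5, 10, 100):
    p_at_least_one = 1 - (1 - p_match) ** n_species
    print(f"{n_species:3d} species -> P(at least one matches) = {p_at_least_one:.3f}")
# With 10 species this probability is already ~0.999, yet it tells us nothing
# about which species to trust prospectively for the next compound or disease.
```

High group sensitivity of this kind is cheap; prospective prediction – knowing in advance which species to believe – is what the scientific meaning of predict requires.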
Some have suggested we should not criticize animal models unless we have better suggestions for research and testing [27]. It is not incumbent upon us to postpone criticizing animal models as not being predictive until predictive models such as in silico, in vitro or in vivo are available. Nor is it incumbent upon us to invent such modalities. Astrology is not predictive for foretelling the future; therefore we criticize such use even though we have no notion of how to go about inventing such a future-telling device.
Some have also suggested that animal models may someday be predictive and that we should so acknowledge. While this is true in the sense that anything is possible, it seems very unlikely, as genetically modified organisms have been seen to suffer the same prediction problems we have addressed [16,81-87] and, as mentioned, different humans have very different responses to drugs and disease. Considering our current understanding of complex systems and evolution, it would be surprising if one species could be used to predict outcomes in another at the fine-grained level where our study of disease and drug response is today and to the accuracy that society demands from medical science.
There are direct and indirect consequences to this misunderstanding of what prediction means. If we did not allow on the market any chemical or drug that causes cancer, or is teratogenic, or causes severe side effects in any species, then we would have no chemicals or drugs at all. Furthermore, there is a cost to keeping otherwise good chemicals off the market. We lose: treatments, perhaps even cures; the income that could have been generated; and new knowledge that could have been gained from learning more about the chemical. These are not insignificant downsides. Since we now understand vis-à-vis personalized medicine that even humans differ in their response to drugs and disease, and hence one human cannot predict what a drug will do to another human, it seems illogical to seek predictive models in species entirely different from humans. If we truly want predictive tests and research methods (and we do), it would seem logical to start looking intraspecies, not interspecies.
References
Sarewitz D, Pielke RA Jr: Prediction in Science and Policy. In Prediction: Science, Decision Making, and the Future of Nature Edited by: Sarewitz D, Pielke RA Jr, Byerly R Jr. Island Press; 2000:11-22.
Quine W: Quiddities: An Intermittently Philosophical Dictionary Cambridge: The Belknap Press of Harvard University Press; 2005.
LaFollette H, Shanks N: Brute Science: Dilemmas of animal experimentation London and New York: Routledge; 1996.
Greek R, Greek J: Specious Science New York: Continuum Int; 2002.
Xceleron [http://www.xceleron.com/metadot/index.pl]
Altman L: Who Goes First? The Story of Self-Experimentation in Medicine University of California Press; 1998.
Bruder CE, Piotrowski A, Gijsbers AA, Andersson R, Erickson S, de Stahl TD, Menzel U, Sandgren J, von Tell D, Poplawski A, Crowley M, Crasto C, Partridge EC, Tiwari H, Allison DB, Komorowski J, van Ommen GJ, Boomsma DI, Pedersen NL, den Dunnen JT, Wirdefeldt K, Dumanski JP: Phenotypically concordant and discordant monozygotic twins display different DNA copy-number-variation profiles. Am J Hum Genet 2008, 82:763-771.
Fraga MF, Ballestar E, Paz MF, Ropero S, Setien F, Ballestar ML, Heine-Suner D, Cigudosa JC, Urioste M, Benitez J, Boix-Chornet M, Sanchez-Aguilera A, Ling C, Carlsson E, Poulsen P, Vaag A, Stephan Z, Spector TD, Wu YZ, Plass C, Esteller M: Epigenetic differences arise during the lifetime of monozygotic twins. Proc Natl Acad Sci USA 2005, 102:10604-10609.
Weiss ST, McLeod HL, Flockhart DA, Dolan ME, Benowitz NL, Johnson JA, Ratain MJ, Giacomini KM: Creating and evaluating genetic tests predictive of drug response. Nat Rev Drug Discov 2008, 7:568-574.
Kaiser J: Gender in the pharmacy: does it matter? Science 2005, 308:1572.
Willyard C: Blue's clues. Nat Med 2007, 13:1272-1273.
Couzin J: Cancer research. Probing the roots of race and cancer. Science 2007, 315:592-594.
Holden C: Sex and the suffering brain. Science 2005, 308:1574.
Salmon W: Rational Prediction. Philosophy of Science 1998:433-444.
Butcher EC: Can cell systems biology rescue drug discovery? Nat Rev Drug Discov 2005, 4:461-467.
Horrobin DF: Modern biomedical research: an internally self-consistent universe with little contact with medical reality? Nat Rev Drug Discov 2003, 2:151-154.
Pound P, Ebrahim S, Sandercock P, Bracken MB, Roberts I: Where is the evidence that animal research benefits humans? BMJ 2004, 328:514-517.
Editorial: The time is now. Nat Rev Drug Discov 2005, 4:613.
Littman BH, Williams SA: The ultimate model organism: progress in experimental medicine. Nat Rev Drug Discov 2005, 4:631-638.
Uehling M: Microdoses of Excitement over AMS, 'Phase 0' Trials. Bio-IT World 2006.
Dixit R, Boelsterli U: Healthy animals and animal models of human disease(s) in safety assessment of human pharmaceuticals, including therapeutic antibodies. Drug Discovery Today 2007, 12:336-342.
Greek R, Greek J: Sacred Cows and Golden Geese: The Human Cost of Experiments on Animals New York: Continuum Int; 2000.
Greek J, Greek R: What Will We Do if We Don't Experiment on Animals. Continuum 2004.
FDA Issues Advice to Make Earliest Stages Of Clinical Drug Development More Efficient [http://www.fda.gov/bbs/topics/news/2006/NEW01296.html]
Knight A: The beginning of the end for chimpanzee experiments? Philos Ethics Humanit Med 2008, 3:16.
Shanks N, Pyles RA: Evolution and medicine: the long reach of "Dr. Darwin". Philos Ethics Humanit Med 2007, 2:4.
Vineis P, Melnick R: A Darwinian perspective: right premises, questionable conclusion. A commentary on Niall Shanks and Rebecca Pyles' "evolution and medicine: the long reach of "Dr. Darwin"". Philos Ethics Humanit Med 2008, 3:6.
Debate title: Animals are predictive for humans [http://video.google.com/videoplay?docid=-8464924004908818871&q=madison+debate+animal&total=5&start=0&num=30&so=0&type=search&plindex=0]
Gad S: Preface. In Animal Models in Toxicology Edited by: Gad S. CRC Press; 2007:1-18.
Fomchenko EI, Holland EC: Mouse models of brain tumors and their applications in preclinical trials. Clin Cancer Res 2006, 12:5288-5297.
Hau J: Animal Models. In Handbook of Laboratory Animal Science Animal Models Volume II. 2nd edition. Edited by: Hau J, van Hoosier GK Jr. CRC Press; 2003:2-8.
Staff: Of Mice...and Humans. Drug Discovery and Development 2008, 11:16-20.
FDA panel recommends continued use of controversial diabetes drug [http://www.cnn.com/HEALTH/9903/26/rezulin.review.02/index.html]
Masubuchi Y: Metabolic and non-metabolic factors determining troglitazone hepatotoxicity: a review. Drug Metab Pharmacokinet 2006, 21:347-356.
Topol EJ: Failing the public health – rofecoxib, Merck, and the FDA. N Engl J Med 2004, 351:1707-1709.
Heywood R: Clinical Toxicity – Could it have been predicted? Post-marketing experience. Animal Toxicity Studies: Their Relevance for Man 1990:57-67.
Spriet-Pourra C, Auriche M: Drug Withdrawal from Sale. New York 2nd edition. 1994.
Igarashi T: The duration of toxicity studies required to support repeated dosing in clinical investigation – A toxicologist's opinion. In CMR Workshop: The Timing of Toxicological Studies to Support Clinical Trials Edited by: Parkinson CNM, Lumley C, Walker SR. Boston/UK: Kluwer; 1994:67-74.
Sankar U: The Delicate Toxicity Balance in Drug Discovery. The Scientist 2005, 19:32.
Wilbourn J, Haroun L, Heseltine E, Kaldor J, Partensky C, Vainio H: Response of experimental animals to human carcinogens: an analysis based upon the IARC Monographs programme. Carcinogenesis 1986, 7:1853-1863.
Rall DP: Laboratory animal tests and human cancer. Drug Metab Rev 2000, 32:119-128.
Tomatis L, Aitio A, Wilbourn J, Shuker L: Human carcinogens so far identified. Jpn J Cancer Res 1989, 80:795-807.
Knight A, Bailey J, Balcombe J: Animal carcinogenicity studies: 1. Poor human predictivity. Altern Lab Anim 2006, 34:19-27.
Tomatis L, Wilbourn L: Evaluation of carcinogenic risk to humans: the experience of IARC. In New Frontiers in Cancer Causation Edited by: Iversen. Washington, DC: Taylor and Francis; 2003:371-387.
IARC: IARC Monographs on the Evaluation of Carcinogenic Risks to Humans Lyon: IARC; 1972.
IARC monographs programme on the evaluation of carcinogenic risks to humans [http://monographs.iarc.fr]
Haseman JK: Using the NTP database to assess the value of rodent carcinogenicity studies for determining human cancer risk. Drug Metab Rev 2000, 32:169-186.
Gold LS, Slone TH, Ames BN: What do animal cancer tests tell us about human cancer risk?: Overview of analyses of the carcinogenic potency database. Drug Metab Rev 1998, 30:359-404.
Freeman M, St Johnston D: Wherefore DMM? Disease Models & Mechanisms 2008, 1:6-7.
Schardein J: Drugs as Teratogens CRC Press; 1976.
Schardein J: Chemically Induced Birth Defects. Marcel Dekker 1985.
Manson J, Wise D: Teratogens. Casarett and Doull's Toxicology 4th edition. 1993:228.
Caldwell J: Comparative Aspects of Detoxification in Mammals. In Enzymatic Basis of Detoxification Volume 1. Edited by: Jakoby W. New York: Academic Press; 1980.
Runner MN: Comparative pharmacology in relation to teratogenesis. Fed Proc 1967, 26:1131-1136.
Keller SJ, Smith MK: Animal virus screens for potential teratogens. I. Poxvirus morphogenesis. Teratog Carcinog Mutagen 1982, 2:361-374.
Staples RE, Holtkamp DE: Effects of Parental Thalidomide Treatment on Gestation and Fetal Development. Exp Mol Pathol 1963, 26:81-106.
Caldwell J: Problems and opportunities in toxicity testing arising from species differences in xenobiotic metabolism. Toxicol Lett 1992, 64–65(Spec No):651-659.
Wall RJ, Shani M: Are animal models as good as we think? Theriogenology 2008, 69:2-9.
Curry SH: Why have so many drugs with stellar results in laboratory stroke models failed in clinical trials? A theory based on allometric relationships. Ann N Y Acad Sci 2003, 993:69-74. discussion 79–81
Shubick P: Statement of the Problem. In Human Epidemiology and Animal Laboratory Correlations in Chemical Carcinogenesis Edited by: Coulston F, Shubick P. Ablex Pub; 1980:5-17.
Coulston F: Final Discussion. In Human Epidemiology and Animal Laboratory Correlations in Chemical Carcinogenesis Edited by: Coulston F, Shubick P. Ablex; 1980:407.
Council_on_Scientific_Affairs: Carcinogen regulation. JAMA 1981, 246:253-256.
Salsburg D: The lifetime feeding study in mice and rats – an examination of its validity as a bioassay for human carcinogens. Fundam Appl Toxicol 1983, 3:63-67.
IARC: IARC Working group on the evaluation of carcinogenic risks to humans. Lyon 1972, 1–78:.
Sloan DA, Fleiszer DM, Richards GK, Murray D, Brown RA: Increased incidence of experimental colon cancer associated with long-term metronidazole therapy. Am J Surg 1983, 145:66-70.
Clemmensen J, Hjalgrim-Jensen S: On the absence of carcinogenicity to man of phenobarbital. In Human Epidemiology and Animal Laboratory Correlations in Chemical Carcinogenesis Edited by: Coulston F, Shubick P. Ablex Pub; 1980:251-265.
Clayson D: The carcinogenic action of drugs in man and animals. In Human Epidemiology and Animal Laboratory Correlations in Chemical Carcinogenesis Edited by: Coulston F, Shubick P. Ablex Pub; 1980:185-195.
Anisimov V: Carcinogenesis and Aging Boca Raton: CRC Press; 1987.
Anisimov V: Molecular and Physiological Mechanisms of Aging St Petersburg: Nauka; 2003.
Dilman VM, Anisimov VN: Effect of treatment with phenformin, diphenylhydantoin or L-dopa on life span and tumour incidence in C3H/Sn mice. Gerontology 1980, 26:241-246.
IARC: Some aromatic amines, hydrazine and related substances, n-nitroso compounds and miscellaneous alkylating agents. IARC monograph on the evaluation of carcinogenic risks to humans, Lyon 1974, 4.
Anisimov VN, Ukraintseva SV, Yashin AI: Cancer in rodents: does it tell us about cancer in humans? Nat Rev Cancer 2005, 5:807-819.
Hahn WC, Weinberg RA: Modelling the molecular circuitry of cancer. Nat Rev Cancer 2002, 2:331-341.
Rangarajan A, Weinberg RA: Opinion: Comparative biology of mouse versus human cells: modelling human cancer in mice. Nat Rev Cancer 2003, 3:952-959.
Anisimov VN: Age as a risk in multistage carcinogenesis. In Comprehensive Geriatric Oncology 2nd edition. Edited by: Balducci L, Ershler WB, Lyman GH. M E: Informa Healthcare. Taylor and Francis group; 2004:75-101. 157–178
Overmier JB, Carroll ME: Basic Issues in the Use of Animals in Health Research. In Animal Research and Human Health Edited by: Carroll ME, Overmier JB. American Psychological Association; 2001:5.
Kolata G: 2 Top Diet Drugs Are Recalled Amid Reports of Heart Defects. New York Times. New York 1997.
Olson H, Betton G, Robinson D, Thomas K, Monro A, Kolaja G, Lilly P, Sanders J, Sipes G, Bracken W, Dorato M, Van Deun K, Smith P, Berger B, Heller A: Concordance of the toxicity of pharmaceuticals in humans and in animals. Regul Toxicol Pharmacol 2000, 32:56-67.
 Shanks N, Greek R, Nobis N, Greek J: Animals and Medicine: Do Animal Experiments Predict Human Response? Skeptic 2007, 13:44-51.
Conn P, Parker J: Letter. Animal research wars. Skeptic 2007, 13:18-19.
Van Regenmortel MH: Reductionism and complexity in molecular biology. Scientists now have the tools to unravel biological complexity and overcome the limitations of reductionism. EMBO Rep 2004, 5:1016-1020.
Morange M: A successful form for reductionism. The Biochemist 2001, 23:37-39.
Morange M: The misunderstood gene Cambridge: Harvard University Press; 2001.
Mepham TB, Combes RD, Balls M, Barbieri O, Blokhuis HJ, Costa P, Crilly RE, de Cock Buning T, Delpire VC, O'Hare MJ, Houdebine LM, van Kreijl CF, Meer M van der, Reinhardt CA, Wolf E, van Zeller AM: The Use of Transgenic Animals in the European Union: The Report and Recommendations of ECVAM Workshop 28. Altern Lab Anim 1998,
Liu Z, Maas K, Aune TM: Comparison of differentially expressed genes in T lymphocytes between human autoimmune disease and murine models of autoimmune disease. Clin Immunol 2004, 112:225-230.
Dennis C: Cancer: off by a whisker. Nature 2006, 442:739-741.
Houdebine LM: Transgenic animal models in biomedical research. Methods Mol Biol 2007, 360:163-202.
theculturedmarxist · 4 years
Link
If someone were to tell you that major and influential business sectors like the fossil fuel and health insurance industries simply don’t exist, or imply that major corporations like ExxonMobil and Cigna don’t try to manipulate public opinion and advance a political agenda in order to protect and maximize their profits, you might find it hard to contain your laughter.
But looking at corporate media’s coverage of corporate media, one gets the sense that anyone who dares to suggest that media corporations like Comcast-owned MSNBC, AT&T-owned CNN or News Corp–owned Fox News have their own commercial interests—which incentivize them to push pro-corporate politics—are kooky “conspiracy theorists.”
That’s really strange. After all, there are plenty of reports from corporate media discussing how major oil and health insurance companies spend fortunes to propagandize Americans into believing that a single-payer healthcare system would be disastrous, or that the climate crisis really isn’t that serious, despite all evidence to the contrary (FAIR.org, 1/24/20, 1/31/20). There are whistleblowers like former Cigna PR executive Wendell Potter who revealed how he, along with other paid corporate propagandists, cultivated “contacts and relationships among journalists and other media gatekeepers,” and learned from the tobacco industry’s “groundbreaking work in stealth PR” in order to develop talking points and advance a political agenda to protect industry profits.
So why exactly should we trust for-profit media outlets to be impartial and have their news coverage untainted by their own business interests?
Throughout the 2020 election cycle, FAIR (7/17/19, 8/21/19, 1/30/20, 4/7/20) has documented how corporate media have been trying to play kingmaker by aggressively pushing centrist and right-wing Democratic presidential candidates like Joe Biden onto the electorate, while assailing progressives like Bernie Sanders as “unelectable.” Now that Sanders has dropped out of the race, it’s worth examining the role propagandistic and hostile media coverage played throughout the primary in determining the outcome.
Analyzing the paradoxical phenomenon of the sizable “Socialists for Biden” voting bloc, FAIR’s founder Jeff Cohen (Common Dreams, 3/16/20) noted that although continuous exit polls confirm that most voters agree with Sanders ideologically, many nevertheless voted for Biden, because they perceived him to be a more “electable” candidate against Donald Trump.
Although several people have debunked the myth of “low” youth voter turnout in this election cycle (FAIR.org, 2/26/20; Films for Action, 3/5/20; Atlantic, 3/17/20), it’s true that older voters turned out in massive numbers to support Biden. On Twitter (3/14/20), journalist Malaika Jabali attributed the “generational divide” in voting behavior to an “information divide,” and argued that many older voters don’t suffer from a lack of information, so much as too much information from different sources compared to younger voters.
That influential media outlets like CNN, MSNBC, NPR, the New York Times and Washington Post continue to exercise a formidable class-control function on behalf of their owners and advertisers seems to be borne out by data confirming Jabali’s analysis.
Pew Research (12/10/18) found that although social media has become a more popular source for news,  television still retains supremacy, with 49% of US adults receiving news most often from TV. Whereas young adults aged 18 to 29 receive 36% of their news from social media and 16% from TV, older voters aged 50–64 receive 65% of their news from TV and only 14% from social media, and voters older than 65 receive a whopping 81% of their news from TV and a mere 8% from social media.
Pew (9/26/19) also documented a striking partisan divide on Americans’ trust in the media, with 69% of Democrats having a “great” or “fair” amount of trust in the media, compared to only 15% of Republicans.
Other media studies of cable news like CNN and MSNBC confirmed their pivotal role as an anti-Sanders attack machine (FAIR.org, 1/30/20). According to the Norman Lear Center (5/19), self-identified liberals watch MSNBC at three times the rate of moderates and ten times the rate of conservative viewers. Branko Marcetic (In These Times, 11/13/19) documented that MSNBC’s August–September 2019 coverage of the Democratic primary not only emphasized electability over policy issues, but also talked about Biden three times as often as Sanders, with Biden receiving fewer negative mentions (11%) than Sanders (21%). Another survey by In These Times (3/9/20) of CNN’s coverage of the 24 hours after Sanders’ and Biden’s respective massive wins in Nevada and South Carolina found that Sanders received three times more negative coverage than Biden, despite winning by similar margins.
Given Sanders’ massive advantages over Biden when it came to campaign staff and volunteers, organizational and online presence, ad buys as well as money in Super Tuesday states, it’s clear that the media blitz following Biden’s South Carolina win played a decisive role in propelling him to victory in states he didn’t even campaign in (New York Times, 2/26/20).
Yet, in what is truly a collective galaxy-brain level take, corporate media appeared to deny their own existence and how the profit motive compromised their coverage throughout the primary.
Whenever corporate media discuss themselves, they frequently use scare quotes around the term “corporate media” (e.g., Washington Post, 10/24/19; Politico, 8/13/19), as if the term is referring to a nonexistent entity or a figment of their audience’s imagination. This is in stark contrast to their alarmist attitude towards foreign state media outlets like RT and Xinhua, which are frequently referred to as “propaganda” and “state media”—no quotation marks required (New York Times, 3/8/17, 2/18/20).
Yet when they weren’t suggesting they were imaginary, corporate media were also fully capable on occasion of discussing their enormous impact on the race. Vanity Fair’s “Joe Biden, Revenant, Was an Irresistible Media Story—and It Helped Win Him Super Tuesday” (3/5/20) described how Biden campaign aides were gloating to CNN about riding their “earned-media tsunami” to victory in Super Tuesday—referring to coverage that wasn’t paid for following Biden’s South Carolina win—and estimated to be worth at least $72 million during those crucial days.
Despite noting that Sanders actually had more free coverage ($156 million) during this time period from the same “‘corporate media’” which had “written him off” earlier, Vanity Fair argued that media narratives trump any other factor (including money), with Sanders’ narrative being largely negative in contrast to Biden’s:
In recent days Biden has basked in mostly positive coverage, with TV pundits citing his South Carolina victory in arriving at a consensus narrative: Biden, despite poor showings in all of the early-primary states, is the comeback candidate peaking at the perfect moment…. Following Biden’s Saturday blowout, the media narrative shifted from Sanders being the momentum candidate to questions about whether his campaign was constrained by a ceiling due to his poor South Carolina performance, particularly with black voters, the most consistent Democratic voting bloc.
Corporate media frequently noted how Sanders has been their most frequent critic when he was on the campaign trail, and even when they grudgingly admitted its validity at times, they treated Sanders’ media criticism as an ideological perspective on the media, rather an uncontroversial description. Politico (8/13/19) wrote that “Sanders has long accused the ‘corporate media’ of putting the interests of the elite above those of the majority of Americans.” Vanity Fair (2/18/20) wrote: “Sanders has long contended that the agenda of ‘corporate media’ doesn’t necessarily reflect the people’s needs, and his 2020 campaign has doubled as a rolling media criticism shop.” The New York Times (3/5/20) also gaslit readers by attributing critique of the “‘corporate media,’” and MSNBC’s hosts for pushing an “‘establishment’” perspective, merely to Sanders and the “activist left,” as if their critique were only a sectarian complaint.
The Washington Post’s media critic Erik Wemple (10/24/19) mocked Sanders’ critique of the “‘corporate media,’” implying that Sanders hasn’t “done enough research” to “tease out tendencies,” despite writing that “attacking the ‘corporate media’ is good politics for Sanders, and his critiques sometimes land with heft and reason.” Post media columnist Margaret Sullivan (2/12/20) glancingly acknowledged media hostility towards Sanders when she observed that Sanders kept dissing “what he calls the corporate media,” and that his “ardent followers bond with him and with one another by despising the mainstream media, often enough with good reason”—yet she failed to explain this hostility in structural terms regarding media ownership and commercial interests. In the Post’s “Bernie Sanders’s Bogus Media Beef,” Aaron Blake (8/14/19) cited executive editor Marty Baron dismissing Sanders’ claims as a “conspiracy theory,” while the Post’s Paul Waldman (8/14/19) dismissed Sanders’ media criticism as “something in common with pretty much every candidate,” and breathtakingly asserted that “ideological bias is usually the least important.”
Waldman’s assessment isn’t shared by FAIR (Extra!, 10/89), or by Politico’s founding editor John Harris (11/7/19), who admitted that “the pervasive force shaping coverage of Washington and elections is what might be thought of as centrist bias, flowing from reporters and sources alike.”
Another approach to dismissing structural media criticism has been to portray Sanders and Trump’s media criticisms as equally wacky conspiracy theories (FAIR.org, 1/24/20). CNN’s Chris Cillizza (8/13/19) asserted that Sanders’ critique of the Jeff Bezos–owned Washington Post’s coverage is “absolutely no different than what Trump does.” Politico’s John Harris (2/13/20) bemoaned the “dilution of mainstream media’s institutional power” and pined for the days where editors at “major news institutions possessed enormous power” to “summon sustained national attention on subjects they deemed important” with their story selection and framing, while denouncing Sanders for following the “Trump precedent” in “taking flight from public accountability.”
When corporate media didn’t dismiss their bias against him, they sunnily described how Sanders didn’t seem to need fairer coverage from corporate media—and cable news in particular—because nonprofit media outlets, with considerably less resources and reach, are increasingly picking up the slack. Citing the “formidable” influence of “alternative media,” the Los Angeles Times (12/12/19) argued that “coverage in what Sanders likes to disparage as the ‘corporate media’ may matter less to him than to any of his rivals because of the benefit he derives from a surging alternative media ecosystem.” The New Republic (2/12/20, 2/28/20) acknowledged MSNBC’s hostile posture towards Sanders, yet also failed to explain that bias in terms of corporate interests, while arguing that Sanders’ campaign strategy of relying on an alternative media infrastructure to run “against the ‘corporate media’” and “withstand attacks from mainstream networks” has “worked wonders.”
Strikingly, in all these reports, corporate media either misrepresented Sanders’ proposed solutions to corporate media bias or omitted them altogether. Vermont journalist Paul Heintz (Washington Post, 2/26/19), for example, chided Sanders for not understanding what a “free press” does, and claimed that Sanders’ remedy for corporate media is merely “uncritical, stenographic coverage of his agenda.”
In fact, Sanders’ op-ed in the Columbia Journalism Review (8/26/19) echoed many of FAIR’s criticisms of corporate media and proposed solutions:
Today, after decades of consolidation and deregulation, just a small handful of companies control almost everything you watch, read and download. Given that reality, we should not want even more of the free press to be put under the control of a handful of corporations and “benevolent” billionaires who can use their media empires to punish their critics and shield themselves from scrutiny….
In my administration, we are going to institute an immediate moratorium on approving mergers of major media corporations until we can better understand the true effect these transactions have on our democracy…. We must also explore new ways to empower media organizations to collectively bargain with these tech monopolies, and we should consider taxing targeted ads and using the revenue to fund nonprofit civic-minded media.
Setting aside the interlocking commercial interests mass media corporations share with other industries and advertisers funding their coverage (FAIR.org, 8/1/17), just as one can expect the healthcare and fossil fuel industries to launch propaganda campaigns to protect their profits (Intercept, 11/20/18; Guardian, 10/23/19), one can reliably predict these same media corporations to oppose any political agenda that harms their own profitability. Given Sanders’ opposition to future mergers and corporate consolidation of mass media giants, proposals to wield antitrust legislation against Google and Facebook, and levying new taxes to fund nonprofit media outlets, is it any surprise that for-profit news sources opposed his candidacy (Politico, 8/28/19)?
Perhaps future media criticism might sound less “conspiratorial” if we simply referred to outlets like MSNBC as “Comcast,” CNN as “AT&T” and the Washington Post as “Jeff Bezos” instead. When one understands corporate media as an industry in themselves, decisions to have a centrist bias to maximize profits by appealing to liberals and conservatives alike, or creating “information silos” to sell the news as a commodity to target demographics, make a lot more sense. And when we understand the news industry as a top-down institution, beholden to stockholders like all other corporations, we can stop blaming journalists for bad coverage, and start blaming executives like Les Moonves and Tony Maddox for doing things like gifting billions of dollars in free coverage for Trump (FAIR.org, 3/31/20, 4/13/20).
Then maybe claiming that corporate media outlets like MSNBC and CNN are hostile to left-wing political agendas will be considered just as obvious as saying that ExxonMobil and Cigna are opposed to climate action and universal healthcare.
kartusestuff-blog · 4 years
Text
Purchasing Generic Ink Cartridges
These days, printers are getting progressively cheaper to buy. It was only a few years ago that even the simplest printer, by today's standard, cost a great deal of money. How times have changed. Today printers are much cheaper, and it isn't uncommon to find a decent quality printer for ~$50. In most cases these printers are more than capable of providing all the features a home user would ever ask for. So why are manufacturers doing this? There is more money in ink!
You may now ask yourself, "How can they make more money from a $15 ink cartridge than from a $300 printer?" Let's look at an example. You buy your printer ($300) and the manufacturer puts a 5% margin ($15) in his pocket. Now you do a bit of printing at home, so suppose you print 60 pages per week. Your cartridge has a life of 2000 pages, meaning that every 33 weeks or so – a little over 7 months – you will need new ink cartridges. So you go down to your local shop and buy 5 ink cartridges, because your printer takes 5 cartridges at a time, which costs you $75. The same manufacturer makes a 40% markup on his ink cartridges, so he makes $30 from you each time. Over the life of the printer (about 3.5 years) he can expect to make about $150 from you.
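A quick back-of-the-envelope check of those figures – this is only a sketch using the post's own estimates, and the real numbers will vary by printer and cartridge:

```python
# Rough check of the arithmetic above (all figures are the post's own estimates).
pages_per_week = 60
cartridge_life_pages = 2000
weeks_per_refill = cartridge_life_pages / pages_per_week        # ~33 weeks (~7.7 months)

refill_cost = 75            # 5 cartridges at $15 each
markup = 0.40
profit_per_refill = refill_cost * markup                        # $30

printer_life_years = 3.5
refills_over_life = printer_life_years * 52 / weeks_per_refill  # ~5.5 refill cycles
ink_profit = refills_over_life * profit_per_refill              # roughly $160

print(round(weeks_per_refill, 1), profit_per_refill, round(ink_profit))
```

That lands in the same ballpark as the "about $150" quoted above, which is the point: over its life, the printer earns the manufacturer roughly ten times more from ink than from the margin on the initial sale.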
The point of this article is to give you, the customer, a fair look at both sides of the story and the facts about generic ink cartridges. So let's begin with what we know about generic ink cartridges. Generic ink cartridges came on the scene a little more than five years ago. These early adopters would take a basic ink cartridge nozzle and build an ink tank that would fit the printer by making a simple plastic mould from the original ink cartridge. The print quality was average at best, yet it opened buyers' eyes to how cheaply a cartridge could be made, with savings of up to 80% off the original price. Read more kartuše
Today, generic cartridges have come a long way. Although many suppliers still stock very dodgy Indian- and Chinese-manufactured ink cartridges, many claim comparable printing results from their generic ink cartridges. Obviously, you do need to shop around to find the best generic ink cartridge supplier for your printer.
Another way of reducing the cost of ink cartridges is to refill them. There are two ways of refilling your ink cartridges. The first is to modify your printer and fit larger ink tanks. You then connect these larger tanks, outside of your printer, to the smaller ones inside. These larger ink tanks can be filled at home with a generic ink. The other way is to get your ink cartridges refilled at a shop. It seems these shops are springing up in every business district. You bring in your ink cartridge and after a short wait your cartridge has been refilled. Most, however, will only refill your cartridge twice – but more on that later.
So which one is better? I figured you would ask that, so let's look at a few pros and cons of each.
Refilling Ink Cartridges
Refilling is, in my opinion, the worst way to save money on ink cartridges. The most important thing to remember is that companies that offer refill ink use the same ink for every cartridge. So it doesn't matter whether your ink cartridge is from Canon or from HP – they will use the same ink to fill both. What's wrong with that? Well, every manufacturer uses a different nozzle size for spraying ink. These nozzles are highly sensitive and require the right consistency of ink, or they produce blockages, blotches and all sorts of other nasties. Another issue is that most refillers won't refill generic cartridges. This is because they don't know the quality of the cartridge and don't want to be responsible if the cartridge stops working. This means that even though you get your cartridge refilled, you will still need to buy an original ink cartridge after two refills.
Generic Ink Cartridges
Generic ink cartridges are a better choice than refills because the ink quality is generally better matched to the printer, and makers can tailor an ink to the model of the printer. However, in my experience these ink cartridges can be hit and miss, and you must be prepared for more blockages and issues than you would experience with the original manufacturer's ink cartridges. Although some people swear by generic ink cartridges, I would only recommend them for black-and-white printing where quality isn't as important. I wouldn't recommend them for important documents, for example your resume.
Genuine Ink Cartridges
Genuine ink cartridges made by the printer manufacturer are still the best way to go when quality is important, for example for digital photographs and so forth. However, perhaps the most important issue is with warranty claims. If your printer is under warranty you should not use anything except genuine ink cartridges. All printer manufacturers will tear up your warranty in your face if they can show you have been using non-genuine ink cartridges. Many manufacturers are now including "chips" in the back of ink cartridges so the printer will only work with original ink cartridges, while others have software that can detect non-genuine parts. In my opinion that is anti-competitive, but that is for the courts to decide.
katiemcg97-blog · 5 years
Text
Readerly Exploration 12/4/19 Hard Words: Why aren’t kids learning to read? Top 5 reasons to Eliminate Guided Reading
For this readerly exploration I first read, Hard words: Why aren’t kids learning to read? The main idea of this paper was that educators need to understand the science behind literacy instruction. They need to be able to know the components that make up a balanced literacy approach, and how to encompass phonics instruction within this as well, to benefit the holistic reader.
One of the major standouts that was surprising to me was the misuse of the term “balanced literacy approach” by the district teachers and principals. The paper stated, regarding the teachers’ thoughts on balanced literacy, “The main idea seemed to be: Give kids lots of good books, and with some guidance and enough practice, they become readers. 'We never looked at brain research,' she said. 'We had never, ever looked at it. Never.'” In our literacy course we have been inundated with talk of the balanced literacy approach. The balanced literacy approach is on a continuum, with the most time in teacher-supported settings where students are exposed to modeled literacy practices like teacher models, mini lessons, and interactive read alouds. This is followed by activities where the teacher slowly releases students to continue in group reading activities (shared reading, small group work), and ultimately independent work such as independent reading and assessment. The components for reading within the balanced literacy approach follow read aloud/modeled reading, shared reading, guided reading, independent reading. This misinterpretation by the Bethlehem district teachers of what the balanced approach is has evidently and significantly impacted their students’ performance. The reading opportunities they have provided for their students are mainly independent practice with minimal modeling. Although everyone (for the most part) can learn to read, some never reach their full potential due to a skewed presentation of the balanced literacy approach. This is due to a lack of professional teaching on the teacher’s part. Students need ample opportunities to observe, practice in group settings, then apply the concepts independently in all courses, but especially literacy.
The next article I read was Top Five Reasons to Eliminate Guided Reading. The main idea of this blog post was that guided reading negatively affects students and their readerly identities. I had several issues with this blog post, from spelling errors to topics weighed down with opinions rather than data. My interesting nugget for this article was that “guided reading teaches kids to hate books”. While that may be true for some students, it all comes down to how guided reading is presented, and how it prepares students to be let loose and explore books independently. Guided reading to me is the training wheels that prepare students to ride on their own. I believe that for guided reading books, although it may be tough to do so, authentic texts should be used to gauge student interest and entice them. I agree that if guided reading is only done with a worksheet, it would be boring. However, I’d recommend that this author read any text by Debbie Miller and see how using her guided reading approach might change the author’s mind about guided reading being boring or inhibiting learning.
 For my readerly habit I decided to focus on the first reading Hard words, and Read texts deeply to interpret, critique, and analyze the various layers of meaning a text might offer a reader. The task I decided to complete was Learn something about the author of the assigned course reading(s) and use that to draw conclusions about the motivation behind the reading or the credibility/quality of the writing. Before I began my exploration of the author, I reflected on the key components of the article itself and the various topics it discussed. This article reflected upon the science of literacy, and asked the question what does it mean to learn to read? how do we teach students to read? and why is there a disconnect between what literacy is within our classrooms and the data to promote successful readers? I then made a prediction of what I thought this author may be like or what her educational background or career may be like. I understood this author was well versed in the ability to convey the ideas of the balanced literacy approach and identifying specific conflicts educators have had with literacy instruction. These conflicts included explicit phonics instruction, amount of independent reading, and how educators balanced authentic texts with the basics of decoding words. I assumed that she would have a degree in education, and possibly had/has a career as a literacy coach or reading specialist. I began my search by simply doing a google search and typed in Emily Hanford. The first link that popped up said that Emily Hanford is  the Senior Education Correspondent working at APM reports. She has worked as a reporter, news editor, program host, and many more. Her work has appeared in countless journals and newspapers. Additionally, she had won an award for Excellence in Media Reporting on Education Research Award, reflecting on her ability to convey educational research to the public. This page only stated that she attended Amherst college, but did not reflect on her major of study or other educational background. On LinkedIn, I found that she had majored in English and earned her B.A. Her experience had led her to a career as an acting news director, and senior editor in Chicago then in North Carolina Public Radio. Her other work consists of similar topics as to the one I read today including phonics education, schools failing dyslexia, and other educational articles combatting the inadequacies of literacy education today. After sifting through these sources, I came to see how Emily Hanford’s collegiate background in English could have been an asset for her to report on educational topics, and additionally prepared her to report on heavy topics such as fallacies of the education system. However, I wonder if there were other moments in the author’s life that led her to further identify the factors of the reading programs within schools that are failing? If she has children how involved is she within their own literacy program? And why report on educational issues? As a reporter there are so many interesting social justice issues that I personally would have wanted to delve into. What exactly called her to advocate for the education system without a professional educational background in college? All these questions aside, due to Emily Hanford’s prominence and affluence within the journalism world and educational sphere she has proven herself as a highly commendable figure. 
After investigating the author’s background and reflecting on the process of understanding what may have drawn the author to write this article, I realize that there are sometimes pieces of the puzzle we may never have; we may never know what compelled someone to write a certain way. I can’t help but wonder what the tipping point was that led her to advocate so much for the literacy programs that attempt to educate young minds today. This readerly exploration helped me to see how important the credibility of an author’s career experience can be in authenticating what an article conveys to its audience.
https://www.linkedin.com/in/emily-hanford-a0171223/
https://www.apmreports.org/emily-hanford
This is a screenshot from my author search to gather further information about Emily Hanford.
bodhirook1138 · 5 years
Text
The Problems with Aladdin: Orientalism, Casting, and Ramadan
Originally posted on Medium.
Edward Said and Jack G. Shaheen did not do the work they did so that movies like Aladdin would still get made.
I say this as someone who has had a complicated relationship with the 1992 Aladdin animated feature. I loved it when I was a kid. For a long time, it was my favorite Disney cartoon. I remember proudly telling white friends and classmates in third grade that Aladdin was “about my people.” Although nothing is said in the movie about Aladdin’s religion, I read him as Muslim.
When I grew older, I read Jack G. Shaheen’s book, Reel Bad Arabs, which analyzes about 1,000 American films that vilify and stereotype Arabs and Muslims. Among these films is Aladdin, which Shaheen reportedly walked out of. Shaheen spoke out against lyrics in the film’s opening song: “I come from a land from a far-away place/Where they cut off your ear if they don’t like your face/It’s barbaric, but hey, it’s home.” Although he convinced Disney to remove the lyrics for the home video release, the final verse was still there: “It’s barbaric, but hey, it’s home.” As a 1993 op-ed in The New York Times wrote, “It’s Racist, But Hey, It’s Disney.”
In Edward Said’s seminal book, Orientalism (1978), he described orientalism as a process in which the West constructs Eastern societies as exotic, backwards, and inferior. According to Said, orientalism’s otherization of Arabs, Muslims, and Islam provided justification for European colonialism and Western intervention in the Middle East and Muslim-majority countries, often under the pretext of rescuing the people — especially Muslim women — from themselves. In addition to orientalism’s practices of constructing the “Orient” as the West’s “Other,” Said asserted that another major facet of orientalism involves a “western style for dominating, restructuring, and having authority over the ‘Orient.’” In other words, it is not the Arab or Muslim who gets to define themselves, but rather the West does.
There are plenty of excellent and detailed critiques out there about how the original Aladdin is filled with racist, sexist, and orientalist tropes, so there’s very little, if anything, to say that already hasn’t been said. In her extensive report, “Haqq and Hollywood: Illuminating 100 Years of Muslim Tropes And How to Transform Them,” Dr. Maytha Alhassen argues that Hollywood’s legacy of depicting Arabs and Muslims as offensive caricatures is continued in Aladdin, where the main characters like Aladdin and Jasmine are “whitewashed, with anglicized versions of Arabic names and Western European (though brown-skinned) facial features” and speak with white American accents. Alhassen notes the contrast with the “villains, Jafar, and the palace guards” who are depicted as “darker, swarthy, with undereye circles, hooked noses, black beards, and pronounced Arabic and British accents.” In another article, “The Problem with ‘Aladdin,’” Aditi Natasha Kini asserts that Aladdin is “a misogynist, xenophobic white fantasy,” in which Jasmine is sexualized and subjected to tropes of “white feminism as written by white dudes.” Not only does Jasmine have limited agency in the film, Kini writes, but her role in the film is “entirely dependent on the men around her.”
When Disney announced plans to produce a live-action remake of Aladdin, I learned through conversations that the Aladdin story is not even in the original text for Alf Layla wa Layla, or One Thousand and One Nights. It was later added by an 18th century French translator, Antoine Galland, who heard the story from a Syrian Maronite storyteller, Hanna Diyab. Galland did not even give credit to Diyab in his translation. Beyond the counter-argument that “the original Aladdin took place in China,” I am left wondering, how much of the original tale do we really know? How much did Galland change? It’s possible that Galland changed the story so significantly that everything we know about Aladdin is mostly a western, orientalist fabrication. For a more detailed account about the origins of the Aladdin tale, I recommend reading Arafat A. Razzaque’s article, “Who ‘wrote’ Aladdin? The Forgotten Syrian Storyteller.”
Disney has been boasting about how the live-action Aladdin is one of the “most diverse” movies in Hollywood, but this is an attempt to hide the fact that the casting of this film relied on racist logic: “All brown people are the same.” It’s great that an Egyptian-Canadian actor, Mena Massoud, was cast in the lead role, but there’s inconsistency elsewhere: Jasmine is played by British actress Naomi Scott, who is half Indian and half white; Jafar is played by Dutch-Tunisian actor Marwan Kenzari; and Jasmine’s father and a new character, Dalia, are played by Iranian-American actors Navid Negahban and Nasim Pedrad, respectively. The casting demonstrates that the filmmakers don’t know the differences between Arabs, Iranians, and South Asians. We are all conflated as “one and the same,” as usual.
Then there’s the casting of Will Smith as the genie. Whether deliberate or not, reinforced here is the Magical Negro trope. According to blogger Modern Hermeneut, this term was popularized by Spike Lee in 2011 and refers to “a spiritually attuned black character who is eager to help fulfill the destiny of a white protagonist.” Moreover, the author writes that Lee saw the Magical Negro as “a cleaned up version of the ‘happy slave’ stereotype, with black actors cast as simpleminded angels and saints.” Examples of the Magical Negro can be found in films like What Dreams May Come, City of Angels, Kazaam (which also features a Black genie), The Green Mile, The Adjustment Bureau, and The Legend of Bagger Vance. In the case of Aladdin, the genie’s purpose is to serve the protagonist’s dreams and ambitions. While Aladdin is Arab, not white, the racial dynamic is still problematic as the Magical Negro trope can be perpetuated by non-Black people of color as well.
I need to pause for a moment to explain that I don’t believe an Aladdin movie should only consist of Arab actors. Yes, Agrabah is a fictional Arab country, but it would be perfectly fine to have non-Arabs like Iranians, South Asians, and Africans in the movie as well. That’s not the issue I have with the casting, and this is not about identity politics. My problem is that the filmmakers saw Middle Eastern and South Asian people as interchangeable rather than setting out to explore complex racial, ethnic, and power dynamics that would arise from having ethnically diverse characters existing within an Arab-majority society. Evelyn Alsultany, an Associate Professor who was consulted for the film, states in her post that one of the ways Disney tried to justify casting a non-Arab actress for Jasmine was by mentioning that her mother was born “in another land.” However, this seems to have been Disney doing damage control after they received some backlash about Jasmine’s casting. The result is convenient erasure of an Arab woman character. Moreover, the change in Jasmine’s ethnicity does little, if anything, to reduce the film’s problematic amalgamation of Middle Eastern and South Asian cultures. Alsultany writes that “audiences today will be as hard pressed as those in 1992 — or 1922, for that matter — to identify any distinct Middle Eastern cultures beyond that of an overgeneralized ‘East,’” where “belly dancing and Bollywood dancing, turbans and keffiyehs, Iranian and Arab accents all appear in the film interchangeably.”
Other examples of how the film conflates various Middle Eastern and South Asian cultures are highlighted in Roxana Hadadi’s review: “Terms like ‘Sultan’ and ‘Vizier’ can be traced to the Ottoman Empire, but the movie also uses the term ‘Shah,’ which is Iranian monarchy.” Referring to the dance scenes and clothing, she writes they are “mostly influenced by Indian designs and Bollywood styles” while “the military armor looks like leftovers from Ridley Scott’s Kingdom of Heaven.” An intersectional approach to the diverse ethnic communities represented in the film would have made for a more nuanced narrative, but this would have required a better director.
Speaking of the director, it is amazing that, of all people, Disney hired Guy Ritchie. Because if there is any director out there who understands the importance of representation and knows how to author a nuanced narrative about Middle Eastern characters living in a fictitious Arab country, it’s… Guy Ritchie? Despite all of the issues regarding the origin of the Aladdin story, I still believed the narrative could have been reclaimed in a really empowering way, but that could not happen with someone like Guy Ritchie. It’s textbook orientalism to have a white man control the narrative. I would have preferred socially and politically conscious Middle Eastern and Muslim writers/directors to make this narrative their own. Instead, we are left with an orientalist fantasy that looks like an exoticized fusion of how a white man perceives South Asia and the Middle East.
Lastly, I have to comment on how this movie was released during the Islamic holy month of Ramadan. In fact, the film’s release date, May 24th, was just one day before the last ten days of Ramadan, which are considered to be the most important in the month. During Ramadan, Muslims around the world fast — if they are able to — from dawn to sunset every day for 30 days. The time when we break our fast, iftar, typically involves dinner and prayer with family, friends, and/or the community. But Ramadan is more than just about fasting, it’s a time of self-reflection, compassion, and strengthening our connection with Allah, our loved ones, and community. I don’t believe Disney released Aladdin during Ramadan intentionally. If anything, I think the film’s release date is reflective of how clueless and ignorant Disney is. It’s so ridiculous that it’s laughable.
I don’t want to give the impression that Muslims don’t go out to the movies during Ramadan. Of course there are Muslims who do. I just know a lot who don’t— some for religious reasons and some, like myself, for no other reason than simply not having enough time between iftar and the pre-dawn meal, sehri (I mean, I could go during the day, but who wants to watch a movie hungry, right?). Even Islamophobic Bollywood knows to release blockbuster movies on Eid, not towards the end of Ramadan.
But this isn’t about judging Muslim religiosity during the holy month. No one is “less” of a Muslim if they are going to the movie theater or anywhere else during Ramadan. My point is that Disney has not shown any consideration for the Muslim community with this movie. They did not even consider how releasing the film during Ramadan would alienate some of the Muslim audience. It’s clear that Disney did not make efforts to engage the Muslim community. Of course, there is nothing surprising about this. But you cannot brag about diversity when you’re not even engaging a group of people that represents the majority of the population you claim to be celebrating! In response to Shaheen’s critiques of the original Aladdin cartoon, a Disney distribution president at the time said Aladdin is “not just for Arabs, but for everybody.” But this is a typical dismissive tactic used to gloss over the real issues. No doubt Disney will follow the same script when people criticize the latest film.
I don’t have any interest in this movie because it failed to learn anything from the criticism it received back in 1992. The fact that a 1993 op-ed piece titled, “It’s Racist, But Hey, It’s Disney” is still relevant to the live-action version of a film that came out 27 years ago is both upsetting and sad at the same time. As I said earlier, Edward Said and Jack Shaheen did not exhaustively speak out against orientalism, exoticism, and vilification to only see them reproduced over and over again. Of course Disney refused to educate themselves and listen to people like Shaheen— their Aladdin story was never meant for us.
20 notes · View notes
venusdiet33-blog · 5 years
Text
Is 2015 Venus Factor By John Barban A Scam?
The Venus Factor is a digital fat-loss program reviewed on the fatburningfactor blog. It consists of several eBooks (PDFs), video clips, and audio recordings, plus a community forum for members only. I ordered the Venus Factor program from the official website about three weeks ago, and soon after I started following it regularly, its effects really began to reveal themselves after a month. Before beginning the Venus system, I weighed about 146 lbs. Within three months of use, I've dropped 20 kilos, so it has been an amazing success! You can see the changes I've had in the photos below.
For any woman to lose excess weight, she has to do the very same thing anyone else does: use up more calories than she consumes. That sounds easy, but it takes a lot of work to adapt your lifestyle and carry out these changes day after day. Before you and your family get started, take some time to learn about the fundamentals of fat loss and precisely what it takes to see final results. In fact, you are undoubtedly better off taking a walk every evening before you even start searching for fat-loss help online, because you will have to get the right amount of exercise in order to shed pounds. Moreover, these weight-loss plans generally deliver a quick initial weight loss; however, that weight will normally come back, because most people end up eating other things to make up for the missing calories.
Rethinking success is a component of Freedhoff's pitch as well. He writes that what makes keeping weight off seem "almost impossible" may be the goalposts modern society has typically set up to measure achievement. Certainly, if the goal is losing every last ounce of excess weight that some mindless chart says you are expected to burn off, then the descriptor "almost impossible" may be fair. On the other hand, if the goal is to enjoy the healthiest life that you and your family can honestly enjoy, then partial losses, often with substantial concomitant health improvements, are within your reach.
We've looked at the reasons why unbiased reviews of the Venus Factor are relatively hard to find - the program really does show great results, and thousands of women have signed up for it. There may be elements that won't appeal to absolutely everyone, as outlined above, but overall, if you and your family members are sensible and understand that work must be put in if you are going to succeed, then there is no reason at all not to give the Venus Factor a try.
Common questions about the program concern exercise frequency, cardio, and stretching, along with whether the Venus Factor diet really shows results and whether anyone has tried it; searches also turn up both testimonials and posts calling the system a scam. The Venus Factor is designed to boost women's metabolism, enhancing the attractive core within them. So prospective clients who may be doubtful about the Venus Factor as a financial investment don't have cause to worry: they are able to get their payment back should they be unhappy with the results.
Searches for the ideal fat-loss diet program for women also surface results for Garcinia Cambogia reviews and green tea supplements. Lose weight the healthy way: U.S. News analyzed many of the most popular diet plans for safe and effective weight loss, and U.S. News & World Report has ranked the Mayo Clinic Diet the #1 Best Commercial Diet Plan. A body-fat-loss diet program explains exactly how to do it.
A balanced diet chart helps you regulate the amount you eat. By recording your activities and calorie intake through the day, the chart gives you a good idea of what you need to eat to stay on track. You can also use a weight-loss diet chart provided alongside the best exercises to lose weight fast, or one from the web, although a personalized diet chart is your best bet, especially if you have a particular condition.
The Venus Factor Review / The Venus Factor scam: right after I decided I would reproduce John Barban's Venus Factor review, I realized you will not be able to post a direct link anywhere. However, there is no such restriction when it comes to posting pictures. Instead of creating a full sales page promoting your product and posting your link only to have it taken down within a few hours, try using HTML with the image so you can embed your affiliate code. Remember, Craigslist can't look through what's in your image, so you can get away with a little bit more. That's it! Now simply sit back and wait for the cash to start flowing in. Craigslist is the same as anything else: you need to determine exactly where your item will perform best and identify your market. Many of the users on Craigslist are looking for deals and steals, and sometimes even freebies. The customer base is not always prepared to spend money directly, so the free Pay Per Lead route works really well.
1 note · View note
The Low Down on the Methodology Section Uncovered
New Post has been published on https://www.cholixi.vn/the-reduced-down-on-technique-portion-uncovered/
The Low Down on the Methodology Section Uncovered
Why Almost Everything You’ve Learned About the Methodology Section Is Incorrect
Concept mapping works well, particularly when you are attempting to prepare a lucid and thoughtful essay on a rather intricate subject. The methodology chapter in a dissertation helps determine the research process for readers who aren’t familiar with it. It involves more than simply the methods you expect to use to gather information.
Data collection practices may differ from project to project. Methodology sections are a breeze to write, as they tend to be somewhat similar in structure and format.
The Concept of the Methodology Section
It is usually hard for a new or aspiring researcher to write a winning proposal simply because they lack experience. If you are now working on your methodology, or you’re teaching others how best to do this, don’t hesitate to leave a comment below. In addition, a customer may ask the writer to submit a part of the project for review and, when necessary, ask him or her to make improvements.
The concept map allows you to learn more about the relationships among a mixture of different ideas. With the specialized help of our trusted service you can expect to improve your papers and marks, which will probably make your life less complicated! The good thing is that you’ve managed to buy a findings chapter of the dissertation from us, as it’s the most tedious piece of it.
A good researcher will help you learn about the constraints of his or her studies (https://repositories.lib.utexas.edu/bitstream/handle/2152/60577/2013-X-0807-RascoeElder.txt?sequence=2). Any aspiring journalist or novelist will be able to tell you the value and importance of an editor to the whole process of content production. The reader isn’t 100% clear what the research is, and consequently they read the literature review hoping to gain more insight into what is being investigated.
Using decimals might be a little time-consuming; thus, using compatible numbers can help you pick a range within which you are able to work out your answer. It’s possible to get professional guidance here and achieve top-quality results. Because of this, you’ll need to round off the final result.
Finding the Best Methodology Section
The issue is that it’s generally uninteresting news that’s ignored by all except the industry press, and often rightly so. There are various methods that are employed in the software development process. Since the computer software is developed in small batches, changes can easily be introduced into the product.
Sign up to be notified when the training course is available! To create a worthy proposal, you always have to establish the requirements in advance; most colleges refuse to accept proposals which don’t follow the required format. Also, make sure you cite each individual source, even when you’ve referenced just a paragraph from it.
The Nuances of the Methodology Section
The methodology has to be closely connected with the planning section. The amount of time you’ve spent on a project is likely to be one of the governing factors that helps you decide which format would suit your circumstances better. The plan is eventually evaluated.
Top Choices of the Methodology Section
The cost of an essay will depend on the amount of effort the writer has to put in. Providing a good deal of detail is necessary, but you also ought to justify your choices. Additionally, a scrupulous editor usually takes the help of an expert grammar-checking website which delivers lots of corrections and writing tips.
Give examples of the new and creative ideas which you have used to complete the task. It is really not shameful to ask for help, since otherwise you may lose more. In the second part of the question, it makes sense to remain humble and give an affirmative answer.
Distributing leaflets at assorted spots in the city, along with several different locations, is also a very nice option. In addition, last-minute modifications can affect the quality of work to a great degree. Your methods should have a clear connection to your research question.
At the same time, more often than not there’s no documentation included in the spiral model, making it difficult to keep a record of your entire process. At the moment, you can get professional paper help and manage to do all the work. The need to react to customer demands is of exceptional importance, even though the changes are made much later than the actual phase.
Methodology Section Options
As a researcher, you’re usually expected to make certain the research and ideas never harm an individual in the slightest degree. A dissertation editing service is designed to meet the requirements of Ph.D. students who want to complete their dissertation. You may need to write a research paper as a part of your capstone project.
Regardless of whether you’re at school, college, or university, we’re always there to support you. The procedure for writing a research proposal doesn’t vary depending on its level. A research proposal is a laconic and information-dense summary of the research you plan to carry out.
A good researcher will always help you find out more about the limits of his or her studies. Any aspiring writer and novelist will be capable of telling you the worth and significance of an editor in the whole process of content creation. The reader still isn’t completely sure what exactly the research is, and thus they check the literature review attempting to gain much more insight into what’s being studied.
The Most Popular Methodology Section
Basically, see to it that your methods will truly address the problem. Subsequently, it’s equally important to look at the strengths and limitations of your research. Your choice of methods should be relevant to the theoretical framework outlined earlier.
The Hidden Facts About the Methodology Section
Producing an essay, or any other paper for that matter, isn’t just writing. Among the most complicated parts of your paper is the literature review. Generally speaking, you cannot cut and paste from an earlier paper.
Data collection methods may be distinct. In research reports and proposals, the methodology is something that you prepare all by yourself.
Getting the Best Methodology Section
Thankfully, creating an excellent literature review isn’t as daunting as it may sound, and by following a variety of tips and approaching the review in reasonable pieces, you can produce a really good product which bolsters the overall quality and credibility of the report as a whole. Concept mapping seems to be a fairly simple arrangement of concepts and ideas into a hierarchical structure, but you will have to build it with plenty of care, since a great concept map with well-chosen linking phrases is stylishly complex and has profound meaning. The plan is eventually covered.
The Meaning of the Methodology Section
The issue is that it’s sometimes uninteresting news that’s neglected by all except the industry media, and appropriately so in general. Also, it’s crucial that you explain the particular research methods of data collection you’re likely to use, whether they involve primary or secondary data collection. Since the desktop application is created in small batches, enhancements can readily be introduced into the product.
A typical methodology calls for a string of sequential stages in the project management process. It’s worthwhile, though, to look closely at the suggested format, as it’ll give you a feel for what you need to provide and how you’re asked to present it. You may need to add or remove one or two other sections in keeping with your needs and preferences.
If it’s necessary to write someone else’s introduction, you would want to research that person for a bit, and start off with his or her achievements or career highlights. Presenting a good deal of detail is important, but you also must justify your choices. So, the writer really needs to cover all the minor details of the solution together with the troubleshooting methods.
Another thing to think about is that you will want to convince the reader that the results you obtain are valid and reliable. For some people, it’s something we need to put up with in order to keep some sense of order and success in our day-to-day lives. You can actually order a large number of works, including help with homework and academic documents of this kind.
Clearly, making an effort to spread the word is a popular approach for doing your bit for the world you live in. You can also consider placing advertising campaigns in the magazines of different towns, cities, and regions. There’s absolutely no way we won’t have your back once you come to us for guidance.
Give examples of the latest and most inventive concepts that you have used in order to complete the task. It is not really embarrassing to request help, as you may otherwise lose more. For the subsequent part of the question, it is advisable to be humble and give an affirmative answer.
Distributing pamphlets in many areas of the city, in addition to different towns and cities, is also a very nice idea. On top of that, last-minute adjustments can affect the quality of work to a great extent. In many models it’s often difficult to pin down the beginning and end of a given phase, which isn’t true of the waterfall model.
In a basic sense, the literature reviewed is simply a research report. Sometimes it isn’t at all easy to come up with topics on which to write. Ample experience with the topic of the report is obviously required of any writer.
If you’re conducting empirical research, this should be fairly simple, since the variables must have been chosen in your research proposal. In this part of your methodology chapter, you will also want to explain how you arrived at your findings and in what ways they’re reliable.
The Benefits of the Methodology Section
The range of tools engaged on the job doesn’t keep growing with each passing day, given that the planning for them is completed at the beginning of the phase itself. Right away, you can buy skilled paper help and have the ability to complete all the plans. As for the writer, there is a money-back guarantee, particularly if you find your paper performs poorly.
The Hidden Value of the Methodology Section
On the other hand, while there are various guidelines for marketing your classes, putting them into action is a separate concern. It’s critical for individuals to make sure that deep down they know the scholarly basis for picking particular methods of research. Some people find that it’s much easier to write the introduction when the dissertation is done and they understand what the complete dissertation shows, while some might choose to start with the introduction early in the writing process so as to have some sense of the direction of the endeavor.
When it concerns serious academic papers that affect your future career, it’s much more crucial to pick out specialist term paper help. Coming up with a fundraising appeal is useful if you would like to try assorted fundraiser ideas. A resignation letter is a professional courtesy letter to inform the current employer that you’re resigning from your job, along with the reasons for your resignation.
A really good researcher will always let you know more about the boundaries of his or her research. When you choose a paper writer from our service, you don’t only get the assistance of an experienced professional, you take on a person who knows what they’re doing. You should consider that the few readers of your dissertation won’t be experts in your particular field of study, and you simply must make them able to grasp the concept of your dissertation from the start.
This table sums up important details about each philosophy, and ought to help you find which is most useful for your topic of study. Concept mapping is a fairly painless arrangement of thoughts and key phrases into a hierarchical format, but you must prepare it with plenty of care, since a good concept map with well-picked linking words is stylishly detailed and has profound significance. A suitable structure for a dissertation is just one of the factors that will affect the result of your work.
Secondly, you must avoid any charges of plagiarism, as it’s a serious offense and can cause you to pay fines. In the event of plagiarism charges, you need to provide a plagiarism report proving your case.
As a result of our practical experience, it is possible to secure remarkable academic documents when using a top custom writing service. As an example, if you were attempting to get details about shopping tendencies, you would get different results from a multiple-choice questionnaire than from a series of open interviews. Aside from that, our crew of writers has access to the world’s major libraries and databases.
2 notes · View notes
luciasaestetablog · 2 years
Text
Film Critique - The Coven of Sisters
Two weeks ago (March 21, 2022), we were assigned to watch a film and submit a paper giving a critique of the movie. We were given three films to choose from: ‘Coven of Sisters’ (Pablo Agüero), ‘I’m Thinking of Ending Things’ (Charlie Kaufman), and ‘The Photographer of Mauthausen’ (Mar Targarona). We were then required to write a critique of our chosen film and discuss the following points in a 600-800 word essay:
1) examine how the specific cinematic techniques or elements of a film support the message or point it is making,
2) identify the theoretical approaches and principles applied, and
3) analyze the consistency between the theory and cinematic techniques used by the film maker to expose the film’s themes.
Below is some (if not most) of my essay that was submitted. It critiques the film ‘Coven of Sisters’ (2020), directed by Pablo Agüero.
The film ‘Coven of Sisters’ (2020) is a Spanish historical drama directed and written by Pablo Agüero. The film is set in the Basque Country, an autonomous community in northern Spain, in the year 1609 and covers the story of five teenage girls who are captured and arrested because of their supposed association with the akelarre, a magical ritual used to summon the devil. Even though there is no evidence that the five girls are involved in witchcraft, Judge Rostegui - a French judge sent to purify the region of witchcraft - is convinced that they are witches and uses forms of torture to try to make them sign a written confession that they are, in fact, witches.
The film covers the Labourd witch-hunt of 1609, in which King Henry IV sends Pierre de Rosteguy de Lancre (Judge Rostegui) to stop all forms of witchcraft in Labourd, a French part of the Basque Country. It also covers the work of de Lancre, who wrote three books describing what he thought happened at the witches’ Sabbath, such as the devil having sexual intercourse with women. The movie also covers the ‘solution’ that authorities in Western Europe had for putting an end to witchcraft, which was to have the accused either burned, tortured, or hanged - or possibly all three at once (as seen at the beginning of the movie).
As mentioned above, a big part of the film - or if not, the main focus - is the Labourd witch hunt of 1609. I believe that the main purpose of the film was to bring more awareness about this interesting yet tragic - tragic in the sense that so many possibly innocent people were killed, just like the five teenage girls - piece of history. The film is able to bring awareness to the Labourd witch hunt of 1609 and to both Pierre de Rosteguy de Lancre and his work in an interesting way that sparks interest to viewers so they might read more about this in the future.
The film focuses on what the women in Western Europe went through during the 1600s in terms of the witch-hunts - especially in Basque Country and the Labourd witch hunt of 1609. I think that Pablo Agüero used the Realism theory that we discussed in class to show the “ugly side of life” and to provide a platform for the issue that was the killing of many innocent people because they were suspected of being witches and practicing witchcraft.
One of the strengths of this movie was the strong and accurate representation of witchcraft and how it was dealt with during this time period. Another strength was that it was shot in the Basque Country and had Basque actors cast in the movie, which made everything much more accurate and believable. However, a weakness I noticed was the scene with Jeffrey Dean Morgan and how it relates to the storyline as a whole. I understand that the doll he found played a part in the burning of the accused witch, but overall I don’t see why it was necessary to include it in the film. In conclusion, I really enjoyed the film and would definitely recommend it to other people.
0 notes
inoahfewthings-blog · 6 years
Text
Gender and Sexuality Portfolio Post Three: Connection to Popular Culture
The Evolution of Black Feminism: The Yoncé Perspective
In today’s media, there are so many influential people who use their platform to promote certain, and sometimes hidden, agendas. They use music, radio, TV, newspapers, social media, and the internet to create a strong fanbase and gain support for whatever motive they have. This status, combined with use of the media, can be detrimental, influencing biases in any population. However, media can sometimes be used for the greater good. When thinking of Black feminism and how it is portrayed in pop culture, one iconic figure comes to mind: Beyoncé – Queen Bey. Through her expression of Black culture and forward thinking on feminism, Beyoncé has reshaped the gender binary. The goal of this essay is to explain how Beyoncé has redefined Black feminism via revising the gender binary, specifically through critique and expansion.
Now, Beyoncé, being an international entertainer for over 20 years, has released many songs, videos, and performances. I looked at music from her Destiny’s Child days up until the late 2000s, but realized that during those times, she completely fell into the gender binary (possibly) to gain a larger following for the Beyhive. For that reason, I examined albums from this decade, because her feminist approach is much more pronounced. The two songs that will be analyzed in this essay are “Flawless” from her self-titled album, Beyoncé, and “Formation” from Lemonade, her most recent solo album. These songs make the most sense to analyze because they are like a Black feminist/womanist manifesto: they have very public messages of freedom and calls to action for women and men to defend the care and equity of women of color everywhere.
          In the first wave of the Yoncé perspective, Bey promotes her personal beliefs of feminism (without huge emphasis on Black culture). Before getting into the beautiful and strategic portrayal of Black feminism in “Flawless”, the problematic retelling of the gender binary will be addressed first. Bey says, “…this diamond, flawless/my diamond, flawless/this rock, flawless/my rock, flawless/I woke up like this/I woke up like this…”, pointing to her large wedding ring and running her hands up and down her body (Beyoncé, 2013). This part of the chorus retells the gender binary through prescription; it sets the superficial standard of beauty and body type while emphasizing luxurious things that women should desire and achieve (Foss, Domenico, & Foss, 2012). However, this example is so minuscule compared to the dominant message of the song which revises the gender binary. Featured on the song is Chimamanda Ngozi Adichie, Nigerian novelist and feminist. She overtly critiques society’s double standard of behavioral expectations between men and women. The following excerpt is featured on “Flawless” and is originally presented in Adichie’s TED Talk, “We Should All be Feminists”:
We teach girls to shrink themselves, to make themselves smaller. We say to girls, “You can have ambition, but not too much. You should aim to be successful, but not too successful, otherwise you will threaten the man.” Because I am female, I am expected to aspire to marriage. I am expected to make my life choices always keeping in mind that marriage is the most important. Now marriage can be a source of joy and love and mutual support, but why do we teach girls to aspire to marriage and we don’t teach boys the same? We raise girls to see each other as competitors, not for jobs or for accomplishments, which I think can be a good thing, but for the attention of men. We teach girls that they cannot be sexual beings in the way that boys are. Feminist: the person who believes in the social, political, and economic equality of the sexes. (Beyoncé, 2013).
This message vocalizes gender inequality and responds with a solution: feminism. What makes Beyoncé so influential for incorporating this excerpt into “Flawless” is that it reaches all age groups. It is unlikely that a 13-year-old girl will go on the internet and search this TED Talk. However, she is likely to have Beyoncé on repeat, which would allow her to truly internalize the questions asked by Adichie. Through music, Beyoncé strategically revises the gender binary by placing this societal critique in her song so that it affects any and every age and gender that listens to her album.
           In the second wave of the Yoncé perspective, Queen Bey gravitates more to Black culture and feminism, completely paralleling her work to Black feminism. “Formation” has two powerful portrayals of Black feminism that both critique and expand the gender binary: the music video and lyrics (Beyoncé, 2016). Beyoncé is known for the visuals that accompany her music. Within the first few moments of the video, we see Bey laying on top of a sinking police car in New Orleans, LA, giving the reminder of Hurricane Katrina. It is a political statement of critique, reminding the U.S. of all the (Black) people who suffered and lost their homes and loved ones during the natural disaster with little support of the government to aid in help. A few clips later, Blue Ivy, Beyoncé’s daughter, and friends are playing around in all white dresses. This represents another critique to the citizens and federal government of the U.S. because not many people are aware of, or even care, that there has been a rise in the abduction of Black and Latino girls and women in the U.S (CNN, 2017). This is a second reminder to the United States that little Black girls, and girls of color, are important; #BlackLivesMatter. There is then a scene of Bey and other Black women sitting in a day room dressed in all white apparel from the late-1800s or early-1900s. This visual represents Black women in a stance of power; whereas they would have been slaves or indentured servants at the time, they are dressed in fine gowns, head pieces, and jewels with fans to match. Throughout the video we see Yoncé constantly changing her hair styles. Hair is huge in the Black community and for the Queen to change from a curly fro, to an elegant natural up-do, to box braids that pass the waist adds to the power of versatility in Black women. Lastly, Beyoncé gives the visual of Black feminism by the choreography of her and her dancers literally getting into formation and repeating the lyrics “‘cause I slay” and “we gone slay”. The women made solid lines and often did moves with synchronization. This is a symbol of Black women getting in line and preparing to boss up through their own personal attributes and uplifting one another as they make their way through life. This is a critique of society that wants women to compete and see each other as enemies. Bey’s choreography emphasizes the unity that Black women have simply by the identity itself.
The omnipresence of the gender binary is smashed and revised by Queen Bey when one examines the lyrics to “Formation”. In the lyrics, she writes, “My daddy Alabama, mama Louisiana/you mix that negro with that Creole make a Texas bamma/I like my baby hair, with baby hair and afros/I like my negro nose with Jackson Five nostrils/earned all this money but they never take the country out me…” (Beyoncé, 2016). In these lyrics, we see how Beyoncé embraces her roots and Black culture while mentioning financial achievement. The message here is that you can be Black, have financial freedom, and never forget where you came from. This is a general uplifting of the entire Black community. The following lyrics epitomize Black feminism in pop culture because Bey completely contradicts the passive and dependent role of women in romantic relationships:
When he fuck me good I take his ass to Red Lobster, ’cause I slay (2x)/if he hit it right, I might take him on a flight on my chopper, ’cause I slay/drop him off at the mall, let him buy some J’s, let him shop up, ’cause I slay/I might get your song played on the radio station, ’cause I slay (2x)/You just might be a black Bill Gates in the making, ’cause I slay/I just might be a black Bill Gates in the making (Beyoncé, 2016).
To a certain extent, these lyrics are an example of the stereotypical “strong independent Black woman” persona. However, in this case, that portrayal is appropriate and empowering. Here, Beyoncé uses the expansion method of revising the gender binary to promote Black feminism; she describes multiple ways of being a Black woman which is contrary to the stereotypical roles of women in the gender binary (Foss, Domenico, & Foss, 2012). In a heterosexual relationship, men are expected to take care of women and be the financial home base. Bey, on the other hand, expands the role of women by making herself the financial powerhouse in the relationship. The last few lyrics are important because they are the product of Black feminism in pop culture. Bey positions herself in the same power dynamic that a White male billionaire is in; she takes on the role of the most entitled and privileged human being. Furthermore, she offers that possibility to other Black women!
In the first wave of the Yoncé perspective, we see how Bey focuses on the gender binary and its hypocritical biases. She addresses the issues without specific reference to Black culture. This is to get the attention and support of all women followers. However, in the second wave of the Yoncé perspective, Queen Bey grows her feminist ideology even more, but adds emphasis on Black culture and the love of being a Black woman. This strategy was so tactful because she first gained support for gender equality and later highlighted the social issues around the Black community at large and how society has cast off Black women, all while empowering Black women to push past societal expectations and to be greater than what they have written us to be. Through revision, Beyoncé consistently critiques and expands the gender binary and gives light to Black feminism. Seeing her influence on such a topic affects anyone and everyone who listens to her music and encourages them to take action in order to ensure the equity of Black women and women of color everywhere.
References
Beyoncé. (2013, December 13). Flawless. On Beyoncé [Spotify]. New York City, NY: Parkwood & Columbia.
Beyoncé. (2016, December 9). Formation [Video file]. Retrieved from https://www.youtube.com/watch?v=WDZJPJV__bQ
Foss, S. K., Domenico, M. E., & Foss, K. E. (2012). Gender Stories: Negotiating Identity in a Binary World. Long Grove, IL: Waveland Press.
Jarrett, L., Reyes, S., & Shortell, D. (2017, March 26). Missing black girls in DC spark outrage, prompt calls for federal help. CNN. Retrieved from https://www.cnn.com/2017/03/24/us/missing-black-girls-washington-dc/index.html
1 note · View note
pandemicblog · 4 years
Text
The Relationship Between Present and Past Knowledge (Selin)
Written by Selin Üçsel
“Present knowledge is wholly dependent on past knowledge.”  In this essay, I explore this claim through the lens of Natural and Human Sciences. 
“The only use of knowledge of the past is to equip us for the present,” said Alfred North Whitehead, British mathematician and philosopher. At first glance it is conspicuous that past knowledge, or knowledge which has already been produced by others, is instrumental in producing present knowledge, or that which tries to explain a previously unexplained phenomenon. Nonetheless, when one further investigates the issue, it becomes clear that there may be cases where the production of new knowledge can be independent of any previous knowledge, either in terms of the methodology used in producing knowledge or in the sense of not benefiting from any past knowledge. That is why a more nuanced approach, which takes into consideration the differing degrees and roles of past knowledge in the production of present knowledge, may be necessary, as the interaction between the two can have important impacts on the validity and accuracy of the knowledge produced. Moving from this point, the knowledge question “To what extent can new knowledge break away from past knowledge?” becomes an important one to answer, as answering it could allow us to understand the intrinsic relation between past and present knowledge, especially when the multiplicity of differing relations is considered. In order to further analyze this question, this essay will examine the Areas of Knowledge of Human Sciences and Natural Sciences.
To begin with, the production of present knowledge is based on the past knowledge of the individual. As humans gain experiences and learn new information, we strengthen the way we interpret new input from the outside world. If prior knowledge and experiences weren’t present, there wouldn’t be any assurance of the reliability of present knowledge. Although there are a few theories about this claim, psychology in the Human Sciences is one of the best areas in which to explore its truth. In psychology, “schema theory” refers to a mental representation that organizes our knowledge, beliefs and expectations. It is believed that interpretation of new information is derived from our prior knowledge and experiences. “Schema is a generalized description or a conceptual system for understanding knowledge, how knowledge is represented and how it is used.” For instance, the researchers Martin and Halverson aimed to investigate whether gender schemas have an effect on recall in 5- and 6-year-old children. Each child was shown 16 pictures. Half of them were pictures of a child performing gender-consistent activities (a boy playing with a truck) and the other half were of a child performing gender-inconsistent activities (a girl chopping wood). After one week, children were tested to see how many photos they recalled accurately. According to the results, children accurately recalled the sex of the actor performing a gender-consistent activity but could not accurately remember the sex of the actor performing a gender-inconsistent activity, and distorted the scene. As can be seen, previously acquired gender roles and schemas, even at such a young age, influence the way children’s memories are shaped, having a dramatic impact on how new knowledge is perceived and retained. That is why it can be said that under any circumstances past knowledge will have an impact, positive or negative, on the production and retention of new knowledge.
However, a new approach to knowledge production may allow the knowledge producer to break away from past knowledge and create authentic knowledge. New theories and movements can be produced uniquely, without being affected by previous theories and movements of the past. Specifically, this counterclaim can be explored once again in the Human Sciences with examples from history. As we all know, Karl Marx is the pioneer of the “Marxism” movement. “Marxism is a social, political, and economic philosophy named after Karl Marx, which examines the effect of capitalism on labor, productivity, and economic development and argues for a worker revolution to overturn capitalism in favor of communism.” Marx developed this ideology in contrast to capitalism, which is past knowledge, by negating it and producing a totally new ideology. Therefore, Marxism is not a product of past knowledge; instead, it is a new approach to economy, politics and society. In this sense, it can be said that certain knowledge can adopt an entirely new approach to knowledge production and benefit from a different set of assumptions that define the process. As such, the knowledge produced breaks away from past knowledge and does not benefit in any way from past knowledge while new knowledge is produced. In other words, by working against past knowledge, Marx’s new knowledge alters the way knowledge is produced, causing an epistemological break from past knowledge that increases the originality and applicability of the new knowledge. That is why it can be said that it becomes possible to break away from past knowledge in the Human Sciences when producing knowledge, especially when attempting to produce new knowledge by altering the underlying assumptions of past knowledge; however, this does not necessarily cause past knowledge to be completely eliminated.
On the other hand, when we investigate the relation between past knowledge and new knowledge in another AOK, such as Natural Sciences, it becomes possible to claim that a new knowledge can completely alter the way past knowledge is produced, making past knowledge invalid. A suitable example to demonstrate such cases could be the adoption of the Heliocentric model. Until then, people used to benefit from their faiths and Holy Scriptures to gain an understanding into the universe, and as such, the common belief was that the entire universe revolves around the Earth. In other words, the geocentric model, supported by the Church was the dominant theory, or knowledge, regarding the structure of the universe. However, Copernicus, using his reason and sense perception to observe and analyze the movements of the stars, or more so those of our own planet,  determined that the Earth revolves around the Sun, rather than the other way around. This new theory was widely criticized at that time, since it directly contradicted the past knowledge which was defended by one of the strongest institutions of that era: the Church. Nonetheless, despite significant resistance, the Heliocentric model which places the Sun at the center of the Solar System, gained popularity as it was further backed by calculations, observations, and evidences by other scholars such as Galileo Galilei, which eventually led the theory to be widely accepted by the general population. As seen in this case, the spread of the Heliocentric model, with all the mounting evidence to support the theory, eventually led to its success against the Geocentric model and erased it from our common understanding of the Cosmos. In short, the new knowledge produced based on reason and sense-perception gained prevalence over knowledge produced by faith, causing the latter, the past knowledge, to be dropped altogether in favor of the new knowledge.
On the contrary, since knowledge production in the Natural Sciences is a cumulative process, new knowledge is either wholly or partially dependent on past knowledge. In order to develop and explore the present information in a particular area, prior knowledge is required. Specifically, this counterclaim can be seen in the Natural Sciences with examples from today’s news. Currently, there is a pandemic going on all around the world due to the wide spread of Covid-19. For instance, Li Wenliang is the Chinese man who revealed the existence of the coronavirus. In Wuhan, he investigated several samples, worked on the genetics of this virus, and finally announced the coronavirus’s presence. Although Li Wenliang explored the truth and the genetics of this virus, he benefited from previous foundations of virology and prior research on the genetics of viruses, such as that concerning another pandemic, the influenza of 1918. This example shows that without prior scientific research and investigations of pandemics, Li Wenliang would not have been able to study the genetics and the underlying knowledge of the virus. It further accentuates that knowledge is an accretive process that depends on previous knowledge and foundations. In short, the past knowledge produced by other knowledge producers allows new knowledge to be produced much more easily, as the producer of new knowledge can benefit from the proven knowledge of past producers while trying to uncover the mysteries of the new case at hand.
Consequently, it can be said that the relation between past and new knowledge is delicate. In certain cases, such as schema theory, past knowledge can influence our present knowledge, knowingly or unknowingly, whereas in other cases, such as Marxism, new knowledge can break away entirely from past knowledge to gain legitimacy and acceptance while developing a critique of the former. In the Natural Sciences, while this pattern of severance from past knowledge appears in cases such as the Heliocentric model, in others, such as the Covid-19 crisis, past knowledge can create the foundation of present, or new, knowledge, making its production impossible without benefiting from past knowledge. Hence, it is more logical to evaluate the relation between past and present knowledge on a case-by-case basis in order to determine its impact more precisely, and this approach can be applied to the Arts, where certain artists may benefit from past movements when producing new works, whereas others, such as Jackson Pollock, may break away from past knowledge and tradition to create authentic knowledge.
0 notes
deek9091 · 4 years
Text
Byte Reviews
Byte Reviews 2021
Byte evaluations2021 can be taken into consideration truly each different invisible aligner organisation to maximum clients, but it's far ways more than genuinely that. It makes use of the equal procedures as maximum tooth aligner groups, but, it presents pace to the combination thru making use of superior generation referred to as HyperByte™ (you can look at greater approximately it in our Hyper byte review). So, what’s the real distinction amongst byte and different agencies? There are at the least 4 things that separate byte from the opposition and that still make it one of the incredible invisible aligners providers for us, which incorporates their method for identifying your aligners’ form through utilizing your whole face, having Dr Jon Marashi as co-founder and leader government officer of byte who has helped many Hollywood celebrities, their HyperByte extended enamel alignment machine that cuts remedy time in half of, and their “byte for lifestyles guarantee™” that guarantees you continuously have the smile you preference for the relaxation of your existence.
We’ll go through these points and more in our Byte review, so keep reading to learn everything about the brand.
What Are Byte Aligners?
Byte is a company that specializes in at-home teeth straightening with clear, nearly invisible, custom-fit aligners. Over time, your teeth move into the proper position, improving your smile.
According to Byte, its straightening treatment process is faster than its competitors’, and it is much faster than traditional braces. Instead of wearing braces for years, Byte’s aligners get results in about four months.
As well as greater affordability and faster results, Byte also removes the need for office visits. Dental professionals oversee your treatment plan, but you never have to visit an orthodontist in person.
Byte Product Information
Byte is designed to be used by people who suffer from mild to moderate teeth misalignment. Much like braces, this product works by slowly guiding the teeth into their correct positions and allowing them to align the way they are supposed to. In contrast to braces, however, there is no need to go in and see your orthodontist for tightening or check-ups; instead, you work through a set of aligners that you change out regularly.
Everything is done online. You start by ordering an impression kit, which lets you bite down and create a mould of your teeth. You then send this back to the company, where an orthodontist will work with your mould to create the correct set of aligners for you.
Once your mould has been reviewed and your plan has been created, you will be sent a 3-D model that you can examine to see exactly what your teeth will look like by the time your treatment is complete. If you are happy with the plan, Byte will then ship your set of aligners, and you can begin the process.
All of the aligners that Byte sends are made from BPA-free plastic, which means they will not harm your health. BPA has been shown to mimic various bodily hormones, affecting a person’s estrogen levels and more. Ensuring your plastic is free of it prevents anything harmful from entering your system while you wear the aligners, a very important point for a product you wear inside your mouth.
Apart from the fact that they are BPA-free, Byte does not disclose any other information about the aligners or where they are sourced from. This lack of transparency is a bit worrisome, but there is nothing to suggest that they are using anything other than a quality supply. While full disclosure is always preferred, many companies do try to keep these details to themselves to avoid being copied by competing brands.
Byte Customer Experience
Contact options - 5 / 5. Byte customer service can be reached every day, from 8 am to 5 pm Pacific Time, according to their Facebook page. Customers can contact them by phone or email, in addition to their social media accounts on Facebook, Twitter, and Instagram. In the process of writing this review, we took a look at the customer reviews available online. For a relatively new company, over eighty patients have reviewed Byte on Google and Facebook, with many others commenting informally. The summary of these reviews is that customers generally praise how easy the process was to complete and how helpful customer service was.
Final Considerations
Byte is one of the leading brands in terms of customer trust, with prices above the market average. Byte enjoys significant trust from public figures and customers. Even more, with unique advantages such as the HyperByte technology, these invisible aligners can be the right choice for better-looking teeth. To get started, Byte customers need to be in good dental health, and it is recommended that they have visited a dentist within the past 6-8 months before embarking on a treatment. The good news is that most treatments require less time compared to Byte’s alternatives.
0 notes
Video
college papers for sale
About me
Psychology Research Paper Topics
Academic writing is formal in tone and must not include slang, idioms, or conversational language. Olivia Valdes is the senior editor of ThoughtCo and the founder of Zen Admissions, a college admissions advising service. The main point of selecting a research topic is to go for aspects of the subject that you understood best and that interest you the most. What is nice about a teacher asking you to come up with a research topic is that you can go in any direction that you want. You can look at imagery throughout a novel, or focus on the allegory of two random but related novels. Discussing the controversial aspects helps to determine the main gaps that need to be worked on. This is important for defining the problem statement of the research and highlighting the significance of the research in question. Literature reviews can be categorized as experimental and theoretical. An experimental literature review mainly refers to surveying all the knowledge available on a particular topic and critically analyzing the gaps that need to be worked on. In this sense, it basically forms the first experiment of any research project. Such phrases are unnecessary, and they sound unnatural. You do not have to state obvious things to write a good conclusion. If you make a call to action, you need to clarify what actions you consider the most important or effective. Write a story or an anecdote in the introduction, but do not tell how the story ends. The conclusion is the last part of a research paper, yet not the easiest. The more in-depth the review, the more exact and systematic the research project will be. Therefore, it is one of the most important parts of one’s research. We decided to provide some useful tips on what to do in order to make your conclusion effective, and what to avoid. A thesis statement narrows the subject down to a specific focus of investigation. Another option is to think of a thesis statement as one full sentence that expresses your position. Do not merely repeat the ideas or arguments from your source materials. Rather, analyze those arguments and explain how they relate to your point. The body is often used for describing the different themes under the main topic by dividing them into different subheadings. This part compares and contrasts published research and identifies gaps that have not been addressed or have been unsuccessfully addressed. Theoretical reviews are discussions of relevant information on a particular subject and its critical analysis. These are mostly qualitative in nature, much like the review sections of larger articles. The review should include a comparison and contrast of different studies. An effective literature review is necessary for the entire life cycle of a research project, from defining the right research objectives to correctly interpreting and presenting the research results. This part differs slightly between reviews that are part of research articles and stand-alone narrative reviews. The section describes the main conclusions from the review of all the current research and puts forth further avenues for research. This part requires careful interpretation by the writer such that the review adds value to the existing literature.
It should bring out ideas or hypotheses that can explain any discrepancies and offer solutions to existing problems. It must summarize the entire paper and clarify its primary purpose. Any conclusion consists of a few elements, so you can select different approaches depending on your agenda and the paper type. Use transition words to ensure a smooth flow of ideas from paragraph to paragraph. Do not begin your conclusion with the phrases “in conclusion” or “in summary”.
0 notes
isessay · 6 years
Text
Improving academic writing skills
To be an outstanding writer, one should focus on improving their academic writing skills. This can be done by focusing on three key areas, as follows. First, a writer should focus on presenting clear work. Clear writing should also be the focus of every online research proposal writing service provider. Outstanding writers put much of their time into searching for and distilling information, as well as reviewing key pointers from the relevant literature. To be able to engage in clear writing, one needs to come up with an outline to guide the writing process. The outline should be a summary of the thoughts relevant to the research areas. Research proposal writing also necessitates coming up with an outline of the subtopics to be addressed in the work. To succeed in academic writing, solid planning should be a necessary to-do item before venturing into the actual writing.
Second, gaining a command of the English language is not an easy task, especially for people who are not native English speakers. However, even scholars and native English speakers take time before they can gain a mastery of the language. Writers should take time to learn the main issues relating to good grammar in order to improve their grammar expertise. This calls for intense practice through writing and the use of academic support systems. One can take advantage of a cheap essay writing service to get supplementary support when needed. Third, to stand out in academic writing, one should be consistent in the stylistic approach they use. The writer should select one writing style and stick to it throughout the paper. Consistency is key, as different styles postulate different guidelines on how to write citations, footnotes, page numbers, references and so on. Consistently adhering to one style of writing helps the narrative flow and improves its appearance and reliability. It is also important to note that some disciplines require particular styles. The sciences, for instance, are known for their use of annotated bibliographies. Given the complexity that comes with this particular type of academic piece, one may turn to annotated bibliography writing services online to get more insights. A key approach to evaluating the quality of work is proofreading it before submission. Proofreading is an important practice for writers to adopt, as it helps the writer analyze and critique the flow of the narrative in order to establish whether it is accurate and precise. Original Source: https://goo.gl/uEAv5i
0 notes
atrocitycl · 7 years
Text
TWICE - “Only You” Review
(Audio—unofficial upload)
TWICE – Only You
Reviewed on May 17, 2017
Personal Message: First of all, I do wish to make this explicit: I will not be reviewing “Signal,” TWICE’s actual comeback song. Now, should readers or TWICE fans be incredibly curious, I am open to reviewing it only if a request is sent. That said, to share my thoughts on “Signal,” it sadly is a disappointing song akin to—harshly stated—the rest of TWICE’s title songs excluding “Knock Knock.” “Knock Knock” is still one of my favorite songs in general, and even through a more critical lens, I argue “Knock Knock” (as I did in my review of it) is incredibly well composed, especially for how the song adopts a very “stereotypical” pop style. But regarding the original topic, indeed I find “Like Ooh-Ahh,” “Cheer Up,” and even “TT” to all be weaker songs in general. In fact, I have already reviewed all of them minus “Cheer Up,” and thus if readers are curious about my take, I suggest reading the respective reviews. Unfortunately, while “Knock Knock”—again, in my argument—broke the chain of weaker songs, “Signal” is a return to said weaker songs. JYP’s composition of the song is, with all due respect, highly questionable. Even more critically and to perhaps overstep my boundaries, I will even go as far as to say JYP’s composition skills in general have tended to be weaker. I personally have yet to find a song composed by JYP that has stunned me, and I am afraid “Signal” might be one of his weakest works yet. (Again, this is not to say JYP is necessarily a bad composer; he definitely is very knowledgeable and experienced, and quite obviously, is far superior to me in this field. Thus, I hope readers interpret my words as a form of critical, bold critique rather than insults.)
On topic for this review, I admit I am doing things that are quite peculiar: for one, I am reviewing a song without letting at least three days pass in order to remove instances of extreme bias; secondly, I am reviewing a side-track—a song that is part of the album but is not the main title. Humorously, though, I argue “Only You” should be the title track; if this were TWICE’s comeback for the upcoming summer, it would have potentially shocked many people in terms of music quality. (To clarify, “Signal” is “marketing” well; their comeback, despite me rendering it a very poor song, is surprisingly holding number one on music charts and is garnering many sales. The issues, though, are that other music reviewers have rated it poorly and that many listeners find it a weaker song—this being indicated by an absurdly high dislike ratio on the music video.) The ladies, after all, are oftentimes critiqued for poorer vocal execution and even for having overly generic pop songs. In fact, to some extent, even I agree with these criticisms. That said, “Only You” is—once again, in my argument—the best song TWICE has ever released, both in terms of composition and vocal execution. It truly is an amazing song that showcases TWICE’s vocal skills at their peak, and the composition is one that drastically deviates from the more generic pop style that the group has claimed as their signature sound.
Finally, before getting right into the review, I will leave a disclaimer. If readers have noticed from the blog’s side information, “Only You” is currently my favorite song of all time. Without getting into the critical aspect yet, I will share that “Only You” matches my music preferences perfectly: it follows an upbeat pacing all while remaining “linear” akin to ballads; the vocals consist of both simple yet complex lines and have both rapping and singing; the instrumental focuses on a heavier bass line; and so on. The point being, even without focusing on the actual composition at hand, I admit I very much biasedly enjoy this song as its sounds and flow are what I prefer in songs, and I do hope readers realize I could easily be biased within this review—more so as I am not waiting for the excitement to die down via waiting at least three days before reviewing. However, of course, with now feeling relatively confident in analyzing songs in a critical manner—or more realistically speaking, me being a foolish and arrogant boy—I believe that I can review the song without entirely projecting my biases. Regarding the composers of this song, while I seldom do explicitly give credit, I wish to do so here: David Anthony Eames, Debbie-Jane Blackwell, and 72 are the composers of “Only You.” These are the men and women who authored this very song. Their work has brought what I argue is TWICE’s best song of all time.
For this review, while we will always look at both the strengths and weaknesses involved, I find that a more productive discussion will come from heavily focusing on why the song is, indeed from my argument, good. In particular, I wish to tune into certain strategies employed; specifically, we will discuss the role of the instrumental’s heavier bass, the diversity of the vocals and how said vocals are accommodated for the ladies’ vocal abilities, and how the song itself is structured and controls its flow for certain effects.
_______________________________________________________
Song Score: 7/10 (7.00/10 raw score) - “Above average”
- Vocals: 7/10
- Sections: 7/10 (7.13/10 raw score)
Introduction, Verse, Pre-Chorus, Chorus, Post-Chorus, Rap, Pre-Chorus, Chorus, Post-Chorus, Rap, Bridge (Chorus), Chorus, Conclusion (Post-Chorus)
1.     Introduction: 7/10
2.     Verse: 7/10
3.     Pre-Chorus: 8/10
4.     Chorus: 7/10
5.     Post-Chorus: 7/10
6.     Rap: 7/10
7.     Bridge (Chorus): 7/10
8.     Conclusion (Post-Chorus): 7/10
- Instrumental: 7/10
- Lyrics: 7/10
Only, o-o-only, only you
Only, o-o-only, only you
Only, o-o-only, only you
Only you

It's weird, I didn't imagine it at first
Just like a sugar rush
You slowly approached me and knocked on the doors of my tired heart
Is that when it started?
I thought you were a good friend
Before I knew it, I kept thinking of you
My cheeks get red and I only smile
The love's already begun

You're sweet you're just like chocolate candy
I can't hide anymore, I want to show you how big my heart has grown
Let's go

Always stay with me, don't leave me boy
Now I know with my heart I've got only you
Look into my eyes, boy, it's full of hearts
No matter what anyone says
I've got only, only you

Only, o-o-only, only you
Only, o-o-only, only you
Only, o-o-only, only you
Only you

I've got O-N-L-Y you
Don't ask why
Real love has no reasons
Call me "jelly"
Call me every day
Our secret code fluttering love mode
Baby, baby, tell me, what do you think?
Why is it so complicated, you want me too
You pretend you don't but I can tell, I can tell that's right
Now come here, kiss me, butterfly

Ooh
I tremble when we brush a little
Ooh
When we touch a little my heart goes
Kung*, kung, kung, it's for sure
L-O-V-E
Let's go

Always stay with me, don't leave me boy
Now I know with my heart I've got only you
Look into my eyes, boy, it's full of hearts
No matter what anyone says
I've got only, only you

Only, o-o-only, only you
Only, o-o-only, only you
Only, o-o-only, only you
Only you
Only, o-o-only, only you
Only, o-o-only, only you
Only, o-o-only, only you
Only you

Okay, let's go
Don't make me wait any longer, I've only got one answer (Only you, always you)
Ticklish first love makes me feel like I'm gonna fly (Only you, always you)
Hold my hand tightly, never ever let go (Only you, always you)
You and me against the world, I'm not afraid (Only you, always you)

Always stay with me, don't leave me boy
Now I know with my heart I've got only you
Look into my eyes, boy, it's full of hearts
No matter what anyone says
I've got only, only you

Always stay with me, don't leave me boy
Now I know with my heart I've got only you
Look into my eyes, boy, it's full of hearts
No matter what anyone says
I've got only, only you

Only, o-o-only, only you
Only, o-o-only, only you
Only, o-o-only, only you
Only you
Only, o-o-only, only you
Only, o-o-only, only you
Only, o-o-only, only you
Only you
*Akin to how this same word was used in TWICE's "Knock Knock," "kung" is the Korean linguistic representation of a "bam"-like noise. English equivalents in this case would be, for example, "pit-pat" or that one's heart goes "thump thump."
_______________________________________________________
Analysis: Now that all of the analytical work is actually done, I confess: I am surprised the song as a whole rates only at a seven. Not that that rating is bad at all; indeed, if anything, getting an “above average” rating nowadays is almost a feat when considering how incredibly critical I now am of pop songs. That said, I personally anticipated “Only You” scoring an eight. Critically, of course, this song does not come close to making it that far, though this is not to discredit how dazzling of a song it still is.
Onto the review: as there are too many aspects to this song to discuss, many of which we will not even have time for in this review (as, again, I am focusing on being concise for reviews rather than sharing an entire dissection of a song as I used to in the past), we will stick to what I established earlier in the review. Already, one key element to “Only You” is its instrumental and, more specifically, how it cleverly and effectively manipulates its heavier bass line. For example, with the bass line serving as the distinctive foundation and even sound of the instrumental, it allows the song to easily make core transitions. Let us take a look at the rap sections as these provide an excellent background to why the instrumental’s heavier bass is vital. The raps notably are somewhat odd sections in terms of not just the initial transition, but also how the instrumental during these moments entirely deviates from what has already been established. After all, the first rap in particular seems to adopt a deeper and more distorted instrumental than the rest of the song—which features a lighter instrumental sound. While listeners might view this as a possible point of critique, tying back to my argument, we will actually find that how the composers manipulate the heavier bass allows this moment to be saved. The first rap’s bass line—despite its deeper sound and distortion—is still recognizable as being the same bass line that is already at play throughout the song. Thus, this bass line serves as a reference point: it is the same bass line that listeners can easily seek out, merely changed in its sound. And especially if we scope out further, we realize this bass transformation parallels the song in its entirety as the first rap section is still following the same flow, though with a changed sound. In other words, this example is merely one that showcases how the instrumental—particularly the reliance on a heavier bass line—is used by the composers in a structural sense of keeping the song organized. Of course, sonic benefits exist as well, such as how the instrumental covers the lower range of sounds especially as TWICE’s vocals focus more towards the higher end, barring perhaps the pre-choruses. We will not dive into much further detail here, however.
Regarding the next focus, the vocals and the diversity within this field are rather impressive. On a simplistic level, there is praise for how “Only You” is able to incorporate both rapping and singing, but more importantly, within these very branches there are still variances. For example, the first rap focuses more on power and flow while the second rap focuses on building up the song. Similarly, the singing ranges from stronger vocal beltings, such as at the choruses, to smooth, slower and rhythmic singing, such as at the verses and pre-choruses. Overall, then, especially as vocal variety tends to be a key appeal in pop songs, we can already understand why “Only You” sounds great vocally as it covers a wonderful range of styles that will very likely hold listeners’ attention.
All that said, this is the minimal aspect that I wish for us to home in on. What I find more relevant for discussion is when we actually analyze the vocals in the context of TWICE’s vocal roles and abilities. Particularly, I find that “Only You” truly accommodates TWICE’s vocal strengths and weaknesses in perhaps the most effective way I have heard as of yet. First, though, we have to understand what I mean when I say “vocal roles.” As some readers are aware, groups tend to be split into “main vocal,” “lead vocal,” and “sub vocal.” The former, in short, are members who can handle more complex and difficult singing while the latter are members who tend to be weaker singers. The middle role is, quite literally, the actual middle ground between the mentioned two. While I do not wish to begin a debate regarding which member in TWICE has which role (these are “official” but I personally have my own mental list as I find this to be far more accurate than what official websites claim), I do wish to focus on how in “Only You,” the sub vocalists are finally singing in an appropriate context.
I think it first makes more sense to discuss the opposite, however: examples of when the sub vocalists did not get to sing in the right contexts. “Cheer Up” is one example in that Sana’s “shy shy shy” line, while it is now a pop cultural phenomenon, is a very weak musical line that does not showcase her vocal skills at all. Another example is how in “TT,” Momo’s and Sana’s pairing at the pre-choruses is also a moment of weaker vocals. What is troubling—and to get to my main point—is that the sub vocalists in many past songs have been relegated to lines that are not musically orientated at all per se. Finally, though, in “Only You,” the sub vocalists have lines that are much more intensive and complex in comparison to their other songs, but most importantly, “Only You”’s lines that involve the sub vocalists actually involve them singing, as harsh as that may sound. And indeed, they definitely can sing: the sub vocalists of TWICE delivered many impressive, fluid and lower-pitched lines for “Only You,” and that deserves credit as their parts are as vital as, for example, Nayeon’s and Jihyo’s lines at the choruses (and equally Jeongyeon’s beltings throughout the song). In summary, “Only You” showcases not just vocal variety, but also excellent vocal execution in the first place, and that is highly emphasized—especially when contrasting prior songs—by how the sub vocalists finally have musically-based lines rather than being restricted to catchy, filler lines as they historically have been.
Lastly, for the final point I will discuss, the song itself is brilliantly structured. Specifically, the composers’ ideas on how to control the song’s flow are very impressive. Although I do wish to discuss each and every section, realistically it would be best to discuss merely one: the pre-choruses. I will even go to the extreme of claiming that the pre-choruses in “Only You” are the best ones I have ever heard in any pop song. In terms of what actually occurs during these parts, nothing mind-blowing happens at all despite how much praise I am giving. Summarizing the pre-choruses, they follow this overall strategy: the pre-chorus in this song, first of all, is the moment when the vocals and instrumental begin to mix together—as indicated by the slower pacing, how the vocals are now lower-pitched to suit the bass line, and how the rhythm becomes a prominent feature. Structurally, though, the pre-chorus relaxes the song via slowing down and, once again, switching focus to the beats and rhythm that soon begin to accelerate and climax in the form of the choruses. Again, nothing is unique at all for the pre-choruses, and yet I am very surprised. What I actually find delightful is how the pre-choruses utilize two different types of build-up: the pre-choruses relax the song, but equally, within the same space and time, the sections soon build the song back into a heightened state in a very natural, seamless manner due to how the aural component meshes both vocals and instrumental. Typically only one type is used. Using “TT” once again as an example, in that song we find that the pre-choruses focus on hyping up the song—but, quite clearly, the pre-choruses in that song do not do both. Another example in mind is the recent review of VARSITY’s “U R My Only One.” In that song, the pre-choruses “downgrade” as their form of building up the song, and though they admittedly do heighten the song back, akin to “Only You,” they do so in a very rigid, explicit manner while “Only You” is able to do this without even attracting attention to this very strategy.
And, while I seldom critique lyrics as of late—perhaps, once again in a harsh manner, due to the fact that the lyrics of many recently reviewed songs are all average—“Only You” has solid lyrics. The fact that the verse and raps are not repeats, and that even the pre-choruses use different details rather than typically just recycling the same section, helps bring the lyrics up in rating. Furthermore, though the following does not account at all for the score as it is the variety of details I care for, I hope readers also enjoyed the song’s lyrics in a romantic sense. The plot is quite endearing and sweet, and in the overarching view of “Only You,” the lyrics holding up well serves as the final, pretty wrapping to the song.
Praises aside, however, I still find it crucial that we discuss the faults of the song. The main fault I have is how the last rap and final portion of the song begin to create a tedious sound. With the last rap, we find it alternating between rapping and chanting, and though this makes sense on a structural level, chanting in songs is always at risk due to how it can easily create a mundane sound if not balanced appropriately with some other factor. Now, the composers did attempt that very act of balancing via using the chorus as the bridge—thus, it counters the chanting’s stale phrases through a very delicate, tuneful section. However, the somewhat comical aspect is that this solution now creates another problem: a follow-up chorus—the regular one—is used to get the song moving again. The issue here, of course, is that given the nature of the choruses, in that they are lengthier and rely on a linear flow, having two back-to-back choruses drags the song out. While ultimately these faults are not significant in the main view of the song, it still is noteworthy that “Only You” starts off impressive but begins to languish as the song runs. It is always desirable for songs to get better the further they go—such, after all, is the iconic structure of ballads—and thus the fact that “Only You” does the opposite can definitely be concerning.
All in all, “Only You” is an amazing song. It personally is my favorite song of all time, and I expect it will stay that way for many months if not at least a year. Additionally, regardless of what my favorites are, I personally argue it is TWICE’s best song. Should fans ever desire to mute the mouths of those who are critiquing the group’s music without being critical (emphasis: without being critical; it is fine to critique TWICE’s music as I do because I am being critical and respectful), the song should be self-explanatory and can easily be used to defend the ladies. What upsets me the most is how this song is not the title and comeback track; it possesses a “summer sound” that would fit the upcoming months, and with its style of infusing ballad-like elements while containing the usual pop sound of TWICE, it truly astounds me that “Signal” was chosen over this song. (But given that “Signal” is composed and produced by JYP, their CEO, it perhaps makes sense why that song is privileged as the title song.)
As for final remarks, TWICE is a group that does have musical potential. Understandably, TWICE has historically been a weaker group musically speaking as, I argue, all of their title tracks are poor excluding “Knock Knock.” But, especially with extreme hate spewed at the ladies on a personal level, I wish to remind readers—whether fans or non-fans of TWICE—that criticism should only be kept in an art-based context. In other words, their dances, music videos, and songs can be and should be critiqued. What is not ethical is when listeners decide to attack the ladies personally (especially as some might feel that, understandably, it is “unfair” that TWICE is quite popular despite being relatively musically weak). Never should the ladies themselves be attacked unless that genuinely is the case, though the likelihood is already near impossible. (For example, if Jihyo were found to be an abusive leader who constantly physically beats the members, then of course she could be personally critiqued. But of course, this is a silly fake example and Jihyo would never do such a thing; the point is that TWICE can only be critiqued musically and not personally as, from my understanding, they truly are upright women who attempt to always do as much good for the world as they can.)
Overall, while I personally will forever remain critical of all of their title tracks barring “Knock Knock,” TWICE is a group I would consider myself a fan of. They do have a lot of musical potential—this we hear in “1 to 10” or in “Only You,” for example—and I hope we will hear more of a musical TWICE and less of a “generic pop group” TWICE. It is a tough situation, however, as sheer popularity appeal via catchy songs is what made TWICE get this far (and, pessimistically said, music quality in the pop scene does not get as much respect as it should, as fans care more about the aesthetic pleasures instead—which, again, is understandable). But indeed, I remain optimistic that TWICE will head towards a more musically-orientated side soon. And I remain optimistic and mostly delusional that Jihyo will one day get down on one knee and propose to me. This, though, is probably irrelevant to the review.
_______________________________________________________
Horrible jokes aside, thank you for reading this review, whether in full or in part. I appreciate it and hope, most importantly, that it sparks some deeper thinking about music or K-Pop for readers. That is why I write reviews; I do not write for the popularity and attention (after all, writing music reviews is a horrible way to get attention), but because I hope I can begin discussions and actively engage readers in being more critical toward K-Pop.
For the next review, I have received an indirect request: IU’s “Palette.” I claim it is an indirect request as a dear friend is the one who personally asked me. Thus, in some ways, it still is a request even if not sent in officially via the blog. Nonetheless, that will be the next song we focus on. Afterwards, I have a mix of Critical Discussion posts and artists that have yet to be reviewed at all (though, interestingly, IU is an artist I have yet to review) that we will cover. Until then, “[a]lways stay with me, don't leave me boy/girl”—because, quite obviously, I am a rather clingy boy. Jokes aside, look forward to IU’s “Palette.”
0 notes
maxwellyjordan · 4 years
Text
Book review: In two new scholarly works, a shared critique of the court’s approach to church and state
Steven Green is the Fred H. Paulus professor of law and director of the Center for Religion, Law & Democracy at Willamette University College of Law.
It is rare for two books on the same subject – and written by some of the nation’s leading scholars – to be published not only in the same year but practically in the same month. But that’s what has happened with two new works on the First Amendment’s religion clauses. (Possibly, there is less of a coincidence than meets the eye in that they share the same publisher: Oxford University Press.) Howard Gillman, who co-wrote The Religion Clauses: The Case for Separating Church and State (which is due out Sept. 1), is the chancellor of the University of California, Irvine, a political scientist by training and the author of several well-received books about the Constitution and the Supreme Court. His co-author, Erwin Chemerinsky, is the dean of U.C. Berkeley law school and is a leading expert on constitutional law through his scholarship, advocacy and commentary. And Jack Rakove, who wrote the recently released Beyond Belief, Beyond Conscience: The Radical Significance of the Free Exercise of Religion, is a Pulitzer Prize-winning scholar of American intellectual history. When such outstanding scholars write new books about the history and meaning of the religion clauses, they deserve our attention.
The Religion Clauses and Beyond Belief, Beyond Conscience are very different books. The Religion Clauses is chiefly a critique of the current Supreme Court’s approach to adjudicating church-state conflicts, focusing primarily on case analysis and doctrine. Beyond Belief, Beyond Conscience is chiefly a historical analysis of the rise of the American concept of religious freedom. At the same time, the books share some common elements. The first is their timeliness, with both books filled with a sense of urgency. They both, in different ways, caution that the court’s interpretations of the religion clauses over the last decade or so have gone astray, historically and jurisprudentially. A second shared element is their critique – so well deserved – of the justices’ (mis)use of history in adjudicating church-state conflicts. And finally, both books conclude with pleas for rediscovering the positive values inherent in church-state separation.
Of the two books, The Religion Clauses assumes a more argumentative style, possibly reflecting Dean Chemerinsky’s legal expertise. At the risk of understating its comprehensive coverage, The Religion Clauses is in a sense a legal primer on the religion clauses. That is intended to be a compliment; one could read The Religion Clauses with little prior understanding about the concerns that motivated members of the founding generation, the modern development of church-state jurisprudence, or the competing perspectives that drive the debate today, and then afterwards, have a solid appreciation for those various issues.
Both the legally trained and lay reader should find The Religion Clauses approachable. It is well organized, with a lucid writing style and a careful and precise argumentation. It begins by discussing the recent transformation in church-state jurisprudence through decisions like Burwell v. Hobby Lobby Stores, Town of Greece v. Galloway, Trinity Lutheran Church v. Comer, Masterpiece Cakeshop v. Colorado Civil Rights Commission and American Legion v. American Humanist Association, an approach that has subordinated church-state separation to free exercise values. The book assigns responsibility for the jurisprudential shift to the conservative make-up of the court and to the growing rebuke of church-state separation by conservative jurists, politicians and members of the academy.
It then segues to a chapter on the historical context behind the adoption of the religion clauses. While the authors do not argue that a consensus existed among the Founders over the principle of church-state separationism, they do demonstrate the strength of the disestablishing impulse, the dynamic evolution in attitudes toward church-state relations, and the agreement on the idea of a secular identity for the new government. The authors argue strongly for a non-originalist approach to history, one that considers only the broad principles that have vitality and relevance today.
The next two chapters consider, in turn, the establishment clause and the free exercise clause. Here the book becomes more doctrinal, examining the leading court decisions for each clause through a balanced analysis, with each chapter then concluding with the authors’ perspective. The former chapter ends with a plea for a separationist approach, arguing that it promotes pluralism, inclusion and the independence of religion, as well as the integrity of secular government. A “separationist approach is consistent with advancing [a] more inclusive vision of religious liberty in our constitutional system,” Gillman and Chemerinsky write. The latter chapter concludes by advocating against granting exemptions from neutral laws and regulations for religiously motivated actors. Here, the book criticizes the Hobby Lobby line of decisions that have expanded free exercise protections at the expense of salutary policies intended to provide greater access and equality. The final chapter offers a final plea for efficacy of church-state separation, insisting that separationism does not promote hostility toward religion but, again, fosters equality and inclusion.
One critique of the book — or, at least, an observation about what is otherwise a balanced and convincing presentation — is in how it addresses the issue of the distinctiveness of religion. In the establishment clause chapter, the authors highlight religion’s distinctive quality in arguing against government financial aid or permitting government’s use of religious symbols and rhetoric. In the free exercise chapter, however, the authors downplay religion’s distinctiveness in their argument against providing religious exemptions to neutral regulations, insisting that exemptions “give[] undue favoritism to people with religious convictions over people with similarly strong secular convictions.” That tension can be reconciled, in part, by distinguishing between exemptions/accommodations that are relatively cost-free and chiefly enhance pluralism, and those that transfer costs to third parties and chiefly reinforce positions of privilege, a distinction that the authors acknowledge.
Beyond Belief, Beyond Conscience is a different book, structurally and thematically. Whereas The Religion Clauses reserves only one chapter to discussing the relevant history, with the remainder focusing on case analysis and argument, Beyond Belief, Beyond Conscience has the opposite organization. It is chiefly a history of the development of the idea of religious liberty, with the final chapter being reserved for comparing that history against the modern court’s application of that principle. The book’s subtitle – The Radical Significance of the Free Exercise of Religion – is somewhat misleading. While the primary focus of the study is to trace the evolution of conscience rights from a position of persecution, to toleration, and then to equality, Professor Rakove does not limit his coverage solely to free exercise matters but spends considerable space analyzing the disestablishment impulse as well. He sees the two impulses as interrelated and mutually reinforcing: “Disestablishment promoted free exercise, and free exercise made disestablishment superfluous.”
For a mid-sized book (186 pages), Beyond Belief, Beyond Conscience is quite comprehensive in its coverage. For much of the book, Rakove uses James Madison and Thomas Jefferson as our guides – an approach that will likely draw the ire of conservatives who argue that the attention (and credit) given to these two Founders is overdone. But Rakove demonstrates that that attention and credit is well deserved. The book traces the gradual acceptance of the ideas of religious toleration and free religious conscience, crediting both Enlightenment and pietistic traditions. Those impulses, combined with the ever-expanding religious pluralism of colonial British America, made religious equality necessary, although Rakove shows that it wasn’t necessarily inevitable. Madison and Jefferson then built on that foundation, with both men championing the idea of free exercise as the most visible manifestation of freedom of conscience and as a means – though not the only means – of countering the corruptions and inequities of religious establishments. Yet Rakove shows that Madison and Jefferson did not advocate for an unrestrained free exercise right or abandon their suspicions of religious authority and religious factionalism. For Madison, religious excesses were to be controlled through the marketplace of competition.
After a chapter on the 19th century — when Madisonian-Jeffersonian notions of religious equality and government disengagement were subordinated to a de facto Protestant “moral establishment” — Beyond Belief, Beyond Conscience transitions to a final chapter on modern free exercise development and current controversies. The first half of that chapter is relatively conventional; the second half is where Rakove uses his authority as a leading historian to critique the more recent shifts in free exercise jurisprudence. Like Gillman and Chemerinsky, Rakove questions the historical basis and normative efficacy of an emboldened, exemption-driven application of free exercise: “[T]he [modern] advocates of accommodation and exemption are not concerned with the issues that troubled our ancestors. It is not the freedom to believe or disbelieve or to disseminate religious truths that bothers them.” He also laments, although less prominently than in The Religion Clauses, the declining appreciation for church-state separation among members of the court and the academy. Rakove also rejects an originalist approach, and he has a subtle but clear message for those conservatives who use it in their quest to remake church-state jurisprudence: You have your history wrong – this is not Madison’s idea of free exercise.
Though different in their coverage and approach, The Religion Clauses and Beyond Belief, Beyond Conscience serve as complementary bookends to the current debate over church-state matters. They are worth having on anyone’s bookshelf, but only if they are read.
The post Book review: In two new scholarly works, a shared critique of the court’s approach to church and state appeared first on SCOTUSblog.
0 notes