#confounding variables in statistics
Background: During adolescence, bullying often has sexual content. Involvement in bullying as a bully, victim or both has been associated with a range of negative health outcomes. Transgender youth appear to face elevated rates of bullying in comparison to their mainstream peers. However, the involvement of transgender youth as perpetrators of bullying remains unclear in the recent literature. Objective: The aim of this study was to compare involvement in bullying between transgender and mainstream youth and among middle and late adolescents in a general population sample. Methods: Our study included 139,829 students in total, divided between a comprehensive school sample and an upper secondary education sample. Associations between gender identity and involvement in bullying were first studied using cross-tabulations with chi-square statistics. Logistic regression was used to study multivariate associations. Gender identity was used as the independent variable, with cisgender as the reference category. Subjection to and perpetration of bullying were each entered in turn as the dependent variable. Demographic factors, family characteristics, internalizing symptoms, externalizing behaviors, and involvement in bullying in the other role were added as confounding factors. Odds ratios (OR) with 95% confidence intervals (95% CI) are given. The limit for statistical significance was set at p < 0.001. Results: Both experiences of being bullied and perpetrating bullying were more commonly reported by transgender youth than by cisgender youth. Among transgender youth, all involvement in bullying was more commonly reported by non-binary youth than by those identifying with the opposite sex. Logistic regression revealed that non-binary identity was most strongly associated with involvement in bullying, followed by opposite sex identity and cisgender identity. Transgender identities were also more strongly associated with perpetration of bullying than subjection to bullying.
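For readers unfamiliar with the mechanics behind "odds ratios (OR) with 95% confidence intervals", here is a minimal sketch of how an OR and its Wald 95% CI come out of a 2x2 table. The counts below are invented for illustration; they are not this study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 40/200 transgender vs 500/5000 cisgender youth report bullying.
or_, lo, hi = odds_ratio_ci(40, 160, 500, 4500)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # → 2.25 1.57 3.22
```

Since the CI excludes 1, a table like this would count as evidence of an association at the 5% level (the study above used a stricter p < 0.001 threshold).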
--
Secondly, we found that transgender identity was generally associated with perpetrating bullying and that the association was stronger than that of transgender identity and being bullied. To the best of our knowledge, past research has not examined perpetration of bullying among gender minority youth, thus rendering comparisons to prior research impossible. In a study by Dank et al. (2014), however, it was reported that the few transgender young people in their study were the ones most likely to perpetrate dating violence among their sample.
--
Thirdly, non-binary identity was more strongly associated with involvement in bullying than opposite sex identity. Past research has found elevated rates of being subjected to bullying among youth (Lowry et al., 2020; van Beusekom et al., 2020) and transgender youth (Gower et al., 2018) who perceive themselves as more gender non-conforming (i.e., masculine females or feminine males) than youth with no such perception. Non-binary identifying youth in particular may display gender expression that does not conform to either masculine or feminine roles, and this may make them vulnerable to being bullied, either due to simply being different from the mainstream or as a result of heterosexist control. We found, however, that not only being bullied but also engaging in bullying was even more common among non-binary youth (whose perception of their gender conforms to both sexes, neither sex, or varies) than among opposite sex identifying youth.
==
"Non-binary" is just a new way to bully other people.
Trying to make people refer to you in stupid, language defying ways, like "schglie/schglem," at the threat of being called a bigot, when you're a completely average man or woman, is the modern day equivalent of making the kids in the playground bow and call you "the god of the jungle gym" at the threat of being beaten up, when you can't actually do that flip from hanging by your knees into a standing position.
When, unlike the playground, society has given you the power to actually carry out that threat, you're not "marginalized," you're the ruling class. (What's that "power plus" arithmetic again?)
Whenever you enter the room, people tense up. And you love it. It's the whole point. Gay and transsexual people just want to blend in, but "nonbinary" is about the opposite. Everybody pays attention to you and you steal the oxygen out of the room. You say it's about "being kind," but you and everyone else knows what you are, which means it's really about being in control, but from a false position of vulnerability.
So it's no wonder it attracts narcissists and bullies. Especially when it's an effort-free way of being celebrated for a month - and on and off the rest of the year for an endless parade of "awareness" days - for being completely unremarkable and average.
Not being a tired stereotype is not an achievement, least of all one that needs to be celebrated.
covid-safer-hotties · 7 months ago
This groundbreaking study finds that you are about 50% more likely (an odds ratio of 1.5) to cause or take part in a traffic accident following a covid infection. This increase is across the board, with no effect from vaccination or long covid status. The authors say this is likely due to neurological changes in anyone post covid infection. Mask up. Don't let a virus rewire your brain.
Abstract Objective This study evaluated the association between acute COVID-19 cases and the number of car crashes with varying COVID-19 vaccination rates, Long COVID rates, and COVID-19 mitigation strategies.
Background The ongoing SARS-CoV-2 pandemic has led to significant concern over long-term post-infection sequelae, especially in the Neurologic domain. Long COVID symptoms, including cognitive impairments, could potentially impact activities requiring high cognitive function, such as driving. Despite various potential impacts on driving skills and the general prevalence of Long COVID, the specific effects on driving capabilities remain understudied.
Design/Methods This study utilized a Poisson regression model to analyze data from 2020-2022, comparing aggregate car crash records and COVID-19 statistics. This model adjusted for population and included binary variables for specific months to account for stay-at-home orders. The correlation between acute COVID-19 cases and car crashes was investigated across seven states, considering vaccination rates and COVID-19 mitigation measures as potential confounders.
Results Findings indicate an association between acute COVID-19 rates and increased car crashes, with an OR of 1.5 (95% CI 1.23-1.26). The analysis did not find a protective effect of vaccination against increased crash risks, contrary to previous assumptions. The OR of car crashes associated with COVID-19 was comparable to driving under the influence of alcohol at legal limits or driving with a seizure disorder.
Conclusions The study suggests that acute COVID-19, regardless of Long COVID status, is linked to an increased risk of car crashes, presumably due to neurologic changes caused by SARS-CoV-2. These findings underscore the need for further research into the neuropsychological impacts of COVID-19. Further studies are recommended to explore the causality and mechanisms behind these findings and to evaluate the implications for public safety in other critical operational tasks. Finally, neurologists dealing with post-COVID patients should remember that they may have an obligation to report medically impaired drivers.
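The methods describe a Poisson regression adjusted for population. The core idea, a per-capita crash rate and a rate ratio between high- and low-COVID months, can be sketched like this (all numbers are invented for illustration; this is not the study's model or data):

```python
# Hypothetical monthly aggregates for one state (invented numbers).
# Each entry: (car crashes, population, high-COVID month?)
months = [
    (900, 1_000_000, False), (950, 1_000_000, False), (880, 1_000_000, False),
    (1400, 1_000_000, True), (1350, 1_000_000, True), (1450, 1_000_000, True),
]

def rate(rows):
    """Crashes per person-month across a set of months."""
    crashes = sum(c for c, _, _ in rows)
    person_months = sum(p for _, p, _ in rows)
    return crashes / person_months

high = [m for m in months if m[2]]
low = [m for m in months if not m[2]]

# The rate ratio is what exp(beta) estimates in a Poisson model with a
# population offset: log(E[crashes]) = log(population) + alpha + beta * high_covid
rr = rate(high) / rate(low)
print(round(rr, 2))  # → 1.54
```

A real Poisson regression additionally handles multiple states, month indicators for stay-at-home orders, and the confounders mentioned above, but it is estimating this same kind of population-adjusted rate ratio.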
evidence-based-activism · 6 months ago
Is it true more trans teens attempted suicide after ‘anti trans’ laws passed?
Hello! I am guessing this is in reference to this recent study [1], which made this exact claim. In short, this study has substantial issues that weaken its conclusion severely. These flaws, paired with stronger contradictory evidence, suggest that no, these laws are not connected to increased suicide among transgender teens.
First, the contradicting evidence:
While not in reference to these specific laws, other studies have attempted to determine if there is an increase in suicide following laws aimed at restricting transgender healthcare.
This independent report for the UK NHS [2] found "the data do not support the claim that there has been a large rise in suicide in young gender dysphoria patients" and "the claims that have been placed in the public domain do not meet basic standards for statistical evidence". (They – correctly – also criticize the way "this issue has been discussed on social media", which we will get into later.)
The Cass review [3] was an extensive literature review that found evidence did not support the belief that "hormone treatment reduces the elevated risk of death by suicide in [the transgender youth] population".
This Finnish study on adolescents and young adults [4] took a slightly different approach, attempting to determine if access to "gender reassignment" reduced suicide rates. First, they found "clinical gender dysphoria does not appear to be predictive of all-cause nor suicide mortality when psychiatric treatment history is accounted for." They also found "medical gender reassignment does not have an impact on suicide risk".
This similar Swedish study on adults [5] found that "sex reassignment, although alleviating gender dysphoria, may not suffice as treatment for transsexualism" as higher "mortality, suicidal behaviour, and psychiatric morbidity" persists after "sex reassignment".
This Dutch study on adults [6] also found that "the suicide risk in transgender people ... seems to occur during every stage of transitioning".
Importantly, these studies are all reasonably strong and well-designed. The centralized nature of these countries' health data allowed for large-scale, representative studies to be performed. The English study [2] reviewed data directly from the NHS, the Cass review [3] was an extensive and thorough literature review (common myths about the report are debunked here [7]), and the final three studies [4-6] were all long-term (multiple decade), nation-wide cohort studies.
This evidence provides a strong indication that medical gender transition treatments do not reduce suicide risk in transgender populations.
---
So, what are the issues with the US study?
First, this study [1] was not representative. The participants were recruited through a "non-probability cross-sectional online survey" via "targeted advertisements on social media" between "2018 and 2022".
Already there are substantial issues:
This study is explicitly noted to be non-representative; drawing conclusions about transgender youth based solely on this study is irresponsible.
The participants were recruited via social media, which past research [8] has shown to be correlated with suicide attempts. Notably, the causality of this relationship is irrelevant in this case, because it is still evidence of a significant confounding variable. (Confounding variables are the primary reason why you cannot determine causality from correlational relationships like the one presented in [1].)
This study overlaps with the COVID-19 pandemic. They claim to have "considered the potential impact of the ... pandemic" and found "minimal evidence" of any impact. This conclusion contradicts substantial amounts of past research including a systematic review and meta-analysis of youth suicide during the pandemic [9], a systematic review and meta-analysis of youth depression and anxiety [10], and an analysis of national suicide data for US youth [11].
Their analysis of the impact of the pandemic used "population-adjusted ... death counts by year as a proxy for the overall impact ... in a given state for a given time period". It's likely this was one of the only data sources they had to evaluate the impact of the pandemic, but that doesn't change the fact that this proxy neglects to consider the isolation, financial stress, and social media use among youth and their families that likely drove the decline in their mental health. (See studies [12-15] for some introductory evidence of this.)
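To make the confounding point concrete, here is a toy simulation (purely illustrative; the variables stand in for nothing specific): a confounder Z drives both X and Y, neither of which affects the other, yet X and Y appear strongly correlated until you hold Z fixed.

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation, computed by hand to stay stdlib-only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

n = 20000
# Binary confounder Z raises both X and Y; X has NO direct effect on Y at all.
z = [random.choice([0.0, 3.0]) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]
y = [zi + random.gauss(0, 1) for zi in z]

crude = corr(x, y)  # strong spurious association, roughly 0.7
stratum = [(a, b) for a, b, zi in zip(x, y, z) if zi == 0.0]
adjusted = corr([a for a, _ in stratum], [b for _, b in stratum])  # near 0
print(round(crude, 2), round(adjusted, 2))
```

This is exactly why a correlational design like [1] cannot establish causation: an unmeasured Z (here, plausibly social media exposure) can manufacture the entire observed association.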
Other issues:
They considered only self-report of suicide attempts and serious suicidal thoughts, not actual measurements of suicides or suicide attempts. This adds an additional possibility of response bias. (Notably, this was not a concern for the studies I discussed above, which show no relationship between similar laws and suicide, as they evaluate actual medical report data.)
They found a statistically significant effect for suicide attempts but not for "seriously considering suicide". This is a strange finding, as, if we assume the results are accurate, it suggests that suicide attempts increased following the laws without any corresponding increase in suicidal thoughts. (This will be relevant later, so keep it in mind.)
They don't appear to report an actual incidence rate, which is a significant oversight considering the topic.
The substantial reduction in effect size when considering the total number of reported suicide attempts versus the number of people who reported at least one suicide attempt suggests that the attempts are clustered within a small group of participants.
They found no evidence that the introduction or enactment of additional "anti-transgender laws" had any impact on transgender youth suicide. Again, this is inconsistent with their overall conclusions; if it is the legal restrictions that cause an increase in suicide attempts, then you would expect an increase in restrictions to also increase attempts. (Although not necessarily in a linear fashion.)
All in all, this study makes claims that are much stronger than the evidence they provide to support them. Other, stronger studies contradict the assertion that these sorts of laws increase suicide among transgender youth. (Which is, I should stress, a good thing for the transgender community.)
(And I should also note here that my personal opinion on the morality of these laws is specific to each law/law category. I'm purely presenting the empirical research concerning the relationship between these laws and suicide in this post.)
---
The major confounding factor: social contagion of suicide.
It is, at this point, all but settled evidence that "suicidal behavior is 'contagious' in that it can be transmitted, directly or indirectly, from one person to another" [16]. This is extraordinarily important because it impacts the way we (i.e., society) should talk about and depict suicide. For an unrelated example, this review [17] concludes "portrayals of suicide in entertainment media may increase suicides and attempted suicide in the population". A specific example of this was the increase in suicide rates and admissions among youth, following the release of the "13 Reasons Why" television show [18-19].
And this is well-known by LGBT organizations. In 2017, the Movement Advancement Project (a pro-LGBT think tank), the Johnson Family Foundation (a pro-LGBT organization), and the American Foundation for Suicide Prevention co-authored the second edition of the report "Talking About Suicide & LGBT Populations" [20], which was also endorsed by GLSEN, GLAAD, The Trevor Project, the Transgender Law Center, and the Human Rights Campaign (among others).
Recommendations in this report include [emphasis mine]:
DO encourage discussion about suicide prevention strategies.
DO emphasize the vital importance of resilience
DON’T attribute a suicide death to a single factor (such as bullying or discrimination) or say that a specific anti-LGBT law or policy will “cause” suicide.
DON’T risk spreading false information by repeating unsubstantiated rumors or speculation about suicide deaths or why they occurred.
DON’T talk about suicide “epidemics” or suicide rates for LGBT people. (Remember that sexual orientation and gender identity are not recorded at the time of death, so we do not have data on suicide rates or deaths among LGBT people. In addition, presenting suicide as a trend or a widespread occurrence (for example, tallying suicide deaths that occur in proximity to an external event) can encourage vulnerable individuals to see themselves as part of a larger story, which may elevate their suicide risk.)
DON’T use social media or e-blasts to announce news of suicide deaths, speculate about reasons for a suicide death, focus on personal details about the person who died, or describe the means of death.
DON’T idealize those who have died by suicide or create an aura of celebrity around them.
(They also note that suicide rates cannot be determined by looking at suicide attempts, which is relevant to the above study.)
Please take a moment to consider the extent to which public officials, organizations, and figures are completely disregarding these recommendations in reference to the transgender community. Beyond the anecdotal evidence people involved with or aware of the community are exposed to, this English study [21], this Indian study [22], and this American study [23] all find "poor adherence to responsible reporting guidelines" for the transgender (or "LGBTQ+") community.
Further, as I indicated earlier, the English study [2] directly addresses this sort of irresponsible rhetoric. Failure to comply with responsible reporting guidelines is causing direct harm to the transgender community.
And how does this apply to the first study [1]? It adds in an additional confounding factor: the rhetoric surrounding these laws and their impact on suicide in the transgender community. It suggests that at least some of the observed suicide attempts were precipitated by the social contagion of suicide.
This theory would also explain some of the inconsistencies in the study. That is, it is possible that there was already a subsection of suicidal transgender youth who, once increasingly exposed to this irresponsible, social-contagion-inducing rhetoric, reported (more frequent) suicide attempts. This would account for:
The lack of increase in suicidal thoughts (i.e., those affected were already suicidal).
Why the effect size for the "at least one attempt" measure is so much lower than the "total attempts" effect size (i.e., the reported increase is concentrated within a small group of people).
Why there is no dose-response relationship between the number of laws and suicide attempts (i.e., it is not the laws so much as the rhetoric surrounding them that is having such a negative impact).
---
Conclusion
Ultimately, there is strong evidence from other countries indicating that similar "anti-trans" laws are not associated with suicide rates. The new American study suggesting otherwise is remarkably weak, and may actually be providing evidence for the social contagion theory of suicide.
Most importantly, the current rhetoric surrounding suicide in the trans community is incredibly dangerous and does a great disservice to all transgender people.
(And again, my personal opinion on the morality of these laws is specific to each law/law category. I'm purely presenting the empirical research concerning the relationship between these laws and suicide in this post.)
I hope this helps, Anon!
References under the cut:
1. Lee, W. Y., Hobbs, J. N., Hobaica, S., DeChants, J. P., Price, M. N., & Nath, R. (2024). State-level anti-transgender laws increase past-year suicide attempts among transgender and non-binary young people in the USA. Nature Human Behaviour, 1-11.
2. Appleby, Louis. “Review of Suicides and Gender Dysphoria at the Tavistock and Portman NHS Foundation Trust: Independent Report.” Department of Health & Social Care, 19 July 2024, https://www.gov.uk/government/publications/review-of-suicides-and-gender-dysphoria-at-the-tavistock-and-portman-nhs-foundation-trust/review-of-suicides-and-gender-dysphoria-at-the-tavistock-and-portman-nhs-foundation-trust-independent-report.
3. Cass, H. (2024). Independent review of gender identity services for children and young people.
4. Ruuska, S. M., Tuisku, K., Holttinen, T., & Kaltiala, R. (2024). All-cause and suicide mortalities among adolescents and young adults who contacted specialised gender identity services in Finland in 1996–2019: a register study. BMJ Ment Health, 27(1).
5. Dhejne, C., Lichtenstein, P., Boman, M., Johansson, A. L., Långström, N., & Landén, M. (2011). Long-term follow-up of transsexual persons undergoing sex reassignment surgery: cohort study in Sweden. PloS one, 6(2), e16885.
6. Wiepjes, C. M., den Heijer, M., Bremmer, M. A., Nota, N. M., de Blok, C. J., Coumou, B. J., & Steensma, T. D. (2020). Trends in suicide death risk in transgender people: results from the Amsterdam Cohort of Gender Dysphoria study (1972–2017). Acta Psychiatrica Scandinavica, 141(6), 486-491.
7. Final report – FAQs. (n.d.). The Cass Review. https://cass.independent-review.uk/home/publications/final-report/final-report-faqs/
8. Sedgwick, R., Epstein, S., Dutta, R., & Ougrin, D. (2019). Social media, internet use and suicide attempts in adolescents. Current opinion in psychiatry, 32(6), 534-541.
9. Madigan, S., Korczak, D. J., Vaillancourt, T., Racine, N., Hopkins, W. G., Pador, P., ... & Neville, R. D. (2023). Comparison of paediatric emergency department visits for attempted suicide, self-harm, and suicidal ideation before and during the COVID-19 pandemic: a systematic review and meta-analysis. The Lancet Psychiatry, 10(5), 342-351.
10. Madigan, S., Racine, N., Vaillancourt, T., Korczak, D. J., Hewitt, J. M., Pador, P., ... & Neville, R. D. (2023). Changes in depression and anxiety among children and adolescents from before to during the COVID-19 pandemic: a systematic review and meta-analysis. JAMA pediatrics.
11. Bridge, J. A., Ruch, D. A., Sheftall, A. H., Hahm, H. C., O’Keefe, V. M., Fontanella, C. A., ... & Horowitz, L. M. (2023). Youth suicide during the first year of the COVID-19 pandemic. Pediatrics, 151(3).
12. Felfe, C., Saurer, J., Schneider, P., Vornberger, J., Erhart, M., Kaman, A., & Ravens-Sieberer, U. (2023). The youth mental health crisis: Quasi-experimental evidence on the role of school closures. Science Advances, 9(33), eadh4030.
13. Jones, S. E., Ethier, K. A., Hertz, M., et al. (2022). Mental Health, Suicidality, and Connectedness Among High School Students During the COVID-19 Pandemic — Adolescent Behaviors and Experiences Survey, United States, January–June 2021. MMWR Suppl, 71(Suppl-3), 16–21. DOI: http://dx.doi.org/10.15585/mmwr.su7103a3.
14. Kalenkoski, C. M., & Pabilonia, S. W. (2024). Teen social interactions and well-being during the COVID-19 pandemic. Review of Economics of the Household, 1-48.
15. Adegboye, D., Williams, F., Collishaw, S., Shelton, K., Langley, K., Hobson, C., ... & van Goozen, S. (2021). Understanding why the COVID‐19 pandemic‐related lockdown increases mental health difficulties in vulnerable young children. JCPP advances, 1(1), e12005.
16. Forum on Global Violence Prevention; Board on Global Health; Institute of Medicine; National Research Council. (2013). Contagion of Violence: Workshop Summary. Washington (DC): National Academies Press (US). II.4, THE CONTAGION OF SUICIDAL BEHAVIOR. Available from: https://www.ncbi.nlm.nih.gov/books/NBK207262/
17. Niederkrotenthaler, T., Kirchner, S., Till, B., Sinyor, M., Tran, U. S., Pirkis, J., & Spittal, M. J. (2021). Systematic review and meta-analyses of suicidal outcomes following fictional portrayals of suicide and suicide attempt in entertainment media. EClinicalMedicine, 36.
18. Cooper Jr, M. T., Bard, D., Wallace, R., Gillaspy, S., & Deleon, S. (2018). Suicide attempt admissions from a single children's hospital before and after the introduction of Netflix series 13 Reasons Why. Journal of Adolescent Health, 63(6), 688-693.
19. Bridge, J. A., Greenhouse, J. B., Ruch, D., Stevens, J., Ackerman, J., Sheftall, A. H., ... & Campo, J. V. (2020). Association between the release of Netflix’s 13 Reasons Why and suicide rates in the United States: An interrupted time series analysis. Journal of the American Academy of Child & Adolescent Psychiatry, 59(2), 236-243.
20. Movement Advancement Project, Johnson Family Foundation, and American Foundation for Suicide Prevention. (2017). "Talking About Suicide & LGBT Populations." 2nd edition. MAP's Talking About LGBT Issues Series. https://www.lgbtmap.org/talking-about-suicide-and-lgbt-populations
21. Bolzern, J. E., Mnyama, N. L., & McMillan, D. (2019). Responsible journalism, imitative suicide, and transgender populations: A systematic review of UK newspapers. Journal of gay & lesbian mental health, 23(3), 276-288.
22. Kar, S. K., Menon, V., Mukherjee, S., Bascarane, S., Sharma, G., Pattnaik, J. I., ... & Agarwal, V. (2023). Suicide reporting of LGBTQI+ population in India: An analysis of online media reports of the past decade. Journal of Public Health, 45(1), e95-e103.
23. Kirchner, S., Till, B., Laido, Z., & Niederkrotenthaler, T. (2024). Suicide-related media reporting with a focus on sexual and gender minority identities: A content analysis of the quality of reporting. Crisis: The Journal of Crisis Intervention and Suicide Prevention, 45(5), 339–347. https://doi.org/10.1027/0227-5910/a000956
rad4learning · 8 days ago
Causal inference with observational data
I am not someone who believes that you can never make causal claims from observational data. However, if you are going to do it, you should do it properly. That means you need to present your causal model, making sure to include in that model other key pathways that might connect to both your proposed cause and effect. Hopefully development of such a model has involved multiple domain experts. In this context, we would likely have different proposed causal models, with different assumptions, depending on whether the authors adopted a gender-critical/trans-critical framework or a trans-supporting framework. You then need to complete your statistical analysis based on the assumptions in your causal model.
I highly discourage people from taking what is a descriptive or predictive statement and then making causal inferences from it without explicitly outlining their causal model assumptions and how the statistical analysis is compatible with those. It is very easy to step wrong here. Some people will read this and go "ah yes! I know! confounding variables!" That is true, but failing to include variables is not the only mistake you can make here. You can also include variables you shouldn't, introducing collider bias, and e.g. end up concluding that mothers smoking during pregnancy improves health outcomes (you can manage to "achieve" this result for either the mother or the baby by including the wrong variables) or that smoking is protective against COVID.
People are often less familiar with collider bias as a concept, so here are some reading ideas targeted at researchers. If there's interest I can write up something more general-audience friendly.
Paper showing that preterm birth can act as a source of collider bias in the relationship between smoking and maternal hypertension: https://pmc.ncbi.nlm.nih.gov/articles/PMC10312115/
(I will also note here for anyone who reads it & doesn't already know, that Missing at Random (MAR) is different to Missing Completely at Random (MCAR) - the authors are using MICE because they believe that not everyone is equally likely to say "don't know" about income. The terminology around missingness in data can be unintuitive.)
Introduction to the use of DAGs for variable selection in causal inference: https://karger.com/pps/article/90/5/289/829019/More-Adjustment-Is-Not-Always-Better-How-Directed
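And for anyone who prefers code to prose, here's a toy simulation of collider bias (purely illustrative; the variable names are placeholders, not a real analysis). X and Y are generated completely independently, but selecting on a variable they both cause manufactures a negative association between them, which is the mechanism behind the smoking examples above:

```python
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation, computed by hand to stay stdlib-only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

n = 20000
x = [random.gauss(0, 1) for _ in range(n)]  # e.g. "smoking"
y = [random.gauss(0, 1) for _ in range(n)]  # e.g. "illness", independent of x
# A collider caused by both, e.g. "hospitalization":
collider = [a + b + random.gauss(0, 1) for a, b in zip(x, y)]

overall = corr(x, y)  # near 0: there is no real association
# "Adjusting" by selecting on the collider (e.g. studying only hospitalized
# people) induces a clearly negative correlation between x and y:
selected = [(a, b) for a, b, c in zip(x, y, collider) if c > 1.0]
biased = corr([a for a, _ in selected], [b for _, b in selected])
print(round(overall, 2), round(biased, 2))
```

Intuitively: among the selected group, if x is low, y (or the noise) must have been high to clear the threshold, so x and y end up inversely related within the sample even though they are independent in the population.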
spacetimewithstuartgary · 6 months ago
Charting the Exceptional, Unexpected Heat of 2023 and 2024
In 2024, global temperatures for June through August were the hottest on record, narrowly topping the same period in 2023. The exceptional heat extended throughout other seasons, too, with global temperatures breaking records for 15 straight months from June 2023 until August 2024, according to scientists from NASA’s Goddard Institute for Space Studies (GISS).
Although this spell of record heat fits within a long-term warming trend driven by human activity—primarily greenhouse gas emissions—the intensity of the heat, which reached a crescendo in the last half of 2023, surprised leading climate scientists. In a commentary in Nature, Gavin Schmidt, the director of GISS, used words like “humbling” and “confounding” to explain just how far temperatures overshot expectations during that period.
The charts on this page show how much global temperatures in 2023 and 2024 diverged from expectations based on NASA’s temperature record. Roughly a year later, Schmidt and other climatologists are still trying to understand why.
“Warming in 2023 was head-and-shoulders above any other year, and 2024 will be as well,” Schmidt said. “I wish I knew why, but I don’t. We’re still in the process of assessing what happened and if we are seeing a shift in how the climate system operates.”
Setting Expectations
Earth’s air and ocean temperatures during a given year typically reflect a combination of long-term trends, such as those associated with climate change, and shorter-term influences, such as volcanic activity, solar activity, and the state of the ocean.
In late 2022, as he has done each year since 2016, Schmidt used a statistical model to project global temperatures for the coming year. La Niña—which cools sea surface temperatures in the tropical Pacific—was present for the first part of 2023 and should have taken the edge off global temperatures. Schmidt calculated that average 2023 global temperatures would reach about 1.22 degrees Celsius above the baseline, putting it in the top three or four warmest years, but that it would not be a record-breaking year. Scientists at the UK Met Office, Berkeley Earth, and Carbon Brief made similar assessments using a variety of methods.
This chart shows Schmidt’s expectation for how much monthly temperatures from January 2023 to August 2024 would differ from NASA’s 1951-1980 baseline (also known as an anomaly). The expectation (represented as the dashed line in the chart) was based on an equation that calculates global average temperature based on the most recent 20-year rate of warming (about 0.25°C per decade) and NOAA’s sea surface temperature measurements from the tropical Pacific, accounting for a three-month delay for these temperatures to affect the global average. The shaded area shows the range of variability (plus or minus two standard deviations).
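As a rough illustration of the kind of equation described here (not GISS's actual fitted model), the projection can be sketched as a linear trend plus a lagged ENSO term. The 0.25°C-per-decade trend comes from the article; the base anomaly and ENSO coefficient below are invented placeholders.

```python
# Sketch of a statistical temperature projection:
# expected anomaly = linear trend + scaled ENSO index from 3 months earlier.
TREND_PER_YEAR = 0.025  # ~0.25 °C per decade, per the article
BASE_2023 = 1.0         # assumed trend-line anomaly at the start of 2023 (illustrative)
ENSO_COEFF = 0.07       # assumed °C of global response per unit of ENSO index (illustrative)

def expected_anomaly(months_since_jan_2023, enso_index_3mo_ago):
    trend = BASE_2023 + TREND_PER_YEAR * (months_since_jan_2023 / 12)
    return trend + ENSO_COEFF * enso_index_3mo_ago

# Early 2023: lagged La Nina conditions (negative index) pull the expectation down.
print(round(expected_anomaly(0, -1.0), 2))   # → 0.93
# Early 2024: a lagged El Nino (positive index) pushes it up.
print(round(expected_anomaly(12, 1.5), 2))   # → 1.13
```

The point of such a model is exactly what Schmidt describes: the trend term captures long-term warming, while the lagged ENSO term captures most of the expected year-to-year wiggle, so a large residual (as in late 2023) signals something outside the usual drivers.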
“More complex global climate models are helpful to predict long-term warming, but statistical models like these help us project year-to-year variability, which is often dominated by El Niño and La Niña events,” said Zeke Hausfather, a climate scientist at the University of California, Berkeley. Hausfather helps produce the Berkeley Earth global temperature record and also generates annual predictions of global temperature changes based on those data.
Schmidt’s statistical model—which successfully predicted the global average temperature every year since 2016—underestimated the exceptional heat in 2023, as did the methods used by Hausfather and other climatologists. Schmidt expected global temperature anomalies to peak in February or March 2024 as a lagged response to the additional warming from El Niño. Instead, the anomalous heat emerged well before El Niño had peaked. And the heat came with unexpected intensity—first in the North Atlantic Ocean and then virtually everywhere.
“In September, the record was broken by an absolutely astonishing 0.5 degrees Celsius,” Schmidt said. “That has not happened before in the GISS record.”
The chart above shows how global temperatures calculated from January 2023 to August 2024 differed from NASA’s baseline (1951–1980). The previous record temperature anomalies for each month—set in 2016 and 2020—are indicated by the red dashed line. Starting in June 2023, temperatures exceeded previous records by 0.3 to 0.5°C every month. Although temperature anomalies in 2024 were closer to past anomalies, they continued to break records through August 2024. The global average temperature in September 2024 was 1.26°C above NASA’s baseline—lower than September 2023 but still 0.3°C above any September in the record prior to 2023.
To calculate Earth’s global average temperature changes, NASA scientists analyze data from tens of thousands of meteorological stations on land, plus thousands of instruments on ships and buoys on the ocean surface. The GISS team analyzes this information using methods that account for the varied spacing of temperature stations around the globe and for urban heating effects that could skew the calculations.
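One standard way to handle unevenly spaced observations when forming a global mean is to grid the anomalies and weight each grid cell by the cosine of its latitude, so that densely sampled regions do not dominate the average. The sketch below illustrates that general idea only; it is not the actual GISS algorithm.

```python
import math

def global_mean(cells):
    """Area-weight grid-cell anomalies: cells is a list of
    (latitude_deg, anomaly_degC); weight = cos(latitude)."""
    weights = [math.cos(math.radians(lat)) for lat, _ in cells]
    total = sum(w * anom for w, (_, anom) in zip(weights, cells))
    return total / sum(weights)

# Tropics, mid-latitudes, Arctic: the Arctic cell covers less area,
# so its large anomaly counts for less in the global mean.
cells = [(0.0, 1.2), (45.0, 0.8), (80.0, 2.0)]
print(round(global_mean(cells), 3))
```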
Investigating Possible Contributors
Since May 2024, Schmidt has been compiling research about possible contributors to the unexpected warmth, including changes in greenhouse gas emissions, incoming radiation from the Sun, airborne particles called aerosols, and cloud cover, as well as the impact of the 2022 Hunga Tonga-Hunga Ha’apai volcanic eruption. However, none of these factors provide what Schmidt and other scientists consider a convincing explanation for the unusual heat in 2023.
Atmospheric greenhouse gas levels have continued to rise, but Schmidt estimates that the extra load since 2022 only accounted for additional warming of about 0.02°C. The Sun was nearing peak activity in 2023, but its roughly 11-year cycle is well measured and not enough to explain the temperature surge either.
Major volcanic eruptions, such as El Chichón in 1982 and Pinatubo in 1991, have caused brief periods of global cooling in the past by lofting aerosols into the stratosphere. And research published in 2024 indicates the eruption in Tonga had a net cooling effect in 2022 and 2023. “If that’s the case, there’s even more warming in the system that needs to be explained,” Schmidt said.
Another possible contributor is reduced air pollution. A research team led by Tianle Yuan, an atmospheric research scientist at NASA’s Goddard Space Flight Center, has found a significant drop in aerosol pollution from shipping since 2020. The drop coincides with new international regulations on sulfur content in shipping fuels and with sporadic drops in shipping due to the coronavirus pandemic.
Sulfur aerosol emissions promote the formation of bright clouds that reflect incoming sunlight back to space and have a net cooling effect. Reducing this pollution has the opposite effect: clouds are less likely to form, which could warm the climate. Although scientists, including Yuan, generally agree that the drop in sulfur emissions likely caused a net warming in 2023, the scientific community continues to debate the precise size of the effect.
“All of these factors explain, perhaps, a tenth of a degree in warming,” Schmidt said. “Even after taking all plausible explanations into account, the divergence between expected and observed annual mean temperatures in 2023 remains near 0.2°C—roughly the gap between the previous and current annual record.”
Grappling With Uncertainty
Both Hausfather and Schmidt expressed concern that these unexpected temperature changes could signal a shift in how the climate system functions. It could also be some combination of climate variability and a change in the system, Schmidt said. “It doesn’t have to be an either-or.”
One of the biggest uncertainties in the climate system is how aerosols affect cloud formation, which in turn affects the amount of radiation reflected back to space. However, one challenge for scientists trying to piece together what happened in 2023 is the lack of updated global aerosol emissions data. “Reliable assessments of aerosol emissions depend on networks of mostly volunteer-driven efforts, and it could be a year or more before the full data from 2023 are available,” Schmidt said.
NASA’s PACE (Plankton, Aerosol, Cloud, ocean Ecosystem) satellite, which launched in February 2024, could help shed light on these uncertainties. The satellite will help scientists make a global assessment of the composition of various aerosol particles in the atmosphere. PACE data may also help scientists understand cloud properties and how aerosols influence cloud formation, which is essential to creating accurate climate models.
Schmidt and Hausfather invite scientists to discuss research related to the contributors of the 2023 heat at a session they are convening at the fall meeting of the American Geophysical Union in Washington, D.C., on December 9–13, 2024.
NASA Earth Observatory map and charts by Michala Garrison, based on data from the NASA Goddard Institute for Space Studies. Climate spiral visualization by Mark SubbaRao, NASA Goddard Space Flight Center/Scientific Visualization Studio. Story by Emily Cassidy.
2 notes · View notes
mass-convergence · 11 months ago
Text
Okay …. I kind of wonder with the prevalence of ADHD diagnoses in the past few years, if that statistic of 5% of people who have ADHD graduating college is a little low?
Doesn’t change the fact that universities very much do have issues with accommodating people with ADHD or really any disability.
However … after googling the hell out of it - I have yet to find the actual published study that says “5% of people diagnosed with ADHD graduate from college”
The ONLY source I have found is this news article https://wcfcourier.com/news/local/for-adhd-students-transition-to-college-is-tough/article_950df28a-ac05-5b50-8768-2ed34aa05376.html sorry it’s behind a paywall
And that doesn’t cite the study only says “according to a 2008 study in the Journal of Learning”
So one single study.
In 2008.
There have been some other studies that have found lower graduation rates of college students diagnosed with ADHD but not as extreme as 5% (okay those you’re going to have to look up yourself - I’m like tired).
The thing is, there are a lot more people being diagnosed with ADHD now - I got my diagnosis last year. I’m 30. I have a college degree and a job (this isn’t bragging - many other people I know who have ADHD are also working in my field and that requires at least a Bachelor’s if not a Master’s)
5% is a fucking grim number and I don’t think it’s actually representative of the true graduation rate. I can’t even find the stupid study that says that - only the news article - so I can’t really read what their methodology was. What was their sample size? How did they choose participants? How did they gauge educational outcomes? Were there maybe confounding variables? (Like race, socioeconomic status, etc). Didn’t see if they were looking at Associates vs Bachelors or anything.
In conclusion:
I think the 5% figure is most likely bullshit or at least comes from a single study that I cannot find that was done in 2008. That’s 16 years ago.
I think that there are still serious differences in educational outcomes between people diagnosed with ADHD vs neurotypicals. Colleges are indeed very shitty about accommodations and professors very much can just not give a fuck. (Source: I was in college … it took me almost a decade to get through undergrad)
Anecdotally (so very much not scientific) … in my current field of work where you need at least a Bachelor’s degree (if not a Master’s given how fucking competitive it is) - I have met a ton of people with ADHD. None of them came from like rich families that were willing to chuck money at the school to keep their baby in college. I’ve met people with ADHD in other scientific/highly technical fields that require at least a Bachelor’s for any job. There are more people than you think.
Some people may not follow the typical college track where you graduate high school and go to college right afterward like it’s 13th grade or something. In fact some people go do other shit for a while before saying “y’know I want to get a degree”
A lot of adults now are being diagnosed with ADHD as we gain a better understanding of it and better diagnostic criteria for it. People are now self-diagnosing and also seeking a diagnosis because, due to increased awareness of what ADHD actually fucking is, they're like “ah fuck so I wasn’t a lazy idiot in my childhood like my parents and teachers told me I was?!”
If you are just starting out in life and have an ADHD diagnosis … please do not let the 5% statistic dissuade you from pursuing a college degree. It’s not hopeless. If you want to go to college then go to college.
Scientific studies can have glaring weaknesses. Which is why replication of results and peer review are super important. One study that says the graduation rate of college students with ADHD is only 5% does not make it true in the wider context. You need multiple studies.
4 notes · View notes
barry369 · 1 year ago
Text
Mastering Panel Data Analysis in STATA: A Comprehensive Guide
In the realm of statistical analysis, STATA stands out as a powerful tool for unraveling complex datasets and deriving meaningful insights. One area where STATA excels is in panel data analysis, a technique frequently employed in econometrics and social sciences to explore trends over time and across different entities. If you've ever found yourself pondering the request, "write my STATA homework," rest assured that this comprehensive guide will not only tackle a challenging question related to STATA but will also provide a detailed answer, showcasing the prowess of the xtreg command. We'll navigate the intricacies of estimating the impact of a policy change on GDP per capita, incorporating fixed effects, time effects, and a covariate named "Investment." Whether you're a student seeking homework assistance or a researcher eager to unlock the full potential of STATA, this guide is tailored for you. Let's embark on a journey to master panel data analysis in STATA together.
Understanding the Challenge

The question at hand revolves around conducting a panel data analysis using STATA, with a dataset encompassing three key variables: "Country," "Year," and "GDP_Per_Capita." The task involves estimating the impact of a policy change on GDP per capita, considering fixed effects for each country, time effects, and controlling for the potential influence of the covariate "Investment."
Constructing the Regression Model

To tackle this challenge, we turn to the versatile xtreg command in STATA. Let's break down the command and understand each component:
// Load your dataset
use "your_dataset.dta", clear

// Specify the regression model with fixed effects for countries and time effects
xtreg GDP_Per_Capita Investment i.Country##i.Year, fe

Loading the Dataset: The use command loads the dataset into STATA, replacing any existing data. Replace "your_dataset.dta" with the actual name of your dataset.
Dependent Variable: GDP_Per_Capita is the variable we want to analyze, representing the outcome of interest.
Control Variable: Investment is included to control for its potential influence on the dependent variable.
Fixed Effects and Time Effects: The i.Country##i.Year part of the command includes fixed effects for both countries and years. The double hash (##) tells Stata to include the main effect of each factor as well as the country-by-year interaction terms.
Estimation Method: The fe option specifies fixed effects estimation.
Rationale Behind the Model

Fixed Effects: Including fixed effects for countries helps control for unobserved heterogeneity at the country level. Fixed effects for years account for shocks and trends that affect all countries in a given year.
Interaction Terms: The interaction terms between countries and years allow for capturing time-varying effects that may differ across countries. This is crucial when dealing with panel data, where entities (countries, in this case) evolve over time.
Control Variable: Including "Investment" as a control variable ensures that we account for its potential impact on the dependent variable, isolating the effect of the policy change.
Practical Implications This regression model provides a robust framework for assessing the impact of a policy change on GDP per capita while considering various factors. The inclusion of fixed effects and time effects enhances the model's ability to isolate the specific effects of interest and control for confounding variables.
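For readers more comfortable outside Stata, the "within" (fixed-effects) logic that xtreg, fe implements can be sketched in a few lines of Python. This is an illustration of the estimator's core idea on made-up data, not a reproduction of Stata's internals; the variable names simply mirror the example above.

```python
from collections import defaultdict

def within_slope(rows):
    """Fixed-effects ("within") slope: demean x and y inside each
    country, then pool the demeaned data into a single OLS slope.
    rows is a list of (country, x, y) tuples."""
    by_country = defaultdict(list)
    for country, x, y in rows:
        by_country[country].append((x, y))
    num = den = 0.0
    for obs in by_country.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den

# Synthetic panel: GDP = country-specific intercept + 2.0 * Investment.
# Plain OLS would confound the intercepts with Investment; demeaning
# removes them, so the estimate is exactly 2.0 (no noise was added).
effects = {"A": 10.0, "B": 50.0, "C": 30.0}
rows = [(c, x, effects[c] + 2.0 * x)
        for c in effects for x in (1.0, 2.0, 3.0)]
print(within_slope(rows))  # → 2.0
```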
Conclusion

Mastering panel data analysis in STATA requires a combination of understanding the theoretical underpinnings and practical application of the software. By addressing a complex question related to STATA and providing a detailed answer, we've explored the nuances of constructing a regression model for panel data analysis.
Whether you're a student grappling with econometric assignments or a researcher seeking to extract valuable insights from your data, the xtreg command in STATA proves to be a valuable ally. So, the next time you find yourself thinking, "write my STATA homework," remember that STATA's capabilities extend far beyond the surface, empowering you to unravel the intricacies of your datasets and draw meaningful conclusions. Happy analyzing! #STATA #DataAnalysis #Econometrics #WriteMySTATAHomework
Tumblr media
12 notes · View notes
teamarcstechnologies · 2 years ago
Text
KEY FACTORS TO KEEP IN MIND WHEN INTERPRETING RESEARCH DATA
Tumblr media
Research Objectives: Understand the purpose and goals of the study. What is the research trying to uncover or prove?
Data Source: Evaluate the source of the data. Is it from a reliable and reputable source or dataset?
Sample Size and Selection: Assess whether the sample size is sufficient and if the sample selection process is unbiased and representative of the population of interest.
Data Collection Methods: Examine how the data was collected. Was it through surveys, experiments, observations, or other methods? The methodology can affect the validity of the findings.
Data Quality: Ensure that the data is accurate, complete, and free from errors or biases. Look for missing data or outliers.
Context: Consider the broader context in which the data was collected. External factors and trends may influence the results.
Statistical Significance: Determine whether the observed patterns or relationships in the data are statistically significant, indicating they are unlikely to occur by chance.
Correlation vs. Causation: Be cautious about inferring causation from correlation. Just because two variables are related does not mean one causes the other.
Bias and Confounding Variables: Be aware of potential biases in the data and the presence of confounding variables that can skew results.
Peer Review: Give preference to research that has undergone peer review, as it has been evaluated for quality and validity by experts in the field.
Time Frame: Consider the timeframe over which the data was collected, as trends and conditions can change over time.
Relevance: Assess the relevance of the research findings to the specific questions or problems you are trying to address.
Multiple Perspectives: Seek out multiple interpretations and viewpoints, especially when dealing with complex or controversial research.
Practical Implications: Consider how the research findings can be applied in practice or to inform decision-making.
Ethical Considerations: Ensure that the research was conducted ethically, with appropriate informed consent and adherence to ethical guidelines.
By keeping these factors in mind, you can approach the interpretation of research data with a critical and informed perspective.
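The "Bias and Confounding Variables" and "Correlation vs. Causation" points above can be made concrete with a small simulation (a hedged toy example; the data and the stratum width are arbitrary choices): a lurking variable drives both A and B, so they correlate even though neither affects the other, and the association largely disappears once the confounder is held roughly constant.

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A and B never influence each other; both depend on a lurking variable.
lurker = [random.gauss(0, 1) for _ in range(5000)]
a = [c + random.gauss(0, 1) for c in lurker]
b = [c + random.gauss(0, 1) for c in lurker]
print(round(corr(a, b), 2))        # a strong marginal correlation appears

# Hold the lurking variable roughly constant and the association fades.
stratum = [(x, y) for x, y, c in zip(a, b, lurker) if abs(c) < 0.1]
xs, ys = zip(*stratum)
print(round(corr(xs, ys), 2))      # close to zero within the stratum
```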
To know more: https://www.teamarcs.com/blog/keep-in-mind-while-interpreting-data-of-any-research/
Also read: Online research project management platform
panel management platform
Online Project Management Platform
data analytics services
2 notes · View notes
concretechrysalis · 1 year ago
Text
At my 4-year-old kid's wellness checkup today the doctor showed us his percentile scores in weight and height/stature. These two charts are normal metrics we've been given at every doctor visit since he was born, and are used to track developmental progress and screen for significant health risks and concerns, BUT this time she also gave us a third chart, which showed his BMI percentile ranking against other kids his age. His weight is in the 90th percentile while his height/stature is in the 60th, which is more or less on par with his measurements since the day he was born. His BMI, however, is in the 95th percentile, which means that he is officially obese, per statistical metrics. He's also an overall healthy kid (save some allergies) who's happy and thriving.

BMI is an inaccurate and problematic statistic even when it's applied to adults, as it was developed initially by measuring a small set of subjects who were pretty uniformly similar across confounding variables including age, gender, race, and socioeconomic standing. To apply it to young kids is absolutely an inappropriate extrapolation of the metric itself, given that the data points are so tightly clustered within his age group. But suddenly the doctor is alarmed because of a plot map and is filtering all of her medical evaluation, advice, and care through a lens that says his weight needs to be addressed.

Also, for those of you keeping track at home: the weight difference that would drop his BMI from the 95th percentile ("obese," which of course carries an entirely different set of cultural and medical associations) down to the 94th (merely "overweight") is ½ of a pound. Two 8oz glasses of water weigh ½ lb.
He averages about 32 to 40oz of water per day, which is between 2 and 2½ lbs in potential total weight shift throughout the day (depending on how often he goes pee), just from drinking water, and ignoring any other things he eats and drinks (like milk, which is super important for nutrients and is a complex carb that provides a nice, steady release of energy). The height difference between obese and overweight is about ¼ of an inch. That's the equivalent of slouching a little when they measure his height.

What a mess of bad data to throw onto a preschooler who is quite healthy in every metric that can be tracked empirically. My point is that we can create a society with a healthy mindset about body weight, and an understanding that weight is not actually a reliable indicator of health, by rejecting our broken social programming instead of passing that crappy mentality on to the next generation. Move that Overton Window back towards a healthy and rational understanding of the correlation between weight and health.
there's that post going around that's a short twit thread talking about "the obesity epidemic" as a result of economic oppression and everyone's snapping their little fingers for it but like. you guys know that if we achieved socialist utopia tomorrow there would still be fat people right. you guys know that genetics play a bigger role in that than anything else and that some people will just be fat regardless of every other factor in their life right. you guys know that's fine right.
like idk I don't think it comes from a purposefully fatphobic place and like yes it sucks a lot that the demands of capitalism deny people a lot of opportunities to cook or learn to cook and be more engaged and intentional about their food. but it has this flavor of "poverty is bad because it makes people fat," which only holds up as an argument if you agree that being fat is a terrible thing that happens to people rather than being a completely neutral reality about some people's bodies.
anyone pulling any fatphobia on this post is getting blocked on sight I'm not playing.
12K notes · View notes
nursingwriter · 21 days ago
Text
Body Mass Index (BMI) is thought to be an indicator of overall health. Quite frequently, health researchers choose to include the measure as a possible predictor of specific outcomes of interest, such as death or incidence of a particular disease. Studies that have established an association between BMI and the incidence of cardio-vascular disease, for example, are quite common in the current literature; however, such studies do not always account for other possible contributors to cardio-vascular disease, such as smoking or other latent disease. In the study, Body-Mass Index and Mortality among 1.46 Million White Adults, researchers were interested in determining the existence of an optimal BMI level. The investigators pooled 19 prospective trials, which were initially designed to specifically address cancer related inquiries, in order to arrive at a better understanding of the association of all-cause mortality and BMI. In specific, the researchers' primary interest was, "to assess the optimal BMI range and to provide stable estimates of the risks associated with being overweight, obese, and morbidly obese (BMI ≥40.0), with minimal confounding due to smoking or prevalent disease" (de Gonzalez, A.B., Hartge, P., Cerhan, J.R., et al., 2010, p. 2212). In general, prospective studies are those designed to observe a predefined population over time. In a prospective analysis, a particular outcome of interest, say death or the development of disease, is documented, and exposure or non-exposure to certain risk factors are ascertained as they occur during the course of a study (Gordis, L., 2004, p.152). Because the study design was "time-to-event" in nature and the variable "time" was itself a variable of interest, the authors of this study applied the use of a statistical model that belongs to the survival analysis branch of statistics called the proportional hazards model. 
Cox regression was used to estimate hazard ratios, which, in layman's terms, can be understood as the ratio of the event rate in the exposed group to that in a non-exposed group. The results of the study found that the rate of death from any cause was generally lowest among subjects with a BMI of 22.5 to 24.9. As one might have expected, smoking and diseases like cancer and heart disease led to increasing rates of death at each extreme of the BMI scale. The results of the analyses were similar for men and women in the study and, therefore, were combined in order to give the conclusions of the study more statistical power. The age at which a subject's BMI was measured contributed greatly to the varied risk of mortality across the BMI strata. This was true even for those subjects who belonged to the "optimal" category of 20.0 to 24.9. The researchers found that, "for a BMI of 22.5 to 24.9, the hazard ratios were higher for participants whose height and weight were ascertained at 20 to 49 years of age than for those whose height and weight were ascertained after the age of 70 years (P = 0.005 for trend across categories of age)" (de Gonzalez, A.B., Hartge, P., Cerhan, J.R., et al., 2010, p. 2214). A BMI measured at an advanced age that still fell within the optimal range was more indicative of a lifetime of good fitness. The researchers concluded that the results of their study supported previous studies conducted to establish an optimal BMI. The study found that the overweight and obesity (and possibly underweight) categories were associated with increased all-cause mortality when the subject group was limited to those who never smoked and did not have diagnosed cancer or heart disease. 
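As a rough illustration of what a hazard ratio conveys (this is not the study's actual Cox model, and the numbers below are hypothetical): for a single binary comparison with roughly constant rates, the ratio of two incidence rates plays the same role as the hazard ratio a Cox regression would estimate.

```python
def hazard_ratio(events_exposed, persontime_exposed, events_ref, persontime_ref):
    """Ratio of two incidence rates (events per person-year)."""
    rate_exposed = events_exposed / persontime_exposed
    rate_ref = events_ref / persontime_ref
    return rate_exposed / rate_ref

# Hypothetical counts: 300 deaths over 20,000 person-years in a high-BMI
# group vs. 150 deaths over 25,000 person-years in the optimal-BMI group.
print(hazard_ratio(300, 20000, 150, 25000))
```

A value above 1 means the exposed group experiences the event at a higher rate than the reference group; a full Cox model additionally adjusts for covariates such as age and smoking.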
Finally, it should be mentioned that this study included only non-Hispanic white subjects taking part in the pooled studies, because, as referenced by the authors, "the relationship between BMI and mortality may differ across racial and ethnic groups" (de Gonzalez, A.B., Hartge, P., Cerhan, J.R., et al., 2010, p. 2212). The investigators acknowledge that the results of the study are most applicable to white people who live in affluent countries.

References

de Gonzalez, A.B., Hartge, P., Cerhan, J.R., et al. (2010). Body-mass index and mortality among 1.46 million white adults. N Engl J Med, 363, 2211-2219.

Gordis, L. (2004). Epidemiology (3rd ed.). USA: Elsevier Inc.
0 notes
literaturereviewhelp · 24 days ago
Text
Describe the influence "levels of evidence" have on practice changes. Identify the most reliable level of evidence and provide an example of the type of practice change that could result from this level of evidence. Initial discussion question posts should be a minimum of 200 words and include at least two references cited using APA format.   OR   The paper will consist of 3-5 pages of content, a cover page and a reference page.  The total page count with the cover page and the reference page should be 5-7 pages. Your paper should include an introduction and conclusion that summarize the contents of the entire paper. Your paper should be written in proper APA format.  This link will take you to the section of the APUS library that can assist you with formatting: apus.campusguides.com/content.php Paper topic:       How Have We Evolved in the Management Field? Write: Based on your readings and research, develop a timeline of five key management theory/principle milestones over time.  Please address the following questions for each of the milestones you elect to include on your timeline: - Why was this milestone significant for the period of time it was created?  What was going on in the world of work that allowed the environment to know the time was right for this particular milestone? - Which theorist “fathered” the principle?  What were the signs of the time which led the theorist to develop the management concept? - What are the highlights and limitations of the theory/principle? References: At least two references are required for this assignment.  You may use your textbook as a reference in addition to the two external references.   OR Statistical significance refers to the likelihood that the results of a study are not due to chance, while clinical significance refers to the practical importance of the results in terms of their impact on patient care. 
In other words, statistical significance is a measure of the strength of the evidence, while clinical significance is a measure of the relevance of the evidence to real-world situations. Using a quantitative research article from one of the previous topics, analyze the p-value. What is it? Is it statistically significant? If your p-value is not statistically significant, what is the clinical significance? Generalizability of research depends on a variety of factors. List three factors of generalizability, and discuss whether this research article is generalizable to the nursing problem you are researching. Initial discussion question posts should be a minimum of 200 words and include at least two references cited using APA format. For previous articles, check the attachments.

The Role of Confounding Variables in Causal Inference

In research, independent variables are the ones that are controlled or varied in order to establish their correlation to the dependent variable. The dependent variable is the variable that is being observed in order to determine the effects of the independent variables. On the other hand, extraneous variables are any variables that are not of interest to the study but are capable of affecting the dependent variable. Several common strategies are used to address extraneous variables. Randomization allocates participants to groups at random, so that extraneous variables tend to balance out across groups. Matching pairs subjects so that extraneous variables are held similar between groups, which avoids confounding. A third strategy is statistical control, in which the researcher adjusts for extraneous variables statistically, for example through regression analysis (Byrnes and Dee, 2025).
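The statistical-control strategy can be sketched with a toy regression adjustment in Python. The data are simulated for illustration only (not from any cited study): the true effect of the treatment X on the outcome Y is zero, yet a naive regression finds a strong association because a confounder C drives both; partialling C out of both variables (the Frisch-Waugh route to the multiple-regression coefficient) removes it.

```python
import random

random.seed(1)
n = 4000
conf_c = [random.gauss(0, 1) for _ in range(n)]
x = [c + random.gauss(0, 1) for c in conf_c]        # treatment driven by C
y = [2.0 * c + random.gauss(0, 1) for c in conf_c]  # outcome driven by C only

def slope(xs, ys):
    """Simple OLS slope of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

print(round(slope(x, y), 2))        # naive regression: biased, near 1.0

# Statistical control: residualize X and Y on C, then regress residuals.
bx, by = slope(conf_c, x), slope(conf_c, y)
rx = [xi - bx * ci for xi, ci in zip(x, conf_c)]
ry = [yi - by * ci for yi, ci in zip(y, conf_c)]
print(round(slope(rx, ry), 2))      # adjusted estimate: near the true 0
```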
For instance, Egami and Tchetgen Tchetgen (2024) employed negative control variables (auxiliary variables) to adjust for unmeasured network confounding in causal peer effect estimation. Because these auxiliary outcome and exposure variables have no causal relation to the treatment of interest, any association between them and the treatment signals confounding, which allowed the authors to estimate accurate causal peer effects despite dependence in the network (Egami and Tchetgen Tchetgen, 2024).

References

Egami, N., & Tchetgen Tchetgen, E. J. (2024). Identification and estimation of causal peer effects using double negative controls for unmeasured network confounding. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 86(2), 487-511. https://doi.org/10.48550/arXiv.2109.01933

Byrnes, J. E., & Dee, L. E. (2025). Causal inference with observational data and unobserved confounding variables. Ecology Letters, 28(1), e70023. https://onlinelibrary.wiley.com/doi/abs/10.1111/ele.70023
0 notes
Text
Are Real-World Studies Reliable? Addressing Bias & Data Quality Issues
Tumblr media
In an era where healthcare decisions are increasingly driven by data, real-world evidence (RWE) has become a crucial tool for assessing treatment effectiveness beyond controlled medical trials. Real-world data (RWD) provides insights into how medical interventions perform across diverse patient populations in routine practice. However, concerns regarding bias, data integrity, and regulatory compliance raise an important question: How reliable are real-world studies?
The Growing Importance of RWE
Unlike traditional clinical trials, which follow strict protocols and eligibility criteria, real-world studies rely on data from electronic health records (EHRs), insurance claims, patient registries, and even wearable devices. This shift allows researchers, policymakers, and healthcare professionals to evaluate the long-term safety, cost-effectiveness, and impact of treatments in real-world settings.
Medical affairs teams use RWE to support health economics research, inform market access strategies, and guide regulatory decision-making. However, ensuring the credibility of findings requires a proactive approach to addressing biases and enhancing data quality.
Common Biases in Real-world Studies
Real-world studies are vulnerable to multiple forms of bias, which can compromise their reliability:
Selection bias: Since real-world studies do not employ randomized patient selection, certain demographic or clinical groups may be overrepresented or underrepresented, leading to skewed results.
Confounding variables: Unlike randomized controlled trials (RCTs), real-world studies often lack mechanisms to isolate variables, making it difficult to establish causality.
Reporting bias: Incomplete or inconsistent data entry in electronic health records and insurance claims databases can introduce errors that affect study conclusions.
Publication bias: Studies with favorable outcomes are more likely to be published, creating an incomplete picture of a treatment’s true effectiveness.
Mitigating Bias in RWE
Several methodologies can help mitigate bias in RWE studies:
Propensity score matching (PSM): This statistical technique matches patients with similar baseline characteristics to reduce confounding.
Inverse probability weighting (IPW): A weighting method that adjusts for imbalances in patient characteristics, improving comparability.
Sensitivity analyses: Conducting multiple analyses with different assumptions helps assess the robustness of findings.
Use of linked datasets: Combining multiple data sources (e.g., EHRs, registries, and claims data) can improve data completeness and reduce missingness-related biases1.
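As a hedged sketch of the inverse probability weighting idea listed above (toy data and made-up numbers, not a real RWE analysis): sicker patients are more likely to receive treatment, so the naive difference in mean outcomes is badly biased, while reweighting each patient by the inverse of their estimated probability of the treatment they actually received recovers the true effect.

```python
import random

random.seed(42)
data = []                                   # (sick, treated, outcome)
for _ in range(20000):
    sick = random.random() < 0.5
    p_treat = 0.8 if sick else 0.2          # confounded treatment assignment
    treated = random.random() < p_treat
    outcome = (-2.0 if sick else 0.0) + (1.0 if treated else 0.0) \
        + random.gauss(0, 1)                # true treatment effect is +1.0
    data.append((sick, treated, outcome))

# Estimate the propensity P(treated | sick) within each stratum.
def propensity(sick_value):
    flags = [t for s, t, _ in data if s == sick_value]
    return sum(flags) / len(flags)

ps = {True: propensity(True), False: propensity(False)}

def weighted_mean(treated_flag):
    """IPW-weighted mean outcome among those with the given status."""
    num = den = 0.0
    for s, t, y in data:
        if t == treated_flag:
            w = 1.0 / (ps[s] if treated_flag else 1.0 - ps[s])
            num += w * y
            den += w
    return num / den

treated_y = [y for _, t, y in data if t]
control_y = [y for _, t, y in data if not t]
naive = sum(treated_y) / len(treated_y) - sum(control_y) / len(control_y)
ipw = weighted_mean(True) - weighted_mean(False)
print(round(naive, 2), round(ipw, 2))   # naive is biased; IPW is near +1.0
```

In practice the propensities would come from a model (e.g., logistic regression on many covariates) rather than a single binary stratum, but the weighting logic is the same.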
Ensuring Data Quality in Real-world Studies
Improving the reliability of RWE requires stringent methodologies and advanced analytical tools. Strategies to enhance data quality include:
Systematic literature reviews: Conducting thorough literature reviews ensures that RWE studies incorporate all relevant data, reducing the risk of biased conclusions2.
Artificial intelligence in healthcare: AI-driven analytics can identify patterns, clean datasets, and account for missing variables, leading to more reliable insights3.
Standardized data collection: Implementing structured reporting systems across healthcare institutions ensures greater consistency and completeness in real-world data4.
Regulatory compliance: Adhering to guidelines set by regulatory bodies such as the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA) ensures that real-world studies meet rigorous scientific and ethical standards5.
The Role of Regulatory Compliance in RWE Reliability
To incorporate RWE into clinical decision-making, regulatory bodies have introduced stringent data governance frameworks. Ensuring compliance with Good Clinical Practice (GCP) and other regulations mitigates the risks associated with incomplete or biased data.
The FDA’s Real-World Evidence Framework establishes standards for assessing RWD quality, study design, and applicability in regulatory submissions [6].
The EMA emphasizes transparency and reproducibility in RWE submissions, ensuring that studies meet the highest scientific standards [7].
For example, the FDA approved Palbociclib (Ibrance) for male breast cancer based on RWE from claims and EHR data rather than traditional clinical trials [8]. This case highlights how high-quality RWE can inform regulatory decisions when RCTs are impractical.
Future Outlook: Combining RWE with Clinical Trials
While RCTs remain the gold standard for evaluating treatment efficacy, RWE plays a complementary role by providing insights into long-term safety, patient adherence, and economic impact. Integrating real-world data with traditional research methodologies can create a more comprehensive understanding of healthcare interventions.
Advancements in AI-driven analytics, real-time data integration, and digital health monitoring are improving the accuracy of RWE studies. Organizations are increasingly leveraging these technologies to refine data accuracy and reduce bias [9]. By embracing best practices in systematic literature review, regulatory compliance, and data validation, real-world studies can offer valuable insights that drive evidence-based healthcare decisions.
The Path Forward
RWE is a powerful tool in modern healthcare, but its reliability depends on addressing biases and ensuring data integrity. Implementing standardized methodologies, leveraging artificial intelligence, and adhering to regulatory standards can help unlock the full potential of real-world studies and effectively disseminate findings across the healthcare ecosystem.
References
Schneeweiss S. Learning from big health care data. N Engl J Med. 2014;370(23):2161-3.
Wang SV, Pinheiro S, Hua W, et al. STaRT-RWE: structured template for planning and reporting on the implementation of real-world evidence studies. BMJ. 2021;372:m4856.
Rajkomar A, Dean J, Kohane I. Machine learning in medicine. N Engl J Med. 2019;380(14):1347-58.
FDA. Real-world evidence: what is it and what can it tell us? [Internet]. 2023 [cited Feb 27, 2025]. Available from: Real-World Evidence
European Medicines Agency. Real-world evidence in regulatory decision-making [Internet]. 2022 [cited Feb 27, 2025]. Available from: https://www.ema.europa.eu/en/human-regulatory/post-authorisation/real-world-evidence
US FDA. Framework for FDA’s real-world evidence program [Internet]. 2018 [cited Feb 27, 2025]. Available from: https://www.fda.gov/media/120060/download
European Medicines Agency. Guideline on good pharmacovigilance practices (GVP) [Internet]. 2021 [cited Feb 27, 2025]. Available from: https://www.ema.europa.eu/en/documents/scientific-guideline/guideline-good-pharmacovigilance-practices_en.pdf
US FDA. FDA approves Ibrance for male breast cancer based on real-world evidence [Internet]. 2019 [cited Feb 27, 2025]. Available from: https://www.fda.gov/news-events/press-announcements/fda-approves-ibrance-male-breast-cancer-based-real-world-evidence
Corrigan-Curay J, Sacks L, Woodcock J. Real-world evidence and regulatory decision making: where are we now? Clin Pharmacol Ther. 2018;104(5):822-9.
Contact Us – [email protected]
Visit Us at – Turacoz Healthcare Solutions: Medical Communications Company
sidslash918 · 3 months ago
Text
Test a Logistic Regression Model
We will use the Gapminder dataset to study the relationship between the response variable Political_Stability (derived from polityscore) and the explanatory variables incomeperperson and employrate.
Code
/* Point a library at the course data folder */
Libname data '/home/u47394085/Data Analysis and Interpretation';

/* Import the Gapminder CSV */
Proc Import Datafile='/home/u47394085/Data Analysis and Interpretation/gapminder.csv'
    Out=data.Gapminder Dbms=csv Replace;
Run;

/* Count missing values per variable */
Proc Means Data=data.Gapminder NMISS;
Run;

/* Frequency table of the raw polity score */
Proc Freq Data=data.Gapminder;
    Tables polityscore;
Run;

/* Drop observations with a missing value in any analysis variable */
Data Gapminder_Linear_Model;
    Set data.Gapminder;
    If polityscore=. OR employrate=. OR incomeperperson=. Then Delete;
Run;

/* Dichotomize polityscore: scores of 6 or above are coded as stable (1) */
Data Logistic_Regression_Model;
    Set Gapminder_Linear_Model;
    If polityscore < 6 Then Political_Stability=0;
    Else Political_Stability=1;
Run;

/* Check the distribution of the binary response */
Proc Freq Data=Logistic_Regression_Model;
    Tables Political_Stability;
Run;

/* Logistic regression with backward elimination; Descending models P(Y=1) */
Proc Logistic Data=Logistic_Regression_Model Descending;
    Model Political_Stability=incomeperperson employrate / Selection=backward;
Run;
Output: [SAS output tables shown as images; the key figures are summarized in the interpretation below]
Interpretation
1) Findings: Associations Between Explanatory Variables and Response Variable
The logistic regression model was used to assess the relationship between incomeperperson (primary explanatory variable) and Political_Stability (response variable). The backward elimination method was used for variable selection. The final model retained incomeperperson while eliminating employrate, suggesting that employrate was not a significant predictor of political stability.
Statistical Results:
Income per Person (incomeperperson)
Odds Ratio: 1.000
95% Wald Confidence Interval: [1.000, 1.000]
p-value: 0.0009 (Significant)
Interpretation: Higher incomeperperson is associated with an increased likelihood of political stability (Political_Stability = 1). Because the odds ratio is estimated per one-unit (one-dollar) increase in income, it rounds to 1.000; the per-dollar effect is extremely small, suggesting minimal practical significance at that scale.
Intercept:
Estimate: -0.0200
Standard Error: 0.2062
Wald Chi-Square: 0.9597
p-value: 0.3273 (Not significant)
Interpretation: The intercept alone is not a significant predictor of political stability.
Model Fit Statistics:
Likelihood Ratio Test: Chi-square = 17.0594, p < 0.0001
Score Test: Chi-square = 14.2708, p = 0.0002
Wald Test: Chi-square = 10.9792, p = 0.0009
Interpretation: These tests confirm that at least one predictor significantly contributes to the model.
Model Performance (Discrimination Ability):
Percent Concordant: 70.6%
Percent Discordant: 29.4%
c-Statistic: 0.706
Interpretation: The model has moderate predictive ability to differentiate between stable and unstable political conditions.
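The percent concordant/discordant figures SAS reports come from comparing every (stable, unstable) pair of predicted probabilities; the c-statistic is the fraction of pairs the model ranks correctly, with ties counted half. A small pure-Python sketch with made-up probabilities (not the actual model's predictions):

```python
from itertools import product

def c_statistic(pred_pos, pred_neg):
    """c-statistic: fraction of (positive, negative) pairs where the model
    assigns the positive case a higher predicted probability; ties count 0.5."""
    conc = ties = 0
    for p, n in product(pred_pos, pred_neg):
        if p > n:
            conc += 1
        elif p == n:
            ties += 1
    return (conc + 0.5 * ties) / (len(pred_pos) * len(pred_neg))

# Hypothetical predicted probabilities for stable (1) and unstable (0) countries
stable = [0.8, 0.6, 0.55]
unstable = [0.5, 0.7, 0.3]
print(c_statistic(stable, unstable))  # 7 of 9 pairs concordant ≈ 0.778
```

A c-statistic of 0.5 is no better than chance; the 0.706 reported above therefore indicates moderate, not strong, discrimination.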
2) Hypothesis Evaluation
Hypothesis: The primary explanatory variable (incomeperperson) was hypothesized to be significantly associated with political stability (Political_Stability = 1), with the expectation that higher income levels would lead to greater political stability.
Result:
The p-value for incomeperperson (0.0009) is statistically significant, supporting the hypothesis that income per person is related to political stability.
However, the odds ratio of 1.000 indicates that while the association is statistically significant, the actual impact is negligible in practical terms.
Conclusion: The results support the hypothesis in statistical terms (since incomeperperson is significant), but not in practical terms because the effect size is minimal.
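One caveat when judging practical significance: SAS reports the odds ratio per one-unit increase of the predictor, and incomeperperson is measured in dollars, so even a nontrivial income effect can round to 1.000. A sketch of how a per-unit odds ratio rescales (the per-dollar coefficient below is a hypothetical illustration, not taken from the output above):

```python
import math

# Hypothetical per-dollar logit coefficient; the SAS output rounds the
# corresponding odds ratio to 1.000.
beta_per_dollar = 0.0002

or_per_dollar = math.exp(beta_per_dollar)         # odds ratio per $1
or_per_10k = math.exp(beta_per_dollar * 10_000)   # same effect per $10,000

print(round(or_per_dollar, 3))  # 1.0
print(round(or_per_10k, 2))     # 7.39
```

In SAS, a UNITS statement in PROC LOGISTIC (e.g., `units incomeperperson=10000;`) reports the odds ratio for a more interpretable increment directly.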
3) Confounding Assessment
To assess confounding, employrate was initially included in the model alongside incomeperperson. However, backward elimination removed employrate, indicating it did not provide additional explanatory power.
Evidence of Confounding:
If employrate were a confounder, removing it from the model would substantially change the odds ratio for incomeperperson. However, the odds ratio for incomeperperson remained at 1.000, suggesting little evidence of confounding.
The model fit statistics (AIC and SC) improved when employrate was removed, further supporting its lack of contribution.
Final Assessment of Confounding:
No strong evidence of confounding was found.
Incomeperperson remained significant after employrate was removed, and its odds ratio did not change significantly.
This suggests that incomeperperson independently explains the observed variation in political stability without strong confounding effects from employrate.
Final Conclusion:
Incomeperperson is a statistically significant predictor of political stability (p = 0.0009), but its effect size is negligible (odds ratio = 1.000).
The hypothesis that income per person is associated with political stability is statistically supported but not practically meaningful.
There is no strong evidence of confounding, as removing employrate did not meaningfully alter the results.
774 · 4 months ago
Quote
First, of active commuters who travelled longer distances at baseline, those who switched to an inactive mode of travel reported worse symptoms at follow-up than participants who continued to travel actively. Though not statistically significant, this appears concordant with longitudinal research that reports a negative association between time spent driving and mental wellbeing (Martin et al., 2014), possibly resulting from negative experiential consequences of long car journeys. Second, over longer baseline distances, the consistent use of active commuting was associated with worse symptoms than stable inactive travel. Given that active commuting is more likely over shorter distances (Badland et al., 2008; Dalton et al., 2013; Ogilvie et al., 2008; Yang et al., 2015), and less likely in households of greater affluence (Goodman, 2013) and car ownership (Dalton et al., 2013; Ogilvie et al., 2008), some participants who commuted actively over longer distances may have done so out of necessity rather than choice. This hypothesis is indirectly supported by indications that employees of lower socio-economic position may travel further to work owing to the cost of living in more proximate areas (Goodman et al., 2012), suggesting a potential benefit to depressive symptomatology from housing and labour market policies that reduce the requirement for longer commutes. However, within UK Biobank, there was no apparent difference in baseline income or financial difficulty between transition categories (Appendices 2a and 2b), and no relationship between baseline commute distance and household income (asymptomatic: p = 0.434; symptomatic p = 0.366; results not shown). Although these interactions appear to conflict with previous prospective analyses (Mytton et al., 2016; Martin et al., 2014), which favour longer and more active commutes, discrepancies may reflect differences in study populations and the operationalisation of variables. 
Also of note is the finding that, relative to stable inactive commuters, the severity of depressive symptoms was greater at follow-up at longer distances among stable active commuters. Assuming that this difference in effect was not attributable to between-group confounding, such as markers of socio-economic status, this finding indicates the possibility of a tipping point or optimal threshold for attenuating the severity of depressive symptoms through active travel. The dose-response relationship between active commuting and depressive symptomatology represents an interesting avenue for future research.
Changes in the mode of travel to work and the severity of depressive symptoms: a longitudinal analysis of UK Biobank - ScienceDirect
spacetimewithstuartgary · 5 months ago
Text
Charting the Exceptional, Unexpected Heat of 2023 and 2024
In 2024, global temperatures for June through August were the hottest on record, narrowly topping the same period in 2023. The exceptional heat extended throughout other seasons, too, with global temperatures breaking records for 15 straight months from June 2023 until August 2024, according to scientists from NASA’s Goddard Institute for Space Studies (GISS).
Although this spell of record heat fits within a long-term warming trend driven by human activity—primarily greenhouse gas emissions—the intensity of the heat, which reached a crescendo in the last half of 2023, surprised leading climate scientists. In a commentary in Nature, Gavin Schmidt, the director of GISS, used words like “humbling” and “confounding” to explain just how far temperatures overshot expectations during that period.
The charts on this page show how much global temperatures in 2023 and 2024 diverged from expectations based on NASA’s temperature record. Roughly a year later, Schmidt and other climatologists are still trying to understand why.
“Warming in 2023 was head-and-shoulders above any other year, and 2024 will be as well,” Schmidt said. “I wish I knew why, but I don’t. We’re still in the process of assessing what happened and if we are seeing a shift in how the climate system operates.”
Setting Expectations
Earth’s air and ocean temperatures during a given year typically reflect a combination of long-term trends, such as those associated with climate change, and shorter-term influences, such as volcanic activity, solar activity, and the state of the ocean.
In late 2022, as he has done each year since 2016, Schmidt used a statistical model to project global temperatures for the coming year. La Niña—which cools sea surface temperatures in the tropical Pacific—was present for the first part of 2023 and should have taken the edge off global temperatures. Schmidt calculated that average 2023 global temperatures would reach about 1.22 degrees Celsius above the baseline, putting it in the top three or four warmest years, but that it would not be a record-breaking year. Scientists at the UK Met Office, Berkeley Earth, and Carbon Brief made similar assessments using a variety of methods.
[Chart: expected monthly temperature anomalies, January 2023 to August 2024, versus NASA's 1951-1980 baseline]
This chart shows Schmidt’s expectation for how much monthly temperatures from January 2023 to August 2024 would differ from NASA’s 1951-1980 baseline (also known as an anomaly). The expectation (represented as the dashed line in the chart) was based on an equation that calculates global average temperature based on the most recent 20-year rate of warming (about 0.25°C per decade) and NOAA’s sea surface temperature measurements from the tropical Pacific, accounting for a three-month delay for these temperatures to affect the global average. The shaded area shows the range of variability (plus or minus two standard deviations).
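The projection described above (a linear warming trend plus a lagged tropical-Pacific term) can be sketched in a few lines. The coefficients below are illustrative assumptions; only the roughly 0.25°C-per-decade rate and the three-month lag come from the article, and this is not the actual GISS model:

```python
# Minimal sketch of a trend-plus-ENSO statistical projection.
TREND_PER_YEAR = 0.025   # ~0.25 degC per decade, per the article
ENSO_COEFF = 0.07        # hypothetical degC of global anomaly per unit ENSO index
LAG_MONTHS = 3           # tropical Pacific leads the global mean by ~3 months

def projected_anomaly(base_anomaly, years_elapsed, enso_index_lagged):
    """Project a global temperature anomaly (degC vs. baseline) as the
    long-term trend plus a lagged ENSO contribution."""
    return (base_anomaly
            + TREND_PER_YEAR * years_elapsed
            + ENSO_COEFF * enso_index_lagged)

# e.g., a 1.0 degC baseline anomaly, two years on, with a lagged
# La Nina index of -1.0 pulling the projection slightly down:
print(round(projected_anomaly(1.0, 2, -1.0), 3))  # 0.98
```

The 2023 observations exceeded projections of this form by roughly 0.2°C, which is why Schmidt and colleagues found the year so hard to explain.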
“More complex global climate models are helpful to predict long-term warming, but statistical models like these help us project year-to-year variability, which is often dominated by El Niño and La Niña events,” said Zeke Hausfather, a climate scientist at the University of California, Berkeley. Hausfather helps produce the Berkeley Earth global temperature record and also generates annual predictions of global temperature changes based on those data.
Schmidt’s statistical model—which successfully predicted the global average temperature every year since 2016—underestimated the exceptional heat in 2023, as did the methods used by Hausfather and other climatologists. Schmidt expected global temperature anomalies to peak in February or March 2024 as a lagged response to the additional warming from El Niño. Instead, the anomalous heat emerged well before El Niño had peaked. And the heat came with unexpected intensity—first in the North Atlantic Ocean and then virtually everywhere.
“In September, the record was broken by an absolutely astonishing 0.5 degrees Celsius,” Schmidt said. “That has not happened before in the GISS record.”
[Chart: observed monthly temperature anomalies, January 2023 to August 2024, versus the previous monthly records]
The chart above shows how global temperatures calculated from January 2023 to August 2024 differed from NASA’s baseline (1951–1980). The previous record temperature anomalies for each month—set in 2016 and 2020—are indicated by the red dashed line. Starting in June 2023, temperatures exceeded previous records by 0.3 to 0.5°C every month. Although temperature anomalies in 2024 were closer to past anomalies, they continued to break records through August 2024. The global average temperature in September 2024 was 1.26°C above NASA’s baseline—lower than September 2023 but still 0.3°C above any September in the record prior to 2023.
To calculate Earth’s global average temperature changes, NASA scientists analyze data from tens of thousands of meteorological stations on land, plus thousands of instruments on ships and buoys on the ocean surface. The GISS team analyzes this information using methods that account for the varied spacing of temperature stations around the globe and for urban heating effects that could skew the calculations.
Investigating Possible Contributors
Since May 2024, Schmidt has been compiling research about possible contributors to the unexpected warmth, including changes in greenhouse gas emissions, incoming radiation from the Sun, airborne particles called aerosols, and cloud cover, as well as the impact of the 2022 Hunga Tonga-Hunga Ha’apai volcanic eruption. However, none of these factors provide what Schmidt and other scientists consider a convincing explanation for the unusual heat in 2023.
Atmospheric greenhouse gas levels have continued to rise, but Schmidt estimates that the extra load since 2022 only accounted for additional warming of about 0.02°C. The Sun was nearing peak activity in 2023, but its roughly 11-year cycle is well measured and not enough to explain the temperature surge either.
Major volcanic eruptions, such as El Chichón in 1982 and Pinatubo in 1991, have caused brief periods of global cooling in the past by lofting aerosols into the stratosphere. And research published in 2024 indicates the eruption in Tonga had a net cooling effect in 2022 and 2023. “If that’s the case, there’s even more warming in the system that needs to be explained,” Schmidt said.
Another possible contributor is reduced air pollution. A research team led by Tianle Yuan, an atmospheric research scientist at NASA’s Goddard Space Flight Center, has found a significant drop in aerosol pollution from shipping since 2020. The drop coincides with new international regulations on sulfur content in shipping fuels and with sporadic drops in shipping due to the coronavirus pandemic.
Sulfur aerosol emissions promote the formation of bright clouds that reflect incoming sunlight back to space and have a net cooling effect. Reducing this pollution has the opposite effect: clouds are less likely to form, which could warm the climate. Although scientists, including Yuan, generally agree that the drop in sulfur emissions likely caused a net warming in 2023, the scientific community continues to debate the precise size of the effect.
“All of these factors explain, perhaps, a tenth of a degree in warming,” Schmidt said. “Even after taking all plausible explanations into account, the divergence between expected and observed annual mean temperatures in 2023 remains near 0.2°C—roughly the gap between the previous and current annual record.”
Grappling With Uncertainty
Both Hausfather and Schmidt expressed concern that these unexpected temperature changes could signal a shift in how the climate system functions. It could also be some combination of climate variability and a change in the system, Schmidt said. “It doesn’t have to be an either-or.”
One of the biggest uncertainties in the climate system is how aerosols affect cloud formation, which in turn affects the amount of radiation reflected back to space. However, one challenge for scientists trying to piece together what happened in 2023 is the lack of updated global aerosol emissions data. “Reliable assessments of aerosol emissions depend on networks of mostly volunteer-driven efforts, and it could be a year or more before the full data from 2023 are available,” Schmidt said.
NASA’s PACE (Plankton, Aerosol, Cloud, ocean Ecosystem) satellite, which launched in February 2024, could help shed light on these uncertainties. The satellite will help scientists make a global assessment of the composition of various aerosol particles in the atmosphere. PACE data may also help scientists understand cloud properties and how aerosols influence cloud formation, which is essential to creating accurate climate models.
Schmidt and Hausfather invite scientists to discuss research related to the contributors of the 2023 heat at a session they are convening at the fall meeting of the American Geophysical Union in Washington, D.C., on December 9–13, 2024.
NASA Earth Observatory map and charts by Michala Garrison, based on data from the NASA Goddard Institute for Space Studies. Climate spiral visualization by Mark SubbaRao, NASA Goddard Space Flight Center/Scientific Visualization Studio. Story by Emily Cassidy.