#probabilistic genotyping
Text
I didn't want to distract from the excellent article about the woman doing great work on Wikipedia nazi articles, but it reminded me of my current crusade that I need Wikipedia gurus to help me fix.
Probabilistic genotyping is a highly contentious form of forensic science. Labs use an algorithm to say whether someone's DNA was at the scene of a crime, for mixtures that are too complex to be analyzed by hand.
Let's learn more by going to the Wikipedia page.

Oh that's good, it's less subjective. Sure defense attorneys question it, but they question all sorts of science.
Let's go to the cited article to learn more.

Well that doesn't seem like it probably supports the Wikipedia assertion. Let's skim through to the conclusion section of the article.

Well shit.
Also the article talks about how STRmix, one of these popular programs, has allowed defense attorneys to look at the algorithm. That's true! We hired an expert who was allowed to look at millions of lines of code! For 4 hours. With a pen and paper.
Junk science is all over the place in courtrooms and it's insane that allegedly objective places like Wikipedia endorse it so blindly. I am biting.
#wikipedia#forensic science#probabilistic genotyping#criminal justice#people who are good at wikipedia i need your help#criminal law
Text
Allow me to clarify:
Single source DNA is real.
Toxicology and drug testing are real.
Probabilistic genotyping accurately suggests sources are present but we have no way of knowing if the likelihood ratios are correct because the software has not been (and cannot be fully) independently verified and validated.
Firearm toolmark examiners disagree with themselves 60% of the time when they view the same casings or bullets a second time.
Fingerprints are unique but fingerprint analysis does not accurately identify fingerprints to the level suggested by analysts, being wrong roughly 1 in 20 times.
Facial recognition has huge racial biases and is not accurate.
Polygraphs, hair analysis, bite marks, and profiling are all comedically discredited. For some ungodly reason the odontology people are trying to bring bite marks back.
Bloodspatter evidence vastly overstates the amount of accurate information it can provide.
I was just at a conference where someone was developing the technology to "identify" people using nail beds and knuckle prints. When we asked them to show us the studies that indicate there is any level of uniqueness/semi-uniqueness to fingernail beds or knuckle prints they could not do so.
Bad forensic science contributed to over half of wrongful convictions overturned by the innocence project. While some of it has probative value, the level of accuracy indicated on TV (and stated in the courtrooms) is overstated to an irresponsible degree. People believe scientific testimony over video evidence and confessions. Real people are being falsely convicted because people saw wild shit on CSI, and prosecutors/forensic scientists count on that to obtain convictions.
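For context on what the "likelihood ratio" mentioned above actually is, here is a deliberately toy sketch of the combinatorial calculation behind a single-locus LR. This is not any real product's method (STRmix and similar tools model peak heights, drop-out, and drop-in); it assumes a clean two-person mixture, made-up allele frequencies, and no drop-out or drop-in.

```python
from itertools import combinations_with_replacement, product

# Toy likelihood ratio for one STR locus, assuming every allele in a
# two-person mixture was detected (no drop-out/drop-in).
# Hp: suspect + one unknown contributed; Hd: two unknowns contributed.
# Allele frequencies below are invented for illustration only.
freq = {"11": 0.20, "12": 0.30, "13": 0.25}

def genotype_prob(a, b):
    """Hardy-Weinberg probability of an unordered genotype."""
    p, q = freq[a], freq[b]
    return p * p if a == b else 2 * p * q

# Mixture shows alleles {11, 12, 13}; the suspect's genotype is (11, 12).
mixture = {"11", "12", "13"}
suspect = ("11", "12")

def unknowns_prob(n_unknowns, must_cover):
    """Total probability that n unknown contributors' genotypes stay
    inside the mixture alleles and jointly carry every allele in
    `must_cover`."""
    genos = list(combinations_with_replacement(sorted(mixture), 2))
    total = 0.0
    for combo in product(genos, repeat=n_unknowns):
        carried = {a for g in combo for a in g}
        if must_cover <= carried:
            p = 1.0
            for g in combo:
                p *= genotype_prob(*g)
            total += p
    return total

# Hp: the one unknown must supply the alleles the suspect lacks.
p_hp = unknowns_prob(1, mixture - set(suspect))
# Hd: two unknowns must jointly supply every mixture allele.
p_hd = unknowns_prob(2, mixture)
lr = p_hp / p_hd
print(f"LR = {lr:.2f}")  # >1 favors Hp at this locus
```

The contested part in real casework is not this arithmetic but the modeling choices feeding it: how many contributors, which peaks are real, and how drop-out is weighted. Different defensible choices yield different likelihood ratios, which is exactly why verification and validation matter.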
Things that work in fiction but not real life
torture getting reliable information out of people
knocking someone out to harmlessly incapacitate them for like an hour
jumping into water from staggering heights and surviving the fall completely intact
calling the police to deescalate a situation
rafting your way off a desert island
correctly profiling total strangers based on vibes
effectively operating every computer by typing and nothing else
ripping an IV out of your arm without consequences
heterosexual cowboy
Text
Fwd: Course: Online.PopGenLowCoverageWGS.Oct21-24
Begin forwarded message:

> From: [email protected]
> Subject: Course: Online.PopGenLowCoverageWGS.Oct21-24
> Date: 27 May 2024 at 05:12:07 BST
> To: [email protected]
>
> Dear all,
>
> We are excited to announce that registrations are now open for our upcoming online course (5th edition), "Population Genomic Inference from Low-Coverage Whole-Genome Sequencing Data," taking place from October 21-24.
>
> Course website: https://ift.tt/B4QF1H9
>
> Course Overview: This course offers a cost-effective approach to survey genome-wide variation at a population scale using low-coverage sequencing. Participants will learn how to navigate the challenges of high genotyping uncertainty through probabilistic frameworks, essential for accurate population genomic inference. Key topics include:
>
> - Workflows centered around genotype likelihoods for whole-genome and reduced representation studies.
> - Methods and algorithms in the ANGSD software package and related programs.
> - Best-practice guidelines for processing and analyzing low-coverage sequencing data.
>
> Target Audience: This course is ideal for researchers with experience in next-generation sequencing (NGS) (e.g., exome, RAD, pooled sequencing) who are interested in low-coverage whole-genome sequencing. It is also suitable for those seeking an introduction to the ANGSD software and its probabilistic framework.
>
> Prerequisites: Participants should have a basic background in population genomics and familiarity with NGS data. Knowledge of the UNIX-based command line and R is advantageous. Participants without prior experience in Unix and R should complete suggested tutorials beforehand, as the course will not cover these environments in detail.
>
> Course Outcomes: By the end of the course, participants will:
>
> - Understand the use of whole-genome sequencing for population genomics.
> - Recognize the challenges and statistical frameworks of low-coverage sequencing data.
> - Be able to build bioinformatic pipelines for various population genomic analyses using ANGSD/ngsTools/Atlas.
>
> Teaching Format: The course includes interactive lectures, small exercises, and longer independent practical sessions each day. Data for exercises will be provided.
>
> Best regards,
> Carlo
>
> Carlo Pecoraro, Ph.D
> Physalia-courses DIRECTOR
> [email protected]
> mobile: +49 17645230846
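The "genotype likelihoods" this course is built around can be illustrated with a minimal sketch. This is a simplified version of the classic per-site model that tools like ANGSD build on: real implementations work per read in log space, handle all four bases, and add many refinements, so treat this as a conceptual outline only.

```python
import math

def genotype_log_likelihoods(bases, quals, ref="A", alt="T"):
    """Return log10 P(reads | G) for G in (ref/ref, ref/alt, alt/alt),
    under a simple symmetric sequencing-error model."""
    genotypes = [(ref, ref), (ref, alt), (alt, alt)]
    lls = []
    for g in genotypes:
        ll = 0.0
        for b, q in zip(bases, quals):
            e = 10 ** (-q / 10)  # Phred quality -> error probability
            p = 0.0
            for allele in g:     # average over the genotype's two alleles
                p += 0.5 * ((1 - e) if b == allele else e / 3)
            ll += math.log10(p)
        lls.append(ll)
    return lls

# Four reads at a site (low coverage): two A's and two T's, Q20 each.
lls = genotype_log_likelihoods(["A", "A", "T", "T"], [20, 20, 20, 20])
best = max(range(3), key=lambda i: lls[i])
print(best)  # 1: the heterozygote fits these reads best
```

The point of working with likelihoods rather than hard genotype calls is that downstream analyses (allele frequencies, site frequency spectra) can propagate the uncertainty instead of committing to a possibly wrong call at 2-4x coverage.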
Link
The approach, described in the American Journal of Human Genetics, improved classification accuracy by 6 percent. #BioTech #science
Text
Shining a Light on Black Box Technology Used to Send People to Jail: 2021 Year in Review
If you’re accused of a crime based on an algorithm’s analysis of the evidence, you should have a right to refute the assumptions, methods, and programming of that algorithm. Building on previous wins, EFF and its allies turned the tide this year on the use of these secret programs in criminal prosecutions. One of the most common forms of forensic programs is probabilistic genotyping software. It…
View On WordPress
Text
All You Need to Know about GIAC Certified Forensic Analyst (GCFA)
A Government Accountability Office (GAO) technology assessment found such algorithms enhance forensic analysis by improving the speed and objectivity of investigations, but their usefulness is limited by the human error and cognitive bias introduced by analysts.
The tech assessment comes two months after GAO released a report describing how the forensic algorithms used by federal law enforcement work, and one week after the watchdog warned that more than a dozen agencies using facial recognition, one of three main forensic algorithms, couldn't account for which systems they use — deepening public distrust of the technology.
"Policymakers could support the development and implementation of standards and guidelines related to law enforcement testing, procurement, and use to improve consistency and reduce the risk of misuse," reads the assessment. "This could help address the challenges we identified related to human involvement, public confidence, and interpreting and communicating results."
While both the National Institute of Standards and Technology (NIST) and the Organization of Scientific Area Committees for Forensic Science (OSAC) are already developing standards for forensic algorithms, a new federal forensic oversight body may be in order. The other option is assigning an expanded role to NIST and other agencies, according to GAO.
The three main forensic algorithms are: latent print, facial recognition, and probabilistic genotyping.
Both latent print and facial recognition algorithms search large databases faster and more consistently than analysts, but poor-quality prints reduce the accuracy of the former, and human involvement introduces errors into the latter. Agencies also struggle to evaluate and procure the most accurate facial recognition algorithms and to find ones with minimal performance differences across demographic groups.
Meanwhile, probabilistic genotyping helps analysts evaluate a wider variety of DNA evidence, which may have multiple contributors or be partially degraded, and compare it with samples from persons of interest. But evaluating such algorithms' performance is complex, and there aren't any standards for interpreting or communicating the results.
Establishing standards for the appropriate use of algorithms would reduce misuse, provided data quality is addressed, and would improve confidence in their use by imposing consistency across law enforcement agencies, as well as streamlining testing and performance in the case of facial recognition, according to GAO.
The problem would be enforcing standards across all levels of government, because states and localities may not wish to comply. The cost of purchasing and maintaining algorithms could also rise, and research and testing of standards is already resource-intensive, according to GAO.
GAO also suggested agencies and Congress consider expanding algorithm training for analysts and investigators, which could reduce human error and mitigate cognitive bias. A certification process could reduce poor practice at federal and non-federal labs.
The challenge would be developing and distributing training materials and deciding which organizations are in charge of a certification process, according to GAO.
Finally, GAO recommended increasing transparency to improve trust by providing more information on algorithm testing results, data sources, use, and investigations. More comparable results might also help agencies choose better algorithms.
GAO foresees developers potentially resisting release of proprietary algorithm information, and the disclosure of data sources may create privacy risks.
The watchdog agency made clear that nothing suggested in its assessment constituted a formal recommendation, and no legal changes were proposed in the report to the House Science Committee leadership and Rep. Mark Takano, D-Calif.
Text
single-gene mendelian extensions
variations on dominance
under non-complete dominance, phenotypic frequencies tend to mirror genotypic ratios (the heterozygote is distinguishable)
1. complete dominance: heterozygote displays dominant phenotype
3:1 monohybrid phenotypic ratio
biochemical explanation: one copy of dominant allele encodes sufficient functional protein to express full dominant phenotype
2. incomplete dominance: heterozygote displays mix of two phenotypes
1:2:1 monohybrid phenotypic ratio
biochemical explanation: one copy of dominant allele encodes insufficient functional protein to express full dominant phenotype, resulting in heterozygote “dominant-lite” or “mix”
3. co-dominance: heterozygote displays both phenotypes simultaneously
1:2:1 monohybrid phenotypic ratio
biochemical explanation: each allele encodes a functional protein; thus phenotypes co-exist in one individual
allelic inheritance still corresponds to Mendelian tenets of segregation and independent assortment
post-reproduction protein expression in offspring differs according to type of dominance
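The monohybrid ratios above can be checked with a quick Monte Carlo simulation of an Aa x Aa cross: the genotypes come out ~1:2:1, which reads as ~3:1 phenotypically under complete dominance and ~1:2:1 under incomplete or co-dominance (where the heterozygote looks distinct).

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def monohybrid_offspring(n):
    """Simulate n offspring of an Aa x Aa cross; each parent passes
    one allele at random (Mendel's law of segregation)."""
    counts = {"AA": 0, "Aa": 0, "aa": 0}
    for _ in range(n):
        alleles = sorted(random.choice("Aa") + random.choice("Aa"))
        # sorted() puts "A" before "a", so "aA" and "Aa" both count as "Aa"
        counts["".join(alleles)] += 1
    return counts

c = monohybrid_offspring(40_000)
dominant_pheno = c["AA"] + c["Aa"]  # complete dominance lumps these together
print(c, dominant_pheno / c["aa"])  # genotypes ~1:2:1, phenotype ratio ~3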
multiple gene alleles and dominance series
a single gene may have more than two alleles within the relevant population
dominance is determined between only two alleles at a time, via phenotypic appearance of heterozygote (i.e., A > a from Aa individual’s display of phenotype A over phenotype a)
dominance hierarchy: collection of all allelic dominance relationships for a gene, ordered from most to least dominant allele
multiple allelic phenotypes arise from gene mutations, or spontaneous alterations of gene DNA
describing single-gene inheritance within a population
allelic frequency - proportion of all allele copies at the locus in the population that are a given allele
wild-type - common allele within population (WT, +)
mutant - rare allele within population (lowercase, - or no superscript)
monomorphic gene - only one WT allele in population
polymorphic gene - multiple WT, or common variant, alleles in population
pleiotropic lethality impacts gene inheritance probabilities
a pleiotropic gene affects phenotypes of multiple traits
i.e., mouse coat color gene encodes a protein that functions in both non-vital coat color and vital survival-related molecular pathways
gene’s dominance level differs depending on trait/pathway
EX: agouti A(y) allele is dominant for yellow coat color and recessive for lethality - one copy of A(y) (the A(y)A genotype) results in yellow coloring, and the A(y)A(y) genotype results in mouse death after birth
pleiotropic recessive lethality alters Punnett square and probabilistic ratios for offspring of A(y) parental generation
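The adjustment recessive lethality forces on the Punnett-square ratios can be made explicit: an A(y)A x A(y)A cross starts at the usual 1:2:1, but the A(y)A(y) class dies, leaving a 2:1 yellow:agouti ratio among surviving offspring.

```python
from fractions import Fraction

# Standard 1:2:1 Punnett ratios for an A(y)A x A(y)A cross.
punnett = {"AyAy": Fraction(1, 4), "AyA": Fraction(2, 4), "AA": Fraction(1, 4)}

# Recessive lethality removes the A(y)A(y) class; renormalize over survivors.
survivors = {g: p for g, p in punnett.items() if g != "AyAy"}
total = sum(survivors.values())
adjusted = {g: p / total for g, p in survivors.items()}
print(adjusted)  # {'AyA': Fraction(2, 3), 'AA': Fraction(1, 3)} -> 2:1
```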
expressivity and penetrance
given genotype does not always yield the corresponding phenotype in full, or even sometimes at all
1. expressivity: degree to which a phenotype appears in an individual
how intensely impacted the affected individual is, given the presence of corresponding genotype
i.e., “Individual has a more or less severe genetically inherited cancer”
expressivity is analyzed on a gradient of degrees, from high to low expressivity
2. penetrance: probability that a phenotype ever even appears in an individual
percent chance that, given the presence of corresponding genotype in the genome, an individual shows any trace of the phenotype in his or her lifetime
i.e., “Huntington’s is 50% or 1/2 penetrant in the human population”
penetrance is analyzed percentage-wise, on a population level, and the resulting penetrance % is applied to individual phenotypic probabilities
variation in phenotypic penetrance and expressivity on an individual level is dependent on the following:
interactions with other genes
differences in an individual’s gene expression environment - temperature, chemical, pharmacological sensitivity
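Penetrance plugs into individual risk as a simple multiplier, as described above: P(phenotype) = P(genotype) x penetrance. Using the notes' illustrative 50% figure (real penetrance values are trait-specific and, for Huntington's, age-dependent) with an affected heterozygous parent:

```python
# Hh x hh cross: half the offspring inherit the dominant allele H.
p_inherit_dominant = 0.5
# Illustrative population-level penetrance from the notes above.
penetrance = 0.5

# Probability a given child ever shows the phenotype.
p_phenotype = p_inherit_dominant * penetrance
print(p_phenotype)  # 0.25
```

Expressivity, by contrast, doesn't change this probability; it describes how strongly the phenotype manifests in the individuals who do show it.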
case study: sickle-cell allele exhibits all single-gene mendelian extensions
1. monomorphic WT phenotype and 400+ mutant alleles in population
2. pleiotropic roles in dominant malarial resistance and homozygous recessive anemic phenotypes
3. recessive lethal anemia in individuals with homozygous sickle-cell genotype
4. allelic dominance levels dependent on which trait is being analyzed
molecular level - allele A > S
cellular + environmental level - allele A < S
organismal level - allele A = S
Text
Fwd: Postdoc: OxfordU.HostPathogenGenomics
Begin forwarded message:

> From: [email protected]
> Subject: Postdoc: OxfordU.HostPathogenGenomics
> Date: 27 July 2023 at 06:55:02 BST
> To: [email protected]
>
> We are seeking to hire a post-doc to investigate paired host and pathogen genomics at the University of Oxford.
>
> Project description: The aim of the project is to use paired host-pathogen genomics to understand why patients respond differently to infections. We are sequencing host and virus genomes from large patient cohorts infected with HCV, HBV and SARS-CoV-2. These cohorts are very well characterised, and many clinical phenotypes and biomarkers are measured on all individuals. The aims of this study are (1) to identify host polymorphisms that drive evolution of the virus, (2) to identify host and virus genetic polymorphisms that independently drive differences in clinical phenotypes and measured biomarkers, and (3) to detect interactions between host and virus genetics that drive those differences.
>
> You will be educated to PhD level, or be close to completion, in a field with a quantitative component, particularly population genetics, bioinformatics, computational biology, statistics, probabilistic machine learning or computer science. This, together with relevant experience of working with large genotyping or sequencing data sets and contributing to scientific papers as evidenced by publications, is essential for this role.
>
> You will have the ability to work both independently and as part of a team to manage the day-to-day running of a research project. It is essential that you demonstrate the ability to independently plan and manage a research project, including a research budget. An understanding of the genetics of infectious disease is desirable.
>
> Instructions for the application: The application has to be made through the University of Oxford portal. The link is provided below.
>
> Application deadline: 8 August 2023. If the position is not filled we will re-post it.
>
> Type of employment: Fixed term
>
> Link for the advert: https://ift.tt/hOeBQNw
>
> For further information about the position please contact: Dr. Azim Ansari, [email protected]
>
> Azim Ansari
Text
The Creepy Genetics Behind the Golden State Killer Case
For the dozen years between 1974 and 1986, he rained down terror across the state of California. He went by many names: the East Side Rapist, the Visalia Ransacker, the Original Night Stalker, the Golden State Killer. And on Wednesday, law enforcement officials announced they think they finally have his real name: Joseph James DeAngelo. Police arrested the 72-year-old Tuesday; he’s accused of committing more than 50 rapes and 12 murders.
In the end, it wasn’t stakeouts or fingerprints or cell phone records that got him. It was a genealogy website.
Lead investigator Paul Holes, a retired Contra Costa County District Attorney inspector, told the Mercury News late Thursday night that his team used GEDmatch, a no-frills Florida-based website that pools raw genetic profiles shared publicly by their owners, to find the man believed to be one of California’s most notorious criminals. A spokeswoman for the Sacramento County District Attorney’s Office reached Friday morning would not comment or confirm the report.
GEDmatch—a reference to the data file format GEDCOM, developed by the Mormon church to share genealogical information—caters to curious folks searching for missing relatives or filling in family trees. The mostly volunteer-run platform exists “to provide DNA and genealogy tools for comparison and research services,” the site’s policy page states. Most of its tools for tracking down matches are free; users just have to register and upload copies of their raw DNA files exported from genetic testing services like 23andMe and Ancestry. These two companies don’t allow law enforcement to access their customer databases unless they get a court order. Neither 23andMe nor Ancestry was approached by investigators in this case, according to spokespeople for the companies.
But no court order would be needed to mine GEDmatch’s open-source database of more than 650,000 genetically connected profiles. Using sequence data somehow wrung from old crime scene samples, police could create a genetic profile for their suspect and upload it to the free site. As the Sacramento Bee first reported, that gave them a pool of relatives who all shared some of that incriminating genetic material. Then they could use other clues—like age and sex and place of residence—to rule out suspects. Eventually the search narrowed down to just DeAngelo. To confirm their suspicions, police staked out his Citrus Heights home and obtained his DNA from something he discarded, then ran it against multiple crime scene samples. They were a match.
“It’s fitting that today is National DNA Day,” said Anne Marie Schubert, the Sacramento district attorney, at a press conference announcing the arrest Wednesday afternoon. A champion of genetic forensics, Schubert convened a task force two years ago to re-energize the cold case with DNA technology. “We found the needle in the haystack, and it was right here in Sacramento.”
After four decades of failure, no one could blame law enforcement officials for celebrating. But how they came to suspect DeAngelo, and eventually put him in cuffs, raises troubling questions about what constitutes due process and civil liberty amid the explosive proliferation of commercial DNA testing.
DNA evidence has been a cornerstone of forensic science for decades, and rightly so. It’s way more accurate than hair or bite-mark analysis. But the routine DNA tests used by crime labs aren’t anything like what you get if you send your spit to a commercial testing company. Cops look at a panel of 20 repeating regions of the genome that don’t code for proteins. Because those repeating sections vary so much from individual to individual, they’re good for matching two samples—but only if the suspect is already in the criminal databases maintained by US law enforcement. Investigators in the Golden State Killer case had long had DNA, but there was no one in their files with which to match it. And so the case went cold.
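The discriminating power of that 20-region panel comes from the product rule: per-locus genotype frequencies multiply across independent loci. A back-of-envelope sketch, using invented placeholder frequencies (real casework uses published population databases and corrections for relatedness and substructure):

```python
# Hypothetical per-locus genotype frequencies for a 20-locus STR profile.
# These numbers are placeholders, not real population statistics.
genotype_freqs = [0.08, 0.11, 0.05, 0.09] * 5  # 20 loci, ~5-11% each

# Product rule: the chance a random unrelated person matches at every locus.
rmp = 1.0
for f in genotype_freqs:
    rmp *= f
print(f"random match probability ~ 1 in {1/rmp:.3g}")
```

Even modest per-locus frequencies compound into an astronomically small match probability, which is why a clean single-source 20-locus match is so persuasive, and why degraded or mixed samples that yield only partial profiles are so much weaker.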
Companies like 23andMe and Ancestry, on the other hand, probe the coding regions of DNA, to see what mysteries someone’s genes might be hiding—a heightened risk for cancer, or perhaps a long lost cousin. While those areas may be less prone to variation between individual samples, the number of customers who have received these tests—more than 10 million across the two services—means that detectives can triangulate an individual. Maybe even a mass murderer. Thanks to the (biological) laws of inheritance, suspected criminals don’t have to have been tested themselves for bits of their DNA to be caught up in the dragnet of a criminal fishing investigation.
So far, these leaders in the consumer DNA testing space have denied ever turning over any customer genetic data to the police. Not that they haven’t been asked for it. According to 23andMe’s self-reported data, law enforcement has requested information on a total of five American 23andMe customers. Ancestry’s published transparency reports state that it has provided some customer information—but it was in response to requests related to credit card fraud and identity theft, and none of it was genetic in nature.
Representatives from both companies said that police can’t simply upload a DNA profile they have from old crime scenes and sign up for the company’s services, allowing them to find genetic relatives and compare detailed chromosome segment data. Not because impersonating someone necessarily constitutes a violation of their terms and conditions—people occasionally use fake names and email accounts to maintain privacy—but because they don’t accept digital files. The database entrance fee is a mandatory three milliliters of saliva.
Cops have found ways around this before. In 2014, a New Orleans filmmaker named Michael Usry was arrested for the 1996 murder of an 18-year-old girl in Idaho Falls, after investigators turned up a “partial match” between semen found on the victim’s body and DNA from Usry’s father. A familial connection they found by sifting through DNA samples donated by Mormon churchgoers, including Usry’s father, for a genealogy project. Ancestry later purchased the database and made the genetic profiles (though not the names associated with them) publicly searchable. A search warrant got them to turn over the identity of the partial match.
After 33 days in police custody, a DNA test cleared Michael Usry, and Ancestry has since shuttered the database. But it highlighted two big potential problems with this kind of familial searching. On the one hand are questions of efficacy—nongovernmental databases, whether public or private, haven’t been vetted for use by law enforcement, even as they’re increasingly being used as crime-fighting tools. More worrying, though, are the privacy concerns—most people who get their DNA tested for the fun of it don’t expect their genetic code might one day be scrutinized by cops. And people who’ve never been tested certainly don’t expect their genes to turn them into suspects.
Those questions get even thornier as more and more people have their DNA tested and then liberate that information from the walled-off databases of private companies. GEDmatch’s policies don’t explicitly ask its users to contemplate the risks the wider network might incur on account of any one individual’s choices. “While the results presented on this site are intended solely for genealogical research, we are unable to guarantee that users will not find other uses,” it states. “If you find the possibility unacceptable, please remove your data from this site.” GEDmatch did not immediately respond to a request for comment.
Legal experts say investigators wouldn’t break any laws in accessing a publicly available database like GEDmatch, which exists expressly to map that connectivity. “The tension though is that any sample that gets uploaded also is providing information that could lead to relatives that either haven’t consented to have their information made public, or don’t even know it’s been done,” says Jennifer Mnookin, dean of the UCLA School of Law and a founder of its program on understanding forensic science evidence. “That’s not necessarily wrong, but it leads to a web of information that implicates a population well beyond those who made a decision themselves to be included.”
That’s the same argument that critics have made against more traditional kinds of forensic familial searches—where a partial DNA match reveals any of a suspect’s relatives already in a criminal database. But those searches are at least regulated, to different extents, by federal and state laws. In California, investigators have to get approval from a state Department of Justice committee to run a familial DNA search through a criminal database, which limits use of the technique to particularly heinous crimes. A similar search on a site like GEDmatch requires no such oversight.
In the case of the Golden State Killer, the distinction doesn’t seem that important. But what if police started using these tools for much lesser crimes? “If these techniques became widely used there’s a risk a lot of innocent people would be caught in a web of genetic suspicion and subject to heightened scrutiny,” says Mnookin. While she’s impressed with the ingenuity of the investigators in this case to track down their suspect, she can’t help but see it as a step toward a genetic surveillance state. “That’s what’s hard about this,” she says. “We don’t have a blood taint in this country. Guilt shouldn’t travel by familial association, whether your brother is a felon or an amateur genealogist.”
More Genetic Informants
Did you know your fingerprints contain traces of DNA? Cops do. But that doesn't mean they always get it right.
Here's the full story on how one Mormon ancestry project turned innocent people into crime suspects.
When regular DNA lab tests fail, more and more investigators are turning to something called probabilistic genotyping. But can code from an unknown algorithm really deliver justice?
Read more: https://www.wired.com/story/detectives-cracked-the-golden-state-killer-case-using-genetics/
from Viral News HQ https://ift.tt/2KzkYBu via Viral News HQ
Text
ABAJournal
DNA analysis software -- probabilistic genotyping -- uses an algorithm to create a likelihood ratio that compares DNA samples. But is it fair? Defense lawyers want to peek behind the curtain. https://t.co/uVpImGGLL6 via @ABAJournal. http://pic.twitter.com/ScxQGqwjZC
— ABA Journal (@ABAJournal) December 9, 2017
via Blogger http://ift.tt/2ApAzmL http://ift.tt/20qd6Z0
Text
Defense lawyers want to peek behind the curtain of probabilistic genotyping
Science under scrutiny
Built on biology, computer science and statistics, the world of probabilistic genotyping is niche. With few people who can understand… from ABA Journal Daily News - Trials & Litigation http://ift.tt/2AojeHg
Text
Prosecutors May Be Asking Too Much of DNA Testing
DNA testing has a reputation for producing concrete evidence that a person committed a crime, or at least was at a crime scene. But too much confidence in genetic testing may be resulting in innocent people being convicted or pleading guilty to Indiana crimes they didn’t commit, in order to avoid the risks of going to trial. The technology of DNA testing may not be mature enough to accurately and reliably do the job prosecutors and police are asking it to do.
The National Institute of Standards and Technology (NIST) has started a new study of some kinds of DNA analysis used by police labs in criminal prosecutions, reports ProPublica, because, “if misapplied, (they) could lead to innocent people being wrongly convicted,” according to the Institute. Its goal is to create a new set of national standards for DNA analysis.
Problems have arisen because labs are trying to identify suspects using incredibly small samples (known as “touch” DNA, which may be just a few skin cells, each about three one hundredths of a millimeter in diameter) using software to try to analyze a mix of more than one person’s genetic material.
Two methods that have come under scrutiny are high-sensitivity testing of trace amounts of DNA and the Forensic Statistic Tool, known as “probabilistic genotyping software.” In one upstate New York criminal case, different DNA testing software came up with two different results, which could mean the difference between a criminal conviction or freedom for a suspect.
John Butler, a DNA expert, will lead the study. He told ProPublica that, although scientific literature on DNA testing goes back two decades, it’s only been in the last ten years that testing has dramatically changed because of better sensitivity. In the past, testing wouldn’t be used if the sample was a mix of DNA from different people, because the process wasn’t sensitive enough to tell what DNA came from whom. Now labs are trying to do that, but the results are uncertain.
A problem with DNA testing is that genetic material can be transferred between people just by shaking hands. Labs may not understand the implications of this ease of transfer. A sample may show someone’s DNA but that may not mean the person is implicated in the crime.
Labs use different software to help interpret results. Prior studies of DNA testing didn’t involve these new systems, and labs are using software without understanding how they work. Some of the software code has been released to the public, and it has raised new questions about accuracy.
Butler compares the complexity of DNA testing of different samples to math. Identifying the DNA of a single person is basic math. If the DNA of two people are mixed (which can happen in a sexual assault case), it’s like doing algebra. But when “touch” samples (which can contain the DNA of several people) are used for testing, that’s calculus. Butler says when “touch” samples are used, it’s not “the gold standard” of certainty, which is when there’s DNA from just one or two people.
“So you’re going into a final exam on calculus, but you’ve only done homework on algebra and basic arithmetic. Are you going to pass that exam? That’s the reality of what we’re facing.” The scientific community and law enforcement aren’t the only ones facing the limits of DNA testing — it’s criminal defendants as well.
McNeely Stephenson has faithfully served the people and communities of Indiana for several years in a variety of criminal defense cases and will help you create a strategy that gives you the chance for the best possible outcome. With offices in New Albany and attorneys who are licensed to serve the Kentuckiana area, we have the knowledge, experience and resources to help. To ask a question or to set up a consultation, contact our office.
The post Prosecutors May Be Asking Too Much of DNA Testing appeared first on McNeely Stephenson.
0 notes
Text
Putting crime scene DNA analysis on trial
The National Institute of Standards and Technology announced last week that it is launching a new study of certain types of DNA analysis used in criminal prosecutions.
These methods, “if misapplied, could lead to innocent people being wrongly convicted,” according to the institute’s statement.
NIST will invite all public and private forensics labs in the U.S. to participate by testing the same set of complex DNA samples, and will compare the results, which will be published online next summer.
Its goal is to develop a new set of national standards for DNA analysis.
This study comes at a time when labs are seeking to identify suspects based on especially small samples (such as “touch” DNA, which consists of just a few skin cells), and using software to help analyze mixtures of more than one person’s genetic material.
ProPublica recently investigated the use of two such disputed methods by New York City’s crime lab: high-sensitivity testing of trace amounts of DNA, and the Forensic Statistic Tool, known technically as “probabilistic genotyping software.”
John Butler, a DNA expert and the author of several textbooks on forensic DNA testing, will be leading a team of scientists for the NIST study. He spoke to ProPublica Tuesday from the institute’s offices in Gaithersburg, Maryland.
Why this study, and why now?
Just in the past two years, there has been a huge rush to go into the probabilistic genotyping field, and people are jumping into this without really thinking about a lot of these issues: how sensitivity impacts what they’re doing, how “transfer” and “persistence” of DNA can impact their results, and what they’re doing in terms of the way that they set up their propositions that go into the likelihood ratios of their probabilistic genotyping programs.
The goal of this study is not to do a Consumer Reports on software, that’s not the purpose of this. I know that perhaps some commercial manufacturers may feel like we’re going to unjustly review their software — that’s not the plan.
It’s to see, if presented with mixtures — and people are free to use manual methods or different software systems — what the different responses are. Nobody’s ever really looked at the results from the same samples, across different platforms, to see what happens.
There was a criminal case in upstate New York last year where two different commercial programs, TrueAllele and STRmix, came up with two different results for the same DNA evidence, or at least characterized the results in very different ways.
Yes, there are several things going on there. One is, how are they modeling the data collected from the evidence? So you may have the exact same DNA profile, but the modeling of what the profile means, and how the data is evaluated, can be different.
But then the other aspect is, what propositions are put into it? So are you assuming that everyone [in a mixed DNA sample] is unrelated? All those things factor into what the final result is, and so that’s one of the reasons you see a difference.
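To make concrete why the propositions matter, here is a minimal single-locus sketch with invented allele frequencies. This is not any vendor’s model (real probabilistic genotyping software models peak heights, dropout, and dozens of loci); it only shows how the likelihood ratio for the same evidence shifts when the defense proposition changes from “an unrelated person is the source” to “an untyped sibling is the source”:

```python
# Single-locus likelihood-ratio sketch (illustrative only).
# Evidence: a single-source profile heterozygous for alleles A and B.
# Hp: the suspect (genotype A/B) is the source -> P(evidence | Hp) = 1.

def lr_unrelated(p_a, p_b):
    """Hd: an unrelated person is the source.
    P(random A/B genotype) = 2 * p_a * p_b under Hardy-Weinberg."""
    return 1.0 / (2 * p_a * p_b)

def lr_sibling(p_a, p_b):
    """Hd: an untyped full sibling of the suspect is the source.
    P(sibling is also A/B) = (1 + p_a + p_b + 2*p_a*p_b) / 4,
    from the 1/4, 1/2, 1/4 identity-by-descent probabilities."""
    return 4.0 / (1 + p_a + p_b + 2 * p_a * p_b)

p_a, p_b = 0.1, 0.2            # hypothetical population allele frequencies
print(lr_unrelated(p_a, p_b))  # roughly 25: "25x more likely under Hp"
print(lr_sibling(p_a, p_b))    # roughly 3: far weaker under a sibling proposition
```

Same evidence, same profile, an order-of-magnitude difference in the reported number; that is one reason two programs fed different assumptions can diverge.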
I think to most readers, the fact that two programs could come up with two different results is really alarming.
Well I think we have to do a better job of trying to explain why those differences exist, and then to really tease them apart. Is it reasonable to get a vastly different result? And what does that mean, to a jury or a judge, or even to the police or prosecutor who are getting the results?
Do they really appreciate what those results mean, or the range of possibilities that are there [in those results]? Just because you have a big number doesn’t mean that you got the right person. That kind of thing.
Will that be something that this study will look at — the way DNA results are explained, as well as how they are obtained?
They go together. If you can’t communicate the results, then you’re not really effective. Why generate them in the first place? That’s been my attitude: There’s just as much effort that needs to go into making sure you convey the right information, and not misinformation, with the DNA test results.
What will this study look like, and what do you hope to find?
We’ll start with a historical perspective on the literature, going back into the last 20 or so years. But it’s really been in the last 10 years that things have changed dramatically, because of the change in [testing] sensitivity. You have people looking at more and more mixtures, which they didn’t have when the sensitivity wasn’t as high, and they weren’t looking down into the weeds for their results.
So I’ve been looking through all the proficiency tests that exist out there now that we can get our hands on, to understand what people have actually been tested for, in mixtures, and then how they’ve all performed, to get our current data points.
Then the other part we’re looking at is, DNA can get transferred between people, like when they shake hands. There have been lots of studies that have been done on this, but a lot of people don’t know about it. So this is to inform the forensic scientists as well.
Understanding the implications of, if you have a really high-sensitivity technique and you’re putting it into a computer program to do testing on whether someone’s DNA is in there or not — just because someone’s DNA is there, what’s the meaning of that?
I know you’ve been looking into FST. One of the challenges is that you have proprietary software systems, and you’ll never be able to get to the bottom of some of those things. Getting access to the code for TrueAllele or for STRmix may never easily happen, because of the commercial environment that they’re in.
And all the interlab studies we’ve ever done before have never really had these systems involved. We have now had a massive sea change in terms of labs moving towards probabilistic genotyping, and not really knowing what they’re doing, in terms of what will be their impact.
ProPublica has actually intervened in a federal court case to try to gain access to the source code for FST.
My personal opinion is that companies have a right to proprietary information (which would include the source code so that a competitor does not have the ability to steal their hard work). But, with situations like DNA testing, it is important for users to understand what models are used, how data are processed, and the impact of assumptions being made. In other words, be transparent — which is a bedrock scientific principle.
I saw that NIST might also do a similar study on bite mark evidence in the future. So many types of forensic science, from firearms analysis to hair analysis to arson science, have been recently called into question. Some scientists even consider fingerprinting to be controversial now. But is it still fair to characterize DNA as the “gold standard” of forensic science?
In my talk that I gave back in 2015, I made an analogy to math. You have basic math, like two plus two equals four — basic arithmetic — that’s the equivalent of single-source DNA profiling. That works, and that’s your gold standard. When you get to sexual assault evidence — when you have a perpetrator’s and a victim’s DNA mixed together — that’s algebra.
Usually there is a high level of DNA there, and it’s not an issue. But when you get to “touch” evidence, which is what we’re increasingly seeing in the forensic field, that’s calculus. So when we talk about DNA, it’s not that calculus, it’s not the touch evidence that’s the gold standard. The gold standard is the use of DNA with databases in either single-source samples or simple two-person mixtures.
And here’s the challenge: Labs are not prepared to do the complex mixtures. The reality is, all the labs’ proficiency tests, as I’m looking at them, are like basic math or algebra. So you’re going into a final exam on calculus, but you’ve only done homework on algebra and basic arithmetic. Are you going to pass that exam? That’s the reality of what we’re facing.
0 notes
Text
Fwd: Postdoc: UCalifornia_Davis.GenomicsAquaticSpecies
Begin forwarded message:

From: [email protected]
Subject: Postdoc: UCalifornia_Davis.GenomicsAquaticSpecies
Date: 16 January 2022 at 05:18:14 GMT
To: [email protected]

The Genomic Variation Laboratory (GVL) at the University of California Davis seeks a highly motivated postdoctoral scholar interested in working at the interface of genomics and aquatic species management. The postdoc will be co-supervised by Dr. Andrea Schreier (GVL) and Dr. Melinda Baerwald (California Department of Water Resources; DWR). The candidate will be hired through the GVL and attend GVL lab meetings but spend a significant portion of time conducting research with the DWR Genetic Monitoring (GeM) lab, co-led by Dr. Baerwald and Dr. Daphne Gille.

The candidate will apply genetic/genomic techniques such as SNP genotyping and SHERLOCK (a CRISPR-based genetic identification platform; Baerwald et al. 2020) to answer applied management questions in the San Francisco Bay-Delta ecosystem. As a member of a multi-disciplinary and interagency team, the candidate will collaboratively develop and implement a new approach for Chinook Salmon race identification combining probabilistic length-at-date models with rapid and accurate SHERLOCK technology. The candidate will apply this integrative approach to cooperatively develop a spring-run Chinook Salmon race identification program in support of calculating annual juvenile production estimates. The candidate will also work with staff having no prior molecular training at field-based water export facilities to pilot test the approach’s efficacy for real-time monitoring. It is anticipated that the candidate will publish both open access genetic datasets and peer-reviewed manuscripts based on their research.

The Chinook Salmon genetic data collected by the candidate will encompass most major tributaries in California’s Central Valley and lend itself to investigations related to life history strategies, species resiliency, and other topics dependent on the candidate’s research interests. Although the candidate’s research will be directed towards meeting management goals, there will be opportunities to develop independent research projects, mentor junior scientists, and form collaborations with scientists in academia, state, and federal agencies. This is an ideal position for someone interested in pursuing a career in state and federal natural resource management. The candidate will have access to considerable technological resources in the GeM lab, the GVL, as well as the UC Davis Genome Center computing cluster and DNA Technologies sequencing core.

We expect to receive up to three years of funding to support a postdoctoral scholar, starting between Feb 1 and March 1, 2022. Starting salary is $54,540 with annual step increases. The GVL is an antiracist lab that welcomes individuals from all races, ethnicities, religious backgrounds, gender identities, and sexual orientations to apply. Please see the GVL DEI page for more information.

*Qualifications*

Applicants should have a PhD in ecology, evolution, genetics/genomics, or a related field. We are looking for an independent and collaborative scientist with excellent communication skills, management-relevant experience, and several years of experience conducting molecular genetics benchwork.

*How to Apply*

To apply, please email to Andrea Schreier ([email protected]) a brief cover letter describing your interest in the position, a CV (including contact info for 3 references), and 2-3 published papers or manuscripts in preparation. Please specifically indicate in the cover letter or CV the date (month and year) that the applicant’s PhD was/will be issued. The position will be open until filled. We would like the selected candidate to start by the end of March 2022.

Best,

Andrea

--
Andrea Schreier, PhD
Adjunct Associate Professor
Director, Genomic Variation Lab
Meyer Hall 2235
University of California Davis
Office (530) 752-0664
Lab (530) 752-6351
https://ift.tt/32tsVC6
[email protected]
0 notes
Text
The Creepy Genetics Behind the Golden State Killer Case
For the dozen years between 1974 and 1986, he rained down terror across the state of California. He went by many names: the East Side Rapist, the Visalia Ransacker, the Original Night Stalker, the Golden State Killer. And on Wednesday, law enforcement officials announced they think they finally have his real name: Joseph James DeAngelo. Police arrested the 72-year-old Tuesday; he’s accused of committing more than 50 rapes and 12 murders.
In the end, it wasn’t stakeouts or fingerprints or cell phone records that got him. It was a genealogy website.
Lead investigator Paul Holes, a retired Contra Costa County District Attorney inspector, told the Mercury News late Thursday night that his team used GEDmatch, a no-frills Florida-based website that pools raw genetic profiles shared publicly by their owners, to find the man believed to be one of California’s most notorious criminals. A spokeswoman for the Sacramento County District Attorney’s Office reached Friday morning would not comment or confirm the report.
GEDmatch—a reference to the data file format GEDCOM, developed by the Mormon church to share genealogical information—caters to curious folks searching for missing relatives or filling in family trees. The mostly volunteer-run platform exists “to provide DNA and genealogy tools for comparison and research services,” the site’s policy page states. Most of its tools for tracking down matches are free; users just have to register and upload copies of their raw DNA files exported from genetic testing services like 23andMe and Ancestry. These two companies don’t allow law enforcement to access their customer databases unless they get a court order. Neither 23andMe nor Ancestry was approached by investigators in this case, according to spokespeople for the companies.
But no court order would be needed to mine GEDmatch’s open-source database of more than 650,000 genetically connected profiles. Using sequence data somehow wrung from old crime scene samples, police could create a genetic profile for their suspect and upload it to the free site. As the Sacramento Bee first reported, that gave them a pool of relatives who all shared some of that incriminating genetic material. Then they could use other clues—like age and sex and place of residence—to rule out suspects. Eventually the search narrowed down to just DeAngelo. To confirm their suspicions, police staked out his Citrus Heights home and obtained his DNA from something he discarded, then ran it against multiple crime scene samples. They were a match.
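The genealogical leverage behind that pool of relatives rests on simple expectations about inherited autosomal DNA. A rough back-of-the-envelope sketch (hypothetical; real segment matching works in centimorgans, and observed sharing varies widely around these averages):

```python
# Expected fraction of autosomal DNA shared between relatives:
# roughly (number of shared ancestors) * 0.5 ** (meioses on the path).
# Each meiosis halves the expected contribution from a common ancestor.

def expected_shared_fraction(meioses, shared_ancestors=2):
    return shared_ancestors * 0.5 ** meioses

relationships = {
    "parent-child":  (1, 1),  # one path, through the parent directly
    "full siblings": (2, 2),  # two paths, via mother and via father
    "uncle-nephew":  (3, 2),
    "first cousins": (4, 2),
}
for name, (meioses, ancestors) in relationships.items():
    print(f"{name}: {expected_shared_fraction(meioses, ancestors):.1%}")
# parent-child 50.0%, full siblings 50.0%,
# uncle-nephew 25.0%, first cousins 12.5%
```

A match sharing about 12.5% of its DNA with a crime-scene profile suggests a first-cousin-level relationship, which is exactly the kind of starting point that lets investigators build a family tree and rule people out.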
“It’s fitting that today is National DNA Day,” said Anne Marie Schubert, the Sacramento district attorney, at a press conference announcing the arrest Wednesday afternoon. A champion of genetic forensics, Schubert convened a task force two years ago to re-energize the cold case with DNA technology. “We found the needle in the haystack, and it was right here in Sacramento.”
After four decades of failure, no one could blame law enforcement officials for celebrating. But how they came to suspect DeAngelo, and eventually put him in cuffs, raises troubling questions about what constitutes due process and civil liberty amid the explosive proliferation of commercial DNA testing.
DNA evidence has been a cornerstone of forensic science for decades, and rightly so. It’s way more accurate than hair or bite-mark analysis. But the routine DNA tests used by crime labs aren’t anything like what you get if you send your spit to a commercial testing company. Cops look at a panel of 20 regions of repeating locations in the genome that don’t code for proteins. Because those repeating sections vary so much from individual to individual, they’re good for matching two samples—but only if the suspect is already in the criminal databases maintained by US law enforcement. Investigators in the Golden State Killer case had long had DNA, but there was no one in their files with which to match it. And so the case went cold.
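The matching power of those repeating regions comes from multiplying per-locus genotype frequencies, the so-called product rule. A toy sketch with invented allele frequencies (not real CODIS statistics), for the single-source case only:

```python
# Random match probability (RMP) for a single-source STR profile:
# the product of per-locus genotype frequencies, assuming independent
# loci and Hardy-Weinberg equilibrium. Frequencies below are invented.
from math import prod

def genotype_freq(p, q=None):
    """Heterozygote A/B: 2*p*q. Homozygote A/A: p**2 (pass q=None)."""
    return p * p if q is None else 2 * p * q

# (allele freq 1, allele freq 2 or None) for a handful of hypothetical loci
profile = [(0.1, 0.2), (0.05, None), (0.15, 0.3), (0.2, None), (0.1, 0.1)]
rmp = prod(genotype_freq(p, q) for p, q in profile)
print(f"combined RMP: about 1 in {1 / rmp:,.0f}")
# ~1 in 139 million with just five toy loci; 20 loci drive it far lower
```

This arithmetic is what makes single-source comparisons so powerful, and it is also the “basic math” end of Butler’s analogy: none of it applies cleanly once a sample is a mixture.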
Companies like 23andMe and Ancestry, on the other hand, probe the coding regions of DNA, to see what mysteries someone’s genes might be hiding—a heightened risk for cancer, or perhaps a long lost cousin. While those areas may be less prone to variation between individual samples, the number of customers who have taken these tests—more than 10 million between the two services—means that detectives can triangulate an individual. Maybe even a mass murderer. Thanks to the (biological) laws of inheritance, suspected criminals don’t have to have been tested themselves for bits of their DNA to be caught up in the dragnet of a criminal fishing investigation.
So far, these leaders in the consumer DNA testing space have denied ever turning over any customer genetic data to the police. Not that they haven’t been asked for it. According to 23andMe’s self-reported data, law enforcement has requested information on a total of five American 23andMe customers. Ancestry’s published transparency reports state that it has provided some customer information—but it was in response to requests related to credit card fraud and identity theft, and none of it was genetic in nature.
Representatives from both companies said that police can’t simply upload a DNA profile from an old crime scene and sign up for the company’s services to find genetic relatives and compare detailed chromosome segment data. Not because impersonating someone necessarily constitutes a violation of their terms and conditions—people occasionally use fake names and email accounts to maintain privacy—but because the companies don’t accept digital files. The database entrance fee is a mandatory three milliliters of saliva.
Cops have found ways around this before. In 2014, a New Orleans filmmaker named Michael Usry was arrested for the 1996 murder of an 18-year-old girl in Idaho Falls, after investigators turned up a “partial match” between semen found on the victim’s body and DNA from Usry’s father. They found the familial connection by sifting through DNA samples donated by Mormon churchgoers, including Usry’s father, for a genealogy project. Ancestry later purchased the database and made the genetic profiles (though not the names associated with them) publicly searchable. A search warrant got Ancestry to turn over the identity of the partial match.
After 33 days in police custody, a DNA test cleared Michael Usry, and Ancestry has since shuttered the database. But the case highlighted two big potential problems with this kind of familial searching. One is efficacy: nongovernmental databases, whether public or private, haven’t been vetted for use by law enforcement, even as they’re increasingly being used as crime-fighting tools. More worrying are the privacy concerns: most people who get their DNA tested for the fun of it don’t expect their genetic code might one day be scrutinized by cops. And people who’ve never been tested certainly don’t expect their genes to turn them into suspects.
Those questions get even thornier as more and more people have their DNA tested and then liberate that information from the walled-off databases of private companies. GEDmatch’s policies don’t explicitly ask its users to contemplate the risks the wider network might incur on account of any one individual’s choices. “While the results presented on this site are intended solely for genealogical research, we are unable to guarantee that users will not find other uses,” it states. “If you find the possibility unacceptable, please remove your data from this site.” GEDmatch did not immediately respond to a request for comment.
Legal experts say investigators wouldn’t break any laws in accessing a publicly available database like GEDmatch, which exists expressly to map that connectivity. “The tension though is that any sample that gets uploaded also is providing information that could lead to relatives that either haven’t consented to have their information made public, or even know it’s been done,” says Jennifer Mnookin, dean of the UCLA School of Law and a founder of its program on understanding forensic science evidence. “That’s not necessarily wrong, but it leads to a web of information that implicates a population well beyond those who made a decision themselves to be included.”
That’s the same argument that critics have made against more traditional kinds of forensic familial searches—where a partial DNA match reveals any of a suspect’s relatives already in a criminal database. But those searches are at least regulated, to differing extents, by federal and state laws. In California, investigators have to get approval from a state Department of Justice committee to run a familial DNA search through a criminal database, which limits use of the technique to particularly heinous crimes. A similar search on a site like GEDmatch requires no such oversight.
In the case of the Golden State Killer, the distinction doesn’t seem that important. But what if police started using these tools for much lesser crimes? “If these techniques became widely used there’s a risk a lot of innocent people would be caught in a web of genetic suspicion and subject to heightened scrutiny,” says Mnookin. While she’s impressed with the ingenuity of the investigators in this case to track down their suspect, she can’t help but see it as a step toward a genetic surveillance state. “That’s what’s hard about this,” she says. “We don’t have a blood taint in this country. Guilt shouldn’t travel by familial association, whether your brother is a felon or an amateur genealogist.”
Read more: https://www.wired.com/story/detectives-cracked-the-golden-state-killer-case-using-genetics/
0 notes
Text
Adaptive Evolution of Gene Expression in Drosophila
Publication date: 8 August 2017
Source: Cell Reports, Volume 20, Issue 6
Author(s): Armita Nourmohammad, Joachim Rambeau, Torsten Held, Viera Kovacova, Johannes Berg, Michael Lässig

Gene expression levels are important quantitative traits that link genotypes to molecular functions and fitness. In Drosophila, population-genetic studies have revealed substantial adaptive evolution at the genomic level, but the evolutionary modes of gene expression remain controversial. Here, we present evidence that adaptation dominates the evolution of gene expression levels in flies. We show that 64% of the observed expression divergence across seven Drosophila species are adaptive changes driven by directional selection. Our results are derived from time-resolved data of gene expression divergence across a family of related species, using a probabilistic inference method for gene-specific selection. Adaptive gene expression is stronger in specific functional classes, including regulation, sensory perception, sexual behavior, and morphology. Moreover, we identify a large group of genes with sex-specific adaptation of expression, which predominantly occurs in males. Our analysis opens an avenue to map system-wide selection on molecular quantitative traits independently of their genetic basis.
Graphical abstract
Teaser
Drosophila presents an evolutionary conundrum: there is ubiquitous genomic adaptation, yet it has been impossible to identify system-wide signals of adaptation for gene expression. Nourmohammad et al. develop a method to infer stabilizing and directional selection from expression data. They show that adaptation dominates the evolution of gene expression in Drosophila.
0 notes