dorcasrempel
Dorcas Rempel
2K posts
I worked as a quality manager at one of the best companies in the UK. I help homeowners update their homes, and I have a great deal of knowledge of marketing and quality testing for home improvement products. We provide easier, less expensive ways to improve your home, and I hope that my honest feedback and articles will help consumers make the best decisions.
dorcasrempel · 5 years ago
J-PAL North America launches research initiative to focus on Covid-19 recovery
The Covid-19 pandemic has resulted in incalculable losses for millions of Americans, particularly among low-income communities and communities of color. As decision-makers work to address this unparalleled public health crisis, urgent questions remain on how the Covid-19 pandemic will impact the social and economic well-being of people in the United States once the immediate crisis has resolved.
This summer, J-PAL North America launched a new research initiative that aims to inform these pressing policy questions. The Covid-19 Recovery and Resilience Initiative will catalyze research on how to recover in the aftermath of the pandemic, with a focus on improving outcomes for those most harmed by this crisis. Academic leadership for the initiative will be provided by J-PAL North America’s scientific directors: Amy Finkelstein (MIT) and Lawrence Katz (Harvard University).
The pandemic has laid bare fundamental inequities that limit access to opportunity for low-income communities and communities of color, making the need for bold policy action all the more pressing. Policymakers and social sector leaders are seeking solutions to respond to the Covid-19 pandemic both in the immediate and long term. Evidence will be a critical tool to help determine which policies — from universal basic income to extended school years — will work to restart the economy and rebuild lives.
The Covid-19 Recovery and Resilience Initiative will seek to advance the dual goals of helping decision-makers implement evidence-based solutions in the immediate term while generating rigorous evidence on longer-term policy measures. Ultimately, J-PAL North America aims to create a playbook of evidence-based solutions by identifying key policy challenges, rigorously evaluating promising solutions, and sharing findings with decision-makers who can act on rigorous evidence to improve lives. 
Drawing on insights from J-PAL’s network of leading academic scholars, the initiative established a learning agenda to guide work in the priority policy areas of (1) jobs, labor, and the social safety net; (2) education, youth, and opportunity; and (3) health care delivery. 
These guides for future research outline a selection of prioritized research questions that, if answered, could significantly advance decision-makers’ understanding of how to effectively respond to this crisis. While not intended to be comprehensive, the research guides aim to serve as inspiration for researchers and as a resource to guide investment strategies for donors.
Prioritized questions in the policy areas of jobs, labor, and the social safety net focus on methods to support individuals who are unemployed in the short and long term, keep workers connected to benefits, and smooth the job search process. In education, questions largely center on the need to address learning loss, minimize the widening of income- and race-based educational inequities, and support students’ mental health and social-emotional development. Lastly, prioritized focus areas for future inquiry in health care delivery include identifying methods to expand access to quality, affordable health care, increase take-up of positive health behaviors, and minimize the public health risks of additional waves of Covid-19.
For more information on the Covid-19 Recovery and Resilience Initiative or the initiative research agenda, see the J-PAL North America website or contact Initiative Manager Vincent Quan.
dorcasrempel · 5 years ago
Astronomers discover an Earth-sized “pi planet” with a 3.14-day orbit
In a delightful alignment of astronomy and mathematics, scientists at MIT and elsewhere have discovered a “pi Earth” — an Earth-sized planet that zips around its star every 3.14 days, in an orbit reminiscent of the universal mathematical constant.
The researchers discovered signals of the planet in data taken in 2017 by the NASA Kepler Space Telescope’s K2 mission. By zeroing in on the system earlier this year with SPECULOOS, a network of ground-based telescopes, the team confirmed that the signals were of a planet orbiting its star. And indeed, the planet appears to still be circling its star today, with a pi-like period, every 3.14 days.
“The planet moves like clockwork,” says Prajwal Niraula, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), who is the lead author of a paper published today in the Astronomical Journal, titled: “π Earth: a 3.14-day Earth-sized Planet from K2’s Kitchen Served Warm by the SPECULOOS Team.”
“Everyone needs a bit of fun these days,” says co-author Julien de Wit, of both the paper title and the discovery of the pi planet itself.
Planet extraction
The new planet is labeled K2-315b; it’s the 315th planetary system discovered within K2 data — just one system shy of an even more serendipitous place on the list.
The researchers estimate that K2-315b has a radius 0.95 times that of Earth, making it just about Earth-sized. It orbits a cool, low-mass star that is about one-fifth the size of the sun. The planet circles its star every 3.14 days, at a blistering 81 kilometers per second, or about 181,000 miles per hour.
While its mass is yet to be determined, scientists suspect that K2-315b is terrestrial, like the Earth. But the pi planet is likely not habitable, as its tight orbit brings the planet close enough to its star to heat its surface up to 450 kelvins, or around 350 degrees Fahrenheit — perfect, as it turns out, for baking actual pie.
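As a quick sanity check on the figures above (our own back-of-the-envelope arithmetic, not taken from the paper), the unit conversions and the implied size of the orbit, assuming it is circular, work out as follows:

```python
# Back-of-the-envelope checks on the reported figures (not from the paper):
# orbital speed, surface temperature, and implied orbital radius.

import math

v_kms = 81.0            # reported orbital speed, km/s
period_days = 3.14      # reported orbital period

# 81 km/s in miles per hour (1 mile = 1.609344 km)
v_mph = v_kms * 3600 / 1.609344
print(f"{v_mph:,.0f} mph")            # ~181,000 mph, matching the article

# 450 kelvins in degrees Fahrenheit
t_f = (450 - 273.15) * 9 / 5 + 32
print(f"{t_f:.0f} F")                 # ~350 F, matching the article

# Implied orbital radius, assuming a circular orbit:
# circumference = speed * period, radius = circumference / (2*pi)
circumference_km = v_kms * period_days * 86400
radius_au = circumference_km / (2 * math.pi) / 1.496e8
print(f"{radius_au:.3f} AU")          # ~0.023 AU -- a very tight orbit
```

An orbital radius of roughly 0.02 AU, about 40 times closer than Mercury is to the sun, is consistent with the article's description of a tight, scorching orbit.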
“This would be too hot to be habitable in the common understanding of the phrase,” says Niraula, who adds that the excitement around this particular planet, aside from its associations with the mathematical constant pi, is that it may prove a promising candidate for studying the characteristics of its atmosphere.
“We now know we can mine and extract planets from archival data, and hopefully there will be no planets left behind, especially these really important ones that have a high impact,” says de Wit, who is an assistant professor in EAPS, and a member of MIT’s Kavli Institute for Astrophysics and Space Research.
Niraula and de Wit’s MIT co-authors include Benjamin Rackham and Artem Burdanov, along with a team of international collaborators.
Dips in the data
The researchers are members of SPECULOOS (the Search for habitable Planets EClipsing ULtra-cOOl Stars), a network of four 1-meter telescopes in Chile’s Atacama Desert that scan the sky across the Southern Hemisphere. Most recently, the network added a fifth telescope, Artemis, the first to be located in the Northern Hemisphere — a project that was spearheaded by researchers at MIT.
The SPECULOOS telescopes are designed to search for Earth-like planets around nearby, ultracool dwarfs — small, dim stars that offer astronomers a better chance of spotting an orbiting planet and characterizing its atmosphere, as these stars lack the glare of much larger, brighter stars.
“These ultracool dwarfs are scattered all across the sky,” Burdanov says. “Targeted ground-based surveys like SPECULOOS are helpful because we can look at these ultracool dwarfs one by one.”
In particular, astronomers look at individual stars for signs of transits, or periodic dips in a star’s light, that signal a possible planet crossing in front of the star, and briefly blocking its light.
Earlier this year, Niraula came upon a cool dwarf, slightly warmer than the commonly accepted threshold for an ultracool dwarf, in data collected by the K2 campaign — the Kepler Space Telescope’s second observing mission, which monitored slivers of the sky as the spacecraft orbited around the sun.
Over several months in 2017, the Kepler telescope observed a part of the sky that included the cool dwarf, labeled in the K2 data as EPIC 249631677. Niraula combed through this period and found around 20 dips in the star’s light that seemed to repeat every 3.14 days.
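The basic idea of spotting repeating dips can be sketched in a few lines. This is a toy illustration of the concept, not the team's actual pipeline; the synthetic light curve, noise level, and dip depth below are all invented for the sketch:

```python
# Toy illustration (not the team's actual pipeline): build a synthetic
# light curve with a shallow dip every 3.14 days, find the dips, and
# recover the period from their spacing.

import numpy as np

rng = np.random.default_rng(0)
period = 3.14                          # days
cadence = 0.0204                       # roughly K2 long-cadence sampling, days
t = np.arange(0, 80, cadence)          # ~80 days of observations
flux = 1.0 + rng.normal(0, 1e-4, t.size)

# Inject box-shaped transits: 0.1% deep, about an hour long
in_transit = (t % period) < 0.04
flux[in_transit] -= 1e-3

# Flag points well below the noise floor, then group them into events
dip_times = t[flux < 1.0 - 5e-4]
groups = np.split(dip_times, np.where(np.diff(dip_times) > 0.5)[0] + 1)
centers = np.array([g.mean() for g in groups])

recovered_period = np.median(np.diff(centers))
print(f"{len(centers)} dips, period ~ {recovered_period:.2f} days")
```

Real searches use more robust machinery (phase folding, box least squares), but the principle is the same: periodic dips in brightness betray an orbiting planet.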
The team analyzed the signals, testing different potential astrophysical scenarios for their origin, and confirmed that the signals were likely of a transiting planet, and not the product of some other phenomenon, such as a binary system of two spiraling stars.
The researchers then planned to get a closer look at the star and its orbiting planet with SPECULOOS. But first, they had to identify a window of time when they would be sure to catch a transit.
“Nailing down the best night to follow up from the ground is a little bit tricky,” says Rackham, who developed a forecasting algorithm to predict when a transit might next occur. “Even when you see this 3.14-day signal in the K2 data, there’s an uncertainty to that, which adds up with every orbit.”
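Rackham's point about uncertainty compounding can be sketched with standard error propagation: independent errors add in quadrature, and the period error scales with the number of elapsed orbits. All of the error bars below are hypothetical, chosen only to show the effect, and are not values from the paper:

```python
# Illustrative only: how transit-timing uncertainty grows between the K2
# observations (2017) and a ground-based follow-up window (early 2020).
# The error bars here are invented for the sketch, not taken from the paper.

import math

period = 3.14                 # days
sigma_t0 = 30 / 1440          # uncertainty on a reference transit time: 30 min
sigma_p = 1 / 1440            # uncertainty on the period: 1 min per orbit

elapsed_days = 2.9 * 365.25   # K2 (2017) to follow-up (Feb 2020)
n_orbits = round(elapsed_days / period)

# Independent errors add in quadrature; the period error scales with n
sigma_total_days = math.sqrt(sigma_t0**2 + (n_orbits * sigma_p) ** 2)
print(f"after {n_orbits} orbits: +/- {sigma_total_days * 24:.1f} hours")
```

Even a one-minute period uncertainty balloons into several hours after a few hundred orbits, which is why pinning down the right night for follow-up observations takes a dedicated forecasting algorithm.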
With Rackham’s forecasting algorithm, the group narrowed in on several nights in February 2020 during which they were likely to see the planet crossing in front of its star. They then pointed SPECULOOS’ telescopes in the direction of the star and were able to see three clear transits: two with the network’s Southern Hemisphere telescopes, and the third from Artemis, in the Northern Hemisphere.
The researchers say the new pi planet may be a promising candidate to follow up with the James Webb Space Telescope (JWST), to see details of the planet’s atmosphere. For now, the team is looking through other datasets, such as those from NASA’s TESS mission, and is also directly observing the skies with Artemis and the rest of the SPECULOOS network for signs of Earthlike planets.
“There will be more interesting planets in the future, just in time for JWST, a telescope designed to probe the atmosphere of these alien worlds,” says Niraula. “With better algorithms, hopefully one day, we can look for smaller planets, even as small as Mars.”
This research was supported in part by the Heising-Simons Foundation, and the European Research Council.
dorcasrempel · 5 years ago
Engineers produce a fisheye lens that’s completely flat
To capture panoramic views in a single shot, photographers typically use fisheye lenses — ultra-wide-angle lenses made from multiple pieces of curved glass, which distort incoming light to produce wide, bubble-like images. Their spherical, multipiece design makes fisheye lenses inherently bulky and often costly to produce.
Now engineers at MIT and the University of Massachusetts at Lowell have designed a wide-angle lens that is completely flat. It is the first flat fisheye lens to produce crisp, 180-degree panoramic images. The design is a type of “metalens,” a wafer-thin material patterned with microscopic features that work together to manipulate light in a specific way.
In this case, the new fisheye lens consists of a single flat, millimeter-thin piece of glass covered on one side with tiny structures that precisely scatter incoming light to produce panoramic images, just as a conventional curved, multielement fisheye lens assembly would. The lens works in the infrared part of the spectrum, but the researchers say it could be modified to capture images using visible light as well.
The new design could potentially be adapted for a range of applications, with thin, ultra-wide-angle lenses built directly into smartphones and laptops, rather than physically attached as bulky add-ons. The low-profile lenses might also be integrated into medical imaging devices such as endoscopes, as well as in virtual reality glasses, wearable electronics, and other computer vision devices.
“This design comes as somewhat of a surprise, because some have thought it would be impossible to make a metalens with an ultra-wide field of view,” says Juejun Hu, associate professor in MIT’s Department of Materials Science and Engineering. “The fact that this can actually realize fisheye images is completely outside expectation. This isn’t just light-bending — it’s mind-bending.”
Hu and his colleagues have published their results today in the journal Nano Letters. Hu’s MIT coauthors are Mikhail Shalaginov, Fan Yang, Peter Su, Dominika Lyzwa, Anuradha Agarwal, and Tian Gu, along with Sensong An and Hualiang Zhang of UMass Lowell.
Design on the back side
Metalenses, while still largely at an experimental stage, have the potential to significantly reshape the field of optics. Previously, scientists have designed metalenses that produce high-resolution and relatively wide-angle images of up to 60 degrees. To expand the field of view further would traditionally require additional optical components to correct for aberrations, or blurriness — a workaround that would add bulk to a metalens design.
Hu and his colleagues instead came up with a simple design that does not require additional components and keeps a minimum element count. Their new metalens is a single transparent piece made from calcium fluoride with a thin film of lead telluride deposited on one side. The team then used lithographic techniques to carve a pattern of optical structures into the film.
Each structure, or “meta-atom,” as the team refers to them, is shaped into one of several nanoscale geometries, such as a rectangular or a bone-shaped configuration, that refracts light in a specific way. For instance, light may take longer to scatter, or propagate off one shape versus another — a phenomenon known as phase delay.
In conventional fisheye lenses, the curvature of the glass naturally creates a distribution of phase delays that ultimately produces a panoramic image. The team determined the corresponding pattern of meta-atoms and carved this pattern into the back side of the flat glass.
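The relationship between position on a flat lens and required phase delay can be made concrete with the standard textbook profile for a flat focusing lens. This is a generic illustration of how a phase-delay pattern replaces curvature, not the actual design from this paper, and the wavelength and focal length below are invented example values:

```python
# A standard textbook relation (not this paper's design): the phase delay a
# flat lens must impose at radius r so that light of wavelength lam comes to
# a focus at distance f. Meta-atom shapes approximate this profile pointwise.

import math

def target_phase(r_um, f_um, lam_um):
    """Required phase (radians, wrapped mod 2*pi) at radial position r."""
    phi = -2 * math.pi / lam_um * (math.sqrt(r_um**2 + f_um**2) - f_um)
    return phi % (2 * math.pi)

# Hypothetical mid-infrared example: 5.2 um wavelength, 2 mm focal length
for r in (0, 100, 200, 400):                    # microns from lens center
    print(f"r = {r:3d} um -> phase = {target_phase(r, 2000, 5.2):.2f} rad")
```

A curved glass surface produces this kind of radially varying delay automatically; a metalens instead assigns each position a meta-atom whose geometry imposes the needed delay directly.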
“We’ve designed the back side structures in such a way that each part can produce a perfect focus,” Hu says.
On the front side, the team placed an optical aperture, or opening for light.
“When light comes in through this aperture, it will refract at the first surface of the glass, and then will get angularly dispersed,” Shalaginov explains. “The light will then hit different parts of the backside, from different and yet continuous angles. As long as you design the back side properly, you can be sure to achieve high-quality imaging across the entire panoramic view.”
Across the panorama
In one demonstration, the new lens is tuned to operate in the mid-infrared region of the spectrum. The team used an imaging setup equipped with the metalens to snap pictures of a striped target. They then compared the quality of pictures taken at various angles across the scene, and found the new lens produced images of the stripes that were crisp and clear, even at the edges of the camera’s view, spanning nearly 180 degrees.
“It shows we can achieve perfect imaging performance across almost the whole 180-degree view, using our methods,” Gu says.
In another study, the team designed the metalens to operate at a near-infrared wavelength using amorphous silicon nanoposts as the meta-atoms. They plugged the metalens into a simulation used to test imaging instruments. Next, they fed the simulation a scene of Paris, composed of black and white images stitched together to make a panoramic view. They then ran the simulation to see what kind of image the new lens would produce.
“The key question was, does the lens cover the entire field of view? And we see that it captures everything across the panorama,” Gu says. “You can see buildings and people, and the resolution is very good, regardless of whether you’re looking at the center or the edges.”
The team says the new lens can be adapted to other wavelengths of light. To make a similar flat fisheye lens for visible light, for instance, Hu says the optical features may have to be made smaller than they are now, to better refract that particular range of wavelengths. The lens material would also have to change. But the general architecture that the team has designed would remain the same.
The researchers are exploring applications for their new lens, not just as compact fisheye cameras, but also as panoramic projectors, as well as depth sensors built directly into smartphones, laptops, and wearable devices.
“Currently, all 3D sensors have a limited field of view, which is why when you put your face away from your smartphone, it won’t recognize you,” Gu says. “What we have here is a new 3D sensor that enables panoramic depth profiling, which could be useful for consumer electronic devices.”
This research was funded in part by DARPA under the EXTREME Program.
dorcasrempel · 5 years ago
Helping robots avoid collisions
George Konidaris still remembers his disheartening introduction to robotics.
“When you’re a young student and you want to program a robot, the first thing that hits you is this immense disappointment at how much you can’t do with that robot,” he says.
Most new roboticists want to program their robots to solve interesting, complex tasks — but it turns out that just moving them through space without colliding with objects is more difficult than it sounds.
Fortunately, Konidaris is hopeful that future roboticists will have a more exciting start in the field. That’s because roughly four years ago, he co-founded Realtime Robotics, a startup that’s solving the “motion planning problem” for robots.
The company has invented a solution that gives robots the ability to quickly adjust their path to avoid objects as they move to a target. The Realtime controller is a box that can be connected to a variety of robots and deployed in dynamic environments.
“Our box simply runs the robot according to the customer’s program,” explains Konidaris, who currently serves as Realtime’s chief roboticist. “It takes care of the movement, the speed of the robot, detecting obstacles, collision detection. All [our customers] need to say is, ‘I want this robot to move here.’”
Realtime’s key enabling technology is a unique circuit design that, when combined with proprietary software, has the effect of a plug-in motor cortex for robots. In addition to helping to fulfill the expectations of starry-eyed roboticists, the technology also represents a fundamental advance toward robots that can work effectively in changing environments.
Helping robots get around
Konidaris was not the first person to get discouraged about the motion planning problem in robotics. Researchers in the field have been working on it for 40 years. During a four-year postdoc at MIT, Konidaris worked with School of Engineering Professor in Teaching Excellence Tomas Lozano-Perez, a pioneer in the field who was publishing papers on motion planning before Konidaris was born.
Humans take collision avoidance for granted. Konidaris points out that the simple act of grabbing a beer from the fridge actually requires a series of tasks such as opening the fridge, positioning your body to reach in, avoiding other objects in the fridge, and deciding where to grab the beer can.
“You actually need to compute more than one plan,” Konidaris says. “You might need to compute hundreds of plans to get the action you want. … It’s weird how the simplest things humans do hundreds of times a day actually require immense computation.”
In robotics, the motion planning problem revolves around the computational power required to carry out frequent tests as robots move through space. At each stage of a planned path, the tests help determine if various tiny movements will make the robot collide with objects around it. Such tests have inspired researchers to think up ever more complicated algorithms in recent years, but Konidaris believes that’s the wrong approach.
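The per-step collision test the article describes can be sketched in a few lines. This toy version, a point robot among circular obstacles, is our own invention and has nothing to do with Realtime's hardware, but it shows why the check is computation-heavy: it runs at every small step of every candidate motion:

```python
# Toy version (not Realtime's method) of the core test in motion planning:
# step along a straight-line motion and check each intermediate configuration
# against obstacles. The "robot" is a 2D point; obstacles are circles.

import math

obstacles = [((4.0, 4.0), 1.0), ((7.0, 2.0), 0.8)]   # (center, radius)

def in_collision(p):
    return any(math.dist(p, c) <= r for c, r in obstacles)

def motion_is_safe(start, goal, step=0.05):
    """Densely sample the segment from start to goal; reject if any sample collides."""
    length = math.dist(start, goal)
    n = max(1, math.ceil(length / step))
    for i in range(n + 1):
        t = i / n
        p = (start[0] + t * (goal[0] - start[0]),
             start[1] + t * (goal[1] - start[1]))
        if in_collision(p):
            return False
    return True

print(motion_is_safe((0, 0), (8, 8)))   # passes through the circle at (4, 4)
print(motion_is_safe((0, 0), (0, 8)))   # skirts both obstacles
```

A planner may evaluate thousands of candidate motions like this per decision, each requiring hundreds of point checks, which is what makes dedicated parallel hardware for the test so attractive.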
“People were trying to make algorithms smarter and more complex, but usually that’s a sign that you’re going down the wrong path,” Konidaris says. “It’s actually not that common that super technically sophisticated techniques solve problems like that.”
Konidaris left MIT in 2014 to join the faculty at Duke University, but he continued to collaborate with researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Duke is also where Konidaris met Realtime co-founders Sean Murray, Dan Sorin, and Will Floyd-Jones. In 2015, the co-founders collaborated to make a new type of computer chip with circuits specifically designed to perform the frequent collision tests required to move a robot safely through space. The custom circuits could perform operations in parallel to test many short motions for collisions more efficiently.
“When I left MIT for Duke, one thing bugging me was this motion planning thing should really be solved by now,” Konidaris says. “It really did come directly out of a lot of experiences at MIT. I wouldn’t have been able to write a single paper on motion planning before I got to MIT.”
The researchers founded Realtime in 2016 and quickly brought on robotics industry veteran Peter Howard MBA ’87, who currently serves as Realtime’s CEO and is also considered a co-founder.
“I wanted to start the company in Boston because I knew MIT, and a lot of robotics work was happening there,” says Konidaris, who moved to Brown University in 2016. “Boston is a hub for robotics. There’s a ton of local talent, and I think a lot of that is because MIT is here — PhDs from MIT became faculty at local schools, and those people started robotics programs. That network effect is very strong.”
Removing robot restraints
Today the majority of Realtime’s customers are in the automotive, manufacturing, and logistics industries. The robots using Realtime’s solution are doing everything from spot welding to making inspections to picking items from bins.
After customers purchase Realtime’s control box, they load in a file describing the configuration of the robot’s work cell, information about the robot such as its end-of-arm tool, and the task the robot is completing. Realtime can also help optimally place the robot and its accompanying sensors around a work area. Konidaris says Realtime can shorten the process of deploying robots from an average of 15 weeks to one week.
Once the robot is up and running, Realtime’s box controls its movement, giving it instant collision-avoidance capabilities.
“You can use it for any robot,” Konidaris says. “You tell it where it needs to go and we’ll handle the rest.”
Realtime is part of MIT’s Industrial Liaison Program (ILP), which helps companies make connections with larger industrial partners, and it recently joined ILP’s STEX25 startup accelerator.
With a few large rollouts planned for the coming months, the Realtime team’s excitement is driven by the belief that solving a problem as fundamental as motion planning unlocks a slew of new applications for the robotics field.
“What I find most exciting about Realtime is that we are a true technology company,” says Konidaris. “The vast majority of startups are aimed at finding a new application for existing technology; often, there’s no real pushing of the technical boundaries with a new app or website, or even a new robotics ‘vertical.’ But we really did invent something new, and that edge and that energy is what drives us. All of that feels very MIT to me.”
dorcasrempel · 5 years ago
Rapid test for Covid-19 shows improved sensitivity
Since the start of the Covid-19 pandemic, researchers at MIT and the Broad Institute of MIT and Harvard, along with their collaborators at the University of Washington, Fred Hutchinson Cancer Research Center, Brigham and Women’s Hospital, and the Ragon Institute, have been working on a CRISPR-based diagnostic for Covid-19 that can produce results in 30 minutes to an hour, with similar accuracy as the standard PCR diagnostics now used.
The new test, known as STOPCovid, is still in the research stage but, in principle, could be made cheaply enough that people could test themselves every day. In a study appearing today in the New England Journal of Medicine, the researchers showed that on a set of patient samples, their test detected 93 percent of the positive cases as determined by PCR tests for Covid-19.
“We need rapid testing to become part of the fabric of this situation so that people can test themselves every day, which will slow down the outbreak,” says Omar Abudayyeh, an MIT McGovern Fellow working on the diagnostic.
Abudayyeh is one of the senior authors of the study, along with Jonathan Gootenberg, a McGovern Fellow, and Feng Zhang, a core member of the Broad Institute, investigator at the MIT McGovern Institute and Howard Hughes Medical Institute, and the James and Patricia Poitras ’63 Professor of Neuroscience at MIT. The first authors of the paper are MIT biological engineering graduate students Julia Joung and Alim Ladha in the Zhang lab.
A streamlined test
Zhang’s laboratory began collaborating with the Abudayyeh and Gootenberg laboratory to work on the Covid-19 diagnostic soon after the SARS-CoV-2 outbreak began. They focused on making an assay, called STOPCovid, that was simple to carry out and did not require any specialized laboratory equipment. Such a test, they hoped, would be amenable to future use in point-of-care settings, such as doctors’ offices, pharmacies, nursing homes, and schools. 
“We developed STOPCovid so that everything could be done in a single step,” Joung says. “A single step means the test can be potentially performed by nonexperts outside of laboratory settings.”
In the new version of STOPCovid reported today, the researchers incorporated a process to concentrate the viral genetic material in a patient sample by adding magnetic beads that attract RNA, eliminating the need for expensive purification kits that are time-intensive and can be in short supply due to high demand. This concentration step boosted the test’s sensitivity so that it now approaches that of PCR.
“Once we got the viral genomes onto the beads, we found that that could get us to very high levels of sensitivity,” Gootenberg says.
Working with collaborators Keith Jerome at Fred Hutchinson Cancer Research Center and Alex Greninger at the University of Washington, the researchers tested STOPCovid on 402 patient samples — 202 positive and 200 negative — and found that the new test detected 93 percent of the positive cases as determined by the standard CDC PCR test.
“Seeing STOPCovid working on actual patient samples was really gratifying,” Ladha says.
They also showed, working with Ann Woolley and Deb Hung at Brigham and Women’s Hospital, that the STOPCovid test works on samples taken using the less invasive anterior nares swab. They are now testing it with saliva samples, which could make at-home tests even easier to perform. The researchers are continuing to develop the test with the hope of delivering it to end users to help fight the Covid-19 pandemic.
“The goal is to make this test easy to use and sensitive, so that we can tell whether or not someone is carrying the virus as early as possible,” Zhang says.
The research was funded by the National Institutes of Health, the Swiss National Science Foundation, the Patrick J. McGovern Foundation, the McGovern Institute for Brain Research, the Massachusetts Consortium on Pathogen Readiness Evergrande Covid-19 Response Fund, the Mathers Foundation, the Howard Hughes Medical Institute, the Open Philanthropy Project, J. and P. Poitras, and R. Metcalfe.
dorcasrempel · 5 years ago
Making tuberculosis more susceptible to antibiotics
Every living cell is coated with a distinctive array of carbohydrates, which serves as a unique cellular “ID” and helps to manage the cell’s interactions with other cells.
MIT chemists have now discovered that changing the length of these carbohydrates can dramatically affect their function. In a study of mycobacteria, the type of bacteria that cause tuberculosis and other diseases, they found that shortening the length of a carbohydrate called galactan impairs some cell functions and makes the cells much more susceptible to certain antibiotics.
The findings suggest that drugs that interfere with galactan synthesis could be used along with existing antibiotics to create more effective treatments, says Laura Kiessling, the Novartis Professor of Chemistry at MIT and the senior author of the study.
“There are a lot of TB strains that are resistant to the current set of antibiotics,” Kiessling says. “TB kills over a million people every year and is the number one infectious disease killer.”
Former MIT graduate student Alexander Justen is the lead author of the paper, which appears today in Science Advances.
The long and short of it
Galactan, a polysaccharide, is a component of the cell wall of mycobacteria, but little is known about its function. Until now, its only known role was to form links between molecules called peptidoglycans, which make up most of the bacterial cell wall, and other sugars and lipids. However, the version of galactan found in mycobacteria is much longer than it needs to be to perform this linker function.
“What was so strange is that the galactan is about 30 sugar molecules long, but the branch points for the other sugars that it links to are at eight, 10, and 12. So, why is the cell expending so much energy to make galactan longer than 12 units?” Kiessling says.
That question led Kiessling and her research group to investigate what might happen if galactan were shorter. A team led by Justen genetically engineered a type of mycobacteria called Mycobacterium smegmatis (which is related to Mycobacterium tuberculosis but is not harmful to humans) so that their galactan chains would contain only 12 sugar molecules.
As a result of this shortening, cells lost their usual shape and developed “blebs,” or bulges from their cell membranes. Shortening galactan also shrank the size of a compartment called the periplasm, a space that is found between a bacterial cell’s inner and outer cell membranes. This compartment is involved in absorbing nutrients from the cell’s environment.
Truncating galactan also made the cells more susceptible to certain antibiotics — specifically, antibiotics that are hydrophobic. Mycobacteria cell walls are relatively impermeable to hydrophobic antibiotics, but the shortened galactan molecules make the cells more permeable, so these drugs can get inside more easily.
“This suggests that drugs that would lead to these truncated chains could be valuable in combination with hydrophobic antibiotics,” Kiessling says. “I think it validates this part of the cell as a good target.”
Her lab is currently working on developing drugs that could block galactan synthesis, which is not targeted by any existing TB drugs. Patients with TB are usually given drug combinations that have to be taken for six months, and some strains have developed resistance to the existing drugs.
Unexpected roles
Kiessling’s lab is also studying the question of why it is useful for bacteria to alter the length of their carbohydrate molecules. One hypothesis is that it helps them to shield themselves from the immune system, she says. Some studies have shown that a dense coating of longer carbohydrate chains could help to achieve a stealth effect by preventing host immune cells from interacting with proteins on the bacterial cell surface.
If that hypothesis is confirmed, then drugs that interfere with the length of galactan or other carbohydrates might also help the immune system fight off bacterial infection, Kiessling says. This could be useful for treating not only tuberculosis but also other diseases caused by mycobacteria, such as chronic obstructive pulmonary disease (COPD) and leprosy. Other strains of mycobacteria (known as “flesh-eating bacteria”) cause a potentially deadly infection called necrotizing fasciitis. All of these mycobacteria have galactan in their cell walls, and there are no good vaccines against any of them.
Although the research may end up helping scientists to develop better drugs, Kiessling first became interested in this topic as a basic science question.
“The reason I like this paper is because while it does have implications for treating tuberculosis, it also shows a fundamentally new role for carbohydrates, which I love. People are finding that they can have unexpected roles, and this is another unexpected result,” she says.
The research was funded by the National Institute of Allergy and Infectious Diseases and the National Institutes of Health Common Fund.
Making tuberculosis more susceptible to antibiotics syndicated from https://osmowaterfilters.blogspot.com/
dorcasrempel · 5 years ago
Text
Highly sensitive trigger enables rapid detection of biological agents
Any space, enclosed or open, can be vulnerable to the dispersal of harmful airborne biological agents. Silent and near-invisible, these bioagents can sicken or kill living things before steps can be taken to mitigate the bioagents’ effects. Venues where crowds congregate are prime targets for biowarfare strikes engineered by terrorists, but expanses of fields or forests could be victimized by an aerial bioattack. Early warning of suspicious biological aerosols can speed up remedial responses to releases of biological agents; the sooner cleanup and treatment begin, the better the outcome for the sites and people affected.  
MIT Lincoln Laboratory researchers have developed a highly sensitive and reliable trigger for the U.S. military’s early warning system for biological warfare agents.
“The trigger is the key mechanism in a detection system because its continual monitoring of the ambient air in a location picks up the presence of aerosolized particles that may be threat agents,” says Shane Tysk, principal investigator of the laboratory’s bioaerosol trigger, the Rapid Agent Aerosol Detector (RAAD), and a member of the technical staff in the laboratory’s Advanced Materials and Microsystems Group.
The trigger cues the detection system to collect particle specimens and then to initiate the process to identify particles as potentially dangerous bioagents. The RAAD has demonstrated a significant reduction in false positive rates while maintaining detection performance that matches or exceeds that of today’s best deployed systems. Additionally, early testing has shown that the RAAD has significantly improved reliability compared to currently deployed systems.
RAAD process
The RAAD determines the presence of biological warfare agents through a multistep process. First, aerosols are pulled into the detector by the combined action of an aerosol cyclone, which uses high-speed rotation to separate out the small particles, and an aerodynamic lens, which focuses the particles into a condensed (i.e., enriched) volume, or beam, of aerosol. The RAAD aerodynamic lens provides more efficient aerosol enrichment than any other air-to-air concentrator.
Then, a near-infrared (NIR) laser diode creates a structured trigger beam that detects the presence, size, and trajectory of an individual aerosol particle. If the particle is large enough to adversely affect the respiratory tract — roughly 1 to 10 micrometers — a 266-nanometer ultraviolet (UV) laser is activated to illuminate the particle, and multiband laser-induced fluorescence is collected.
The detection process continues as an embedded logic decision, referred to as the “spectral trigger,” uses scattering from the NIR light and UV fluorescence data to predict if the particle’s composition appears to correspond to that of a threat-like bioagent. “If the particle seems threat-like, then spark-induced breakdown spectroscopy is enabled to vaporize the particle and collect atomic emission to characterize the particle’s elemental content,” says Tysk.
Spark-induced breakdown spectroscopy is the last measurement stage. This spectroscopy system measures the elemental content of the particle, and its measurements involve creating a high-temperature plasma, vaporizing the aerosol particle, and measuring the atomic emission from the thermally excited states of the aerosol. 
The measurement stages — structured trigger beam, UV-excited fluorescence, and spark-induced breakdown spectroscopy — are integrated into a tiered system that provides seven measurements on each particle of interest. Of the hundreds of particles entering the measurement process each second, a small subset of particles are down-selected for measurement in all three stages. The RAAD algorithm searches the data stream for changes in the particle set’s temporal and spectral characteristics. If a sufficient number of threat-like particles are found, the RAAD issues an alarm that a biological aerosol threat is present.
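The tiered down-select described above can be sketched as a simple chain of gates followed by a windowed alarm check. The stage names follow the article, but every threshold, score, and count below is an invented placeholder for illustration — not RAAD's actual decision logic:

```python
from collections import deque

# Hypothetical thresholds, loosely following the article's three-stage chain.
SIZE_RANGE_UM = (1.0, 10.0)     # respirable particle sizes (stage 1 gate)
FLUOR_THRESHOLD = 0.6           # UV-fluorescence "threat-like" score (stage 2 gate)
SPARK_THRESHOLD = 0.7           # elemental-content score from spark spectroscopy (stage 3)
ALARM_COUNT, WINDOW = 50, 1000  # threat-like particles required within a rolling window

def classify_particle(size_um, fluor_score, spark_score):
    """Run one particle through the tiered gates; return True if threat-like."""
    if not (SIZE_RANGE_UM[0] <= size_um <= SIZE_RANGE_UM[1]):
        return False  # stage 1: structured trigger beam (presence, size, trajectory)
    if fluor_score < FLUOR_THRESHOLD:
        return False  # stage 2: UV laser-induced fluorescence
    return spark_score >= SPARK_THRESHOLD  # stage 3: spark-induced breakdown spectroscopy

def alarm(particle_stream):
    """Raise an alarm when enough recent particles look threat-like."""
    recent = deque(maxlen=WINDOW)
    for particle in particle_stream:
        recent.append(classify_particle(*particle))
        if sum(recent) >= ALARM_COUNT:
            return True
    return False
```

The point of the tiering is economy: only particles that survive the cheap early gates reach the destructive, expensive spark-spectroscopy stage.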
RAAD design advantages
“Because RAAD is intended to be operated 24 hours a day, seven days a week for long periods, we incorporated a number of features and technologies to improve system reliability and make the RAAD easy to maintain,” says Brad Perkins, another staff member on the RAAD development team. For example, Perkins goes on to explain, the entire air-handling unit is a module that is mounted on the exterior of the RAAD to allow for easy servicing of the items most likely to need replacement, such as filters, the air-to-air concentrator, and pumps that wear out with use.
To improve detection reliability, the RAAD team chose to use carbon-filtered, HEPA-filtered, and dehumidified sheathing air and purge air (compressed air that pushes out extraneous gases) around the optical components. This approach ensures that contaminants from the outside air do not deposit onto the optical surfaces of the RAAD, potentially causing reductions in sensitivity or false alarms.
The RAAD has undergone more than 16,000 hours of field testing, during which it has demonstrated an extremely low false-alarm rate that is unprecedented for a biological trigger with such a high level of sensitivity. “What sets RAAD apart from its competitors is the number, variety, and fidelity of the measurements made on each individual aerosol particle,” Tysk says. These multiple measurements on individual aerosol particles as they flow through the system enable the trigger to accurately discriminate biological warfare agents from ambient air at a rapid rate. Because RAAD does not name the particular bioagent detected, further laboratory testing of the specimen would have to be done to determine its exact identity.
The RAAD was developed under sponsorship from the Defense Threat Reduction Agency and Joint Program Executive Office for CBRN Defense. The technology is currently being transitioned for production from Lincoln Laboratory to Chemring Sensors and Electronic Systems.
Highly sensitive trigger enables rapid detection of biological agents syndicated from https://osmowaterfilters.blogspot.com/
dorcasrempel · 5 years ago
Text
Astronomers may have found a signature of life on Venus
The search for life beyond Earth has largely revolved around our rocky red neighbor. NASA has launched multiple rovers over the years, with a new one currently en route, to sift through Mars’ dusty surface for signs of water and other hints of habitability.
Now, in a surprising twist, scientists at MIT, Cardiff University, and elsewhere have observed what may be signs of life in the clouds of our other, even closer planetary neighbor, Venus. While they have not found direct evidence of living organisms there, if their observation is indeed associated with life, it must be some sort of “aerial” life-form in Venus’ clouds — the only habitable portion of what is otherwise a scorched and inhospitable world. Their discovery and analysis is published today in the journal Nature Astronomy.
The astronomers, led by Jane Greaves of Cardiff University, detected in Venus’ atmosphere a spectral fingerprint, or light-based signature, of phosphine. MIT scientists have previously shown that if this stinky, poisonous gas were ever detected on a rocky, terrestrial planet, it could only be produced by a living organism there. The researchers made the detection using the James Clerk Maxwell Telescope (JCMT) in Hawaii, and the Atacama Large Millimeter Array (ALMA) observatory in Chile.
The MIT team followed up the new observation with an exhaustive analysis to see whether anything other than life could have produced phosphine in Venus’ harsh, sulfuric environment. Based on the many scenarios they considered, the team concludes that there is no explanation for the phosphine detected in Venus’ clouds, other than the presence of life.
“It’s very hard to prove a negative,” says Clara Sousa-Silva, research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “Now, astronomers will think of all the ways to justify phosphine without life, and I welcome that. Please do, because we are at the end of our possibilities to show abiotic processes that can make phosphine.”
“This means either this is life, or it’s some sort of physical or chemical process that we do not expect to happen on rocky planets,” adds co-author and EAPS Research Scientist Janusz Petkowski.
The other MIT co-authors include William Bains, Sukrit Ranjan, Zhuchang Zhan, and Sara Seager, who is the Class of 1941 Professor of Planetary Science with appointments in the departments of Physics and of Aeronautics and Astronautics, along with collaborators at Cardiff University, the University of Manchester, Cambridge University, MRC Laboratory of Molecular Biology, Kyoto Sangyo University, Imperial College, the Royal Observatory Greenwich, the Open University, and the East Asian Observatory.
A search for exotic things
Venus is often referred to as Earth’s twin, as the neighboring planets are similar in their size, mass, and rocky composition. They also have significant atmospheres, although that is where their similarities end. Where Earth is a habitable world of temperate oceans and lakes, Venus’ surface is a boiling hot landscape, with temperatures reaching 900 degrees Fahrenheit and stifling air that is drier than the driest places on Earth.
Much of the planet’s atmosphere is also quite inhospitable, suffused with thick clouds of sulfuric acid, and cloud droplets that are billions of times more acidic than the most acidic environment on Earth. The atmosphere also lacks nutrients that exist in abundance on a planet surface.
“Venus is a very challenging environment for life of any kind,” Seager says.
There is, however, a narrow, temperate band within Venus’ atmosphere, between 48 and 60 kilometers above the surface, where temperatures range from 30 to 200 degrees Fahrenheit. Scientists have speculated, with much controversy, that if life exists on Venus, this layer of the atmosphere, or cloud deck, is likely the only place where it would survive. And it just so happens that this cloud deck is where the team observed signals of phosphine.
“This phosphine signal is perfectly positioned where others have conjectured the area could be habitable,” Petkowski says.
The detection was first made by Greaves and her team, who used the JCMT to zero in on Venus’ atmosphere for patterns of light that could indicate the presence of unexpected molecules and possible signatures of life. When she picked up a pattern that indicated the presence of phosphine, she contacted Sousa-Silva, who has spent the bulk of her career characterizing the stinky, toxic molecule.
Sousa-Silva initially assumed that astronomers could search for phosphine as a biosignature on much farther-flung planets. “I was thinking really far, many parsecs away, and really not thinking literally the nearest planet to us.”
The team followed up Greaves’ initial observation using the more sensitive ALMA observatory, with the help of Anita Richards, of the ALMA Regional Center at the University of Manchester. Those observations confirmed that what Greaves observed was indeed a pattern of light that matched what phosphine gas would emit within Venus’ clouds.
The researchers then used a model of the Venusian atmosphere, developed by Hideo Sagawa of Kyoto Sangyo University, to interpret the data. They found that phosphine on Venus is a minor gas, existing at a concentration of about 20 out of every billion molecules in the atmosphere. Although that concentration is low, the researchers point out that phosphine produced by life on Earth can be found at even lower concentrations in the atmosphere.
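As a quick back-of-the-envelope illustration of what "about 20 out of every billion molecules" means (simple unit arithmetic, not part of the study's atmospheric modeling):

```python
AVOGADRO = 6.022e23  # molecules per mole

phosphine_ppb = 20                   # ~20 phosphine molecules per billion, per the study
mixing_ratio = phosphine_ppb * 1e-9  # dimensionless mole fraction

# Even at 20 ppb, one mole of cloud-deck gas would contain an enormous
# absolute number of phosphine molecules:
molecules_per_mole = mixing_ratio * AVOGADRO
print(f"mole fraction: {mixing_ratio:.1e}")                        # 2.0e-08
print(f"PH3 molecules per mole of gas: {molecules_per_mole:.2e}")  # 1.20e+16
```

A trace gas by mixing ratio can still be abundant in absolute terms, which is what makes a spectral detection feasible.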
The MIT team, led by Bains and Petkowski, used computer models to explore all the possible chemical and physical pathways not associated with life that could produce phosphine in Venus’ harsh environment. Bains considered various scenarios that could produce phosphine, such as sunlight, surface minerals, volcanic activity, a meteor strike, and lightning. Ranjan, along with Paul Rimmer of Cambridge University, then modeled how phosphine produced through these mechanisms could accumulate in the Venusian clouds. In every scenario they considered, the phosphine produced would only amount to a tiny fraction of what the new observations suggest is present in Venus’ clouds.
“We really went through all possible pathways that could produce phosphine on a rocky planet,” Petkowski says. “If this is not life, then our understanding of rocky planets is severely lacking.”
A life in the clouds
If there is indeed life in Venus’ clouds, the researchers believe it to be an aerial form, existing only in Venus’ temperate cloud deck, far above the boiling, volcanic surface.
“A long time ago, Venus is thought to have oceans, and was probably habitable like Earth,” Sousa-Silva says. “As Venus became less hospitable, life would have had to adapt, and they could now be in this narrow envelope of the atmosphere where they can still survive. This could show that even a planet at the edge of the habitable zone could have an atmosphere with a local aerial habitable envelope.”
In a separate line of research, Seager and Petkowski have explored the possibility that the lower layers of Venus’ atmosphere, just below the cloud deck, could be crucial for the survival of a hypothetical Venusian biosphere.
“You can, in principle, have a life cycle that keeps life in the clouds perpetually,” says Petkowski, who envisions any aerial Venusian life to be fundamentally different from life on Earth. “The liquid medium on Venus is not water, as it is on Earth.”
Sousa-Silva is now leading an effort with Jason Dittman at MIT to further confirm the phosphine detection with other telescopes. They are also hoping to map the presence of the molecule across Venus’ atmosphere, to see if there are daily or seasonal variations in the signal that would suggest activity associated with life.
“Technically, biomolecules have been found in Venus’ atmosphere before, but these molecules are also associated with a thousand things other than life,” Sousa-Silva says. “The reason phosphine is special is, without life it is very difficult to make phosphine on rocky planets. Earth has been the only terrestrial planet where we have found phosphine, because there is life here. Until now.”
This research was funded, in part, by the Science and Technology Facilities Council, the European Southern Observatory, the Japan Society for the Promotion of Science, the Heising-Simons Foundation, the Change Happens Foundation, the Simons Foundation, and the European Union’s Horizon 2020 research and innovation program.
Astronomers may have found a signature of life on Venus syndicated from https://osmowaterfilters.blogspot.com/
dorcasrempel · 5 years ago
Text
Monitoring sleep positions for a healthy rest
MIT researchers have developed a wireless, private way to monitor a person’s sleep postures — whether snoozing on their back, stomach, or sides — using reflected radio signals from a small device mounted on a bedroom wall.
The device, called BodyCompass, is the first home-ready, radio-frequency-based system to provide accurate sleep data without cameras or sensors attached to the body, according to Shichao Yue, who will introduce the system in a presentation at the UbiComp 2020 conference on Sept. 15. The PhD student has used wireless sensing to study sleep stages and insomnia for several years.
“We thought sleep posture could be another impactful application of our system” for medical monitoring, says Yue, who worked on the project under the supervision of Professor Dina Katabi in the MIT Computer Science and Artificial Intelligence Laboratory. Studies show that stomach sleeping increases the risk of sudden death in people with epilepsy, he notes, and sleep posture could also be used to measure the progression of Parkinson’s disease as the condition robs a person of the ability to turn over in bed.
In the future, people might also use BodyCompass to keep track of their own sleep habits or to monitor infant sleeping, Yue says: “It can be either a medical device or a consumer product, depending on needs.”
Other authors on the conference paper, published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, include graduate students Yuzhe Yang and Hao Wang, and Katabi Lab affiliate Hariharan Rahul. Katabi is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT.
Restful reflections
BodyCompass works by analyzing the reflection of radio signals as they bounce off objects in a room, including the human body. Similar to a Wi-Fi router attached to the bedroom wall, the device sends and collects these signals as they return through multiple paths. The researchers then map the paths of these signals, working backward from the reflections to determine the body’s posture.
For this to work, however, the scientists needed a way to figure out which of the signals were bouncing off the sleeper’s body, and not bouncing off the mattress or a nightstand or an overhead fan. Yue and his colleagues realized that their past work in deciphering breathing patterns from radio signals could solve the problem.
Signals that bounce off a person’s chest and belly are uniquely modulated by breathing, they concluded. Once that breathing signal was identified as a way to “tag” reflections coming from the body, the researchers could analyze those reflections compared to the position of the device to determine how the person was lying in bed. (If a person was lying on her back, for instance, strong radio waves bouncing off her chest would be directed at the ceiling and then to the device on the wall.) “Identifying breathing as coding helped us to separate signals from the body from environmental reflections, allowing us to track where informative reflections are,” Yue says.
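A toy version of that breathing-based tagging is to keep only the reflection paths whose amplitude varies at breathing-like rates. The FFT power-ratio test, band edges, and threshold below are all illustrative assumptions, not BodyCompass's actual algorithm:

```python
import numpy as np

def breathing_power(signal, fs, band=(0.1, 0.5)):
    """Fraction of a reflection path's power in a typical breathing band
    (~6-30 breaths per minute). Illustrative only."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2  # power spectrum, DC removed
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spec[in_band].sum() / spec.sum()

def tag_body_paths(paths, fs, threshold=0.5):
    """Keep the indices of reflection paths that are strongly breathing-modulated,
    i.e., paths likely bouncing off the sleeper's chest rather than furniture."""
    return [i for i, s in enumerate(paths) if breathing_power(s, fs) > threshold]
```

A path reflecting off a nightstand or fan has its power spread across frequencies, while a chest reflection concentrates power near the breathing rate, so the ratio cleanly separates the two.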
Reflections from the body are then analyzed by a customized neural network to infer how the body is angled in sleep. Because the neural network defines sleep postures according to angles, the device can distinguish between a sleeper lying on the right side from one who has merely tilted slightly to the right. This kind of fine-grained analysis would be especially important for epilepsy patients for whom sleeping in a prone position is correlated with sudden unexpected death, Yue says.
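The advantage of predicting an angle rather than a discrete class can be illustrated with coarse bins on top of a continuous output. The angle convention and cutoffs below are invented for illustration; the real network's outputs and labels may differ:

```python
def posture_from_angle(angle_deg):
    """Map an inferred body angle to a coarse posture label, assuming a
    hypothetical convention: 0 = supine (on the back), 90 = left side,
    180 = prone (on the stomach), 270 = right side."""
    a = angle_deg % 360
    if a < 45 or a >= 315:
        return "supine"
    if a < 135:
        return "left side"
    if a < 225:
        return "prone"
    return "right side"
```

Because the underlying prediction is an angle, a 20-degree tilt still reads as "supine" while an 80-degree roll reads as "left side" — the fine-grained distinction the article says matters for epilepsy monitoring is preserved even after binning.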
BodyCompass has some advantages over other ways of monitoring sleep posture, such as installing cameras in a person’s bedroom or attaching sensors directly to the person or their bed. Sensors can be uncomfortable to sleep with, and cameras reduce a person’s privacy, Yue notes. “Since we will only record essential information for detecting sleep posture, such as a person’s breathing signal during sleep,” he says, “it is nearly impossible for someone to infer other activities of the user from this data.”
An accurate compass
The research team tested BodyCompass’ accuracy over 200 hours of sleep data from 26 healthy people sleeping in their own bedrooms. At the start of the study, the subjects wore two accelerometers (sensors that detect movement) taped to their chest and stomach, to train the device’s neural network with “ground truth” data on their sleeping postures.
BodyCompass was most accurate — predicting the correct body posture 94 percent of the time — when the device was trained on a week’s worth of data. One night’s worth of training data yielded accurate results 87 percent of the time. BodyCompass could achieve 84 percent accuracy with just 16 minutes’ worth of data collected, when sleepers were asked to hold a few usual sleeping postures in front of the wireless sensor.
Along with epilepsy and Parkinson’s disease, BodyCompass could prove useful in treating patients vulnerable to bedsores and sleep apnea, since both conditions can be alleviated by changes in sleeping posture. Yue has his own interest as well: He suffers from migraines that seem to be affected by how he sleeps. “I sleep on my right side to avoid headache the next day,” he says, “but I’m not sure if there really is any correlation between sleep posture and migraines. Maybe this can help me find out if there is any relationship.”
For now, BodyCompass is a monitoring tool, but it may be paired someday with an alert that can prod sleepers to change their posture. “Researchers are working on mattresses that can slowly turn a patient to avoid dangerous sleep positions,” Yue says. “Future work may combine our sleep posture detector with such mattresses to move an epilepsy patient to a safer position if needed.”
Monitoring sleep positions for a healthy rest syndicated from https://osmowaterfilters.blogspot.com/
dorcasrempel · 5 years ago
Text
As information flows through brain’s hierarchy, higher regions use higher-frequency waves
To produce your thoughts and actions, your brain processes information in a hierarchy of regions along its surface, or cortex, ranging from “lower” areas that do basic parsing of incoming sensations to “higher” executive regions that formulate your plans for employing that newfound knowledge. In a new study, MIT neuroscientists seeking to explain how this organization emerges report two broad trends: In each of three distinct regions, information encoding or its inhibition was associated with a similar tug of war between specific brain wave frequency bands, and the higher a region’s status in the hierarchy, the higher the peak frequency of its waves in each of those bands.
By making and analyzing measurements of thousands of neurons and surrounding electric fields in three cortical regions in animals, the team’s new study in the Journal of Cognitive Neuroscience provides a unifying view of how brain waves, which are oscillating patterns of the activity of brain cells, may control the flow of information throughout the cortex.
“When you look at prior studies you see examples of what we found in many regions, but they are all found in different ways in different experiments,” says Earl Miller, the Picower Professor of Neuroscience in The Picower Institute for Learning and Memory at MIT and senior author of the study. “We wanted to obtain an overarching picture, so that’s what we did. We addressed the question of what does this look like all over the cortex.”
Adds co-first author Mikael Lundqvist of Stockholm University, formerly a postdoc at MIT: “Many, many studies have looked at how synchronized the phases of a particular frequency are between cortical regions. It has become a field by itself, because synchrony will impact the communication between regions. But arguably even more important would be if regions communicate at different frequencies altogether. Here we find such a systematic shift in preferred frequencies across regions. It may have been suspected by piecing together earlier studies, but as far as I know hasn’t been shown directly before. It is a simple, but potentially very fundamental, observation.”
The paper’s other first author is Picower Institute postdoc Andre Bastos.
To make their observations, the team gave animals the task of correctly distinguishing an image they had just seen — a simple feat of visual working memory. As the animals played the game, the scientists measured the individual spiking activity of hundreds of neurons in each animal in three regions at the bottom, middle, and top of the task’s cortical hierarchy — the visual cortex, the parietal cortex, and the prefrontal cortex. They simultaneously tracked the waves produced by this activity.
In each region, they found that when an image was either being encoded (when it was first presented) or recalled (when working memory was tested), the power of theta and gamma frequency bands of brain waves would increase in bursts and power in alpha and beta bands would decrease. When the information had to be held in mind, for instance in the period between first sight and the test, theta and gamma power went down and alpha and beta power went up in bursts. This functional “push/pull” sequence between these frequency bands has been shown in several individual regions, including the motor cortex, Miller says, but not often simultaneously across multiple regions in the course of the same task.
The researchers also observed that the bursts of theta and gamma power were closely associated with neural spikes that encoded information about the images. Alpha and beta power bursts, meanwhile, were anti-correlated with that same spiking activity.
While this rule applied across all three regions, a key difference was that each region employed a distinct peak within each frequency band. While the visual cortex beta band, for instance, peaked at 11 Hz, parietal beta peaked at 15 Hz, and prefrontal beta peaked at 19 Hz. Meanwhile, visual cortex gamma occurred at 65 Hz, parietal gamma topped at 72 Hz, and prefrontal gamma at 80 Hz.
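Tabulating the reported peaks makes the back-to-front trend explicit (the values are taken directly from the text; the ordering is the task's cortical hierarchy):

```python
# Peak frequencies (Hz) reported for each region, ordered from the back of
# the brain (visual cortex) to the front (prefrontal cortex).
peaks = {
    "visual":     {"beta": 11, "gamma": 65},
    "parietal":   {"beta": 15, "gamma": 72},
    "prefrontal": {"beta": 19, "gamma": 80},
}

hierarchy = ["visual", "parietal", "prefrontal"]
for band in ("beta", "gamma"):
    series = [peaks[region][band] for region in hierarchy]
    # Within each band, the peak frequency rises monotonically up the hierarchy
    assert all(lo < hi for lo, hi in zip(series, series[1:])), band
    print(band, series)
```

Within each band the peak climbs by a few hertz per step up the hierarchy, which is the systematic shift the study highlights.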
“As you move from the back of the brain to the front, all the frequencies get a little higher,” Miller says.
While both main trends in the study — the inverse relationships between frequency bands and the systematic rise in peak frequencies within each band — were both consistently observed and statistically significant, they only show associations with function, not causality. But the researchers said they are consistent with a model in which alpha and beta alternately inhibit, or release, gamma to control the encoding of information — a form of top-down control of sensory activity.
Meanwhile, they hypothesize that the systematic increase in peak frequencies up the hierarchy could serve multiple functions. For instance, if waves in each frequency band carry information, then higher regions would sample at a faster frequency to provide more fine-grained sampling of the raw input coming from lower regions. Moreover, faster frequencies are more effective at entraining those same frequencies in other regions, giving higher regions an effective way of controlling activity in lower ones.
“The increased frequency in the oscillatory rhythms may help sculpt information flow in the cortex,” the authors wrote.
The study was supported by the U.S. National Institutes of Health, the Office of Naval Research, The JPB Foundation, the Swedish Research Council, and the Brain and Behavior Research Foundation.
As information flows through brain’s hierarchy, higher regions use higher-frequency waves syndicated from https://osmowaterfilters.blogspot.com/
dorcasrempel · 5 years ago
Text
MIT-led team to develop software to help forecast space storms
On a moonless night on Aug. 28, 1859, the sky began to bleed. The phenomenon behind the northern lights had gone global: an aurora stretching luminous, rainbow fingers across time zones and continents illuminated the night sky with an undulating backdrop of crimson. From New England to Australia, people stood in the streets looking up with admiration, inspiration, and fear as the night sky shimmered in Technicolor. But the beautiful display came with a cost. The global telegraph system — which at the time was responsible for nearly all long-distance communication — experienced widespread disruption. Some telegraph operators experienced electric shocks while sending and receiving messages; others witnessed sparks flying from cable pylons. Telegraph transmissions were halted for days.  
The aurora and the damage that followed were later attributed to a geomagnetic storm caused by a series of coronal mass ejections (CMEs) that burst from the sun’s surface, raced across the solar system, and barraged our atmosphere with magnetic solar energy, wreaking havoc on the electricity that powered the telegraph system. Although we no longer rely on the global telegraph system to stay connected around the world, experiencing a geomagnetic storm on a similar scale in today’s world would still be catastrophic. Such a storm could cause worldwide blackouts, massive network failures, and widespread damage to the satellites that enable GPS and telecommunication — not to mention the potential threat to human health from increased levels of radiation. Unlike storms on Earth, solar storms’ arrival and intensity can be difficult to predict. Without a better understanding of space weather, we might not even see the next great solar storm coming until it’s too late.
To advance our ability to forecast space weather the way we forecast weather on Earth, Richard Linares, an assistant professor in the Department of Aeronautics and Astronautics (AeroAstro) at MIT, is leading a multidisciplinary team of researchers to develop software that can effectively address this challenge. With better models, we can use historical observational data to better predict the impact of space weather events like CMEs, solar wind, and other space plasma phenomena as they interact with our atmosphere. Under the Space Weather with Quantified Uncertainties (SWQU) program, a partnership between the U.S. National Science Foundation (NSF) and NASA, the team was awarded a $3 million grant for their proposal “Composable Next Generation Software Framework.”
“By bringing together experts in geospace sciences, uncertainty quantification, software development, management, and sustainability, we hope to develop the next generation of software for space weather modeling and prediction,” says Linares. “Improving space weather predictions is a national need, and we saw a unique opportunity at MIT to combine the expertise we have across campus to solve this problem.”
Linares’ MIT collaborators include Philip Erickson, assistant director at MIT Haystack Observatory and head of Haystack’s atmospheric and geospace sciences group; Jaime Peraire, the H.N. Slater Professor of Aeronautics and Astronautics; Youssef Marzouk, professor of aeronautics and astronautics; Ngoc Cuong Nguyen, a research scientist in AeroAstro; Alan Edelman, professor of applied mathematics; and Christopher Rackauckas, instructor in the Department of Mathematics. External collaborators include Aaron Ridley (University of Michigan) and Boris Kramer (University of California at San Diego). Together, the team will focus on resolving this gap by creating a model-focused composable software framework that allows a wide variety of observation data collected across the world to be ingested into a global model of the ionosphere/thermosphere system. 
“MIT Haystack research programs include a focus on conditions in near-Earth space, and our NSF-sponsored Madrigal online distributed database provides the largest single repository of ground-based community data on space weather and its effects in the atmosphere using worldwide scientific observations. This extensive data includes ionospheric remote sensing information on total electron content (TEC), spanning the globe on a nearly continuous basis and calculated from networks of thousands of individual global navigation satellite system community receivers,” says Erickson. “TEC data, when analyzed jointly with results of next-generation atmosphere and magnetosphere modeling systems, provides a key future innovation that will significantly improve human understanding of critically important space weather effects.”
The project aims to create a powerful, flexible software platform using cutting-edge computational tools to collect and analyze huge sets of observational data that can be easily shared and reproduced among researchers. The platform will also be designed to work even as computer technology rapidly advances and new researchers contribute to the project from new places, using new machines. Using Julia, a high-performance programming language developed by Edelman at MIT, researchers from all over the world will be able to tailor the software for their own purposes to contribute their data without having to rewrite the program from scratch.
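The composability described here can be sketched generically: independent data sources register themselves, and the global model ingests whatever is available without the sources needing to know about one another. The snippet below is a schematic Python illustration of that registration pattern (the actual project uses Julia, and every name and number here is invented for illustration):

```python
# Generic sketch of a composable data-ingestion framework: observation
# sources register themselves and the global model consumes them all.
# All source names and readings below are made up.

from typing import Callable, Dict, List

SOURCES: Dict[str, Callable[[], List[float]]] = {}

def register(name: str):
    """Decorator that adds a data-source function to the registry."""
    def deco(fn):
        SOURCES[name] = fn
        return fn
    return deco

@register("tec_ground_stations")
def tec_data() -> List[float]:
    return [12.1, 13.4]   # invented total-electron-content readings

@register("satellite_density")
def density_data() -> List[float]:
    return [1.2e-12]      # invented thermosphere density sample

def assimilate_all() -> Dict[str, List[float]]:
    # The global model ingests every registered source; adding a new
    # source requires no changes to existing code.
    return {name: fn() for name, fn in SOURCES.items()}

print(assimilate_all())
```

A new observation network would contribute by defining one more `@register`-decorated function, which is the spirit of "tailor the software without rewriting the program from scratch."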
“I’m very excited that Julia, already fast becoming the language of scientific machine learning, and a great tool for collaborative software, can play a key role in space weather applications,” says Edelman. 
According to Linares, the composable software framework will serve as a foundation that can be expanded and improved over time, growing both the space weather prediction capabilities and the space weather modeling community itself.
The MIT-led project was one of six projects selected for three-year grant awards under the SWQU program. Motivated by the White House National Space Weather Strategy and Action Plan and the National Strategic Computing Initiative, the goal of the SWQU program is to bring together teams from across scientific disciplines to advance the latest statistical analysis and high-performance computing methods within the field of space weather modeling.
“One key goal of the SWQU program is development of sustainable software with built-in capability to evaluate likelihood and magnitude of electromagnetic geospace disturbances based on sparse observational data,” says Vyacheslav Lukin, NSF program director in the Division of Physics. “We look forward to this multidisciplinary MIT-led team laying the foundations for such development to enable advances that will transform our future space weather forecasting capabilities.”
MIT-led team to develop software to help forecast space storms syndicated from https://osmowaterfilters.blogspot.com/
0 notes
dorcasrempel · 5 years ago
Text
Modeling the impact of testing, tracing, and quarantine
Testing, contact tracing, and quarantining infected people are all tools in the effort to mitigate the spread of Covid-19. So are mask-wearing and social distancing. But what impact does each have? A study co-authored by MIT researchers finds that robust testing, contact tracing, and quarantining by household can keep cases within the capacity of the health-care system — preventing a “second wave” — while allowing for the reopening of some economic activities.
The paper, published Aug. 5 in Nature Human Behaviour, details a novel model that integrates anonymized, real-time mobility data with census and demographic data to map Covid-19 transmission in the Boston, Massachusetts area. The authors include Esteban Moro, a visiting research scientist in the MIT Media Lab and MIT Connection Science, and Alex “Sandy” Pentland, director of MIT Connection Science and a professor in the Media Lab and the Institute for Data, Systems, and Society (IDSS).
This research sheds new light on possible pitfalls and solutions as cities look to lift restrictions that have been in place throughout the summer in many locations. Using data from approximately 85,000 people in the greater Boston area, combined with known information about Covid-19 transmission rates, duration of stages, and other data points, the authors’ model forecasts the number of new cases and hospitalizations under various scenarios of lifted restrictions.
“If we want to re-scale our lives, economy, and cities, we need to understand better how the infection is spreading across people and communities,” says Moro. “Shutting down the whole economy and our cities because of a second wave might not be needed if we include accurate information about how people are behaving, moving, shopping, et cetera in our society.”
In establishing a baseline, the study found that unmitigated lifting of restrictions would likely lead to a “second wave” that would quickly overwhelm Boston’s health-care facilities, with a peak daily incidence of 25.2 newly infected individuals per 1,000 people, leading to a need for about 12 times the available intensive-care unit (ICU) beds.
A second scenario, referred to as LIFT, assumed an additional eight weeks of stay-at-home order, followed by another four weeks of partial reopening, including work and community spaces, but not full reopening of restaurants and other spaces with mass social gatherings. After the total 12-week period, there would be a full lifting of all restrictions. In the LIFT scenario, the modeled impact was still well beyond the capacity of health-care facilities, with a need for over nine times the ICU beds available at the peak of the likely second wave.
It might be that only a safe, effective, and widely distributed vaccine will allow the world to return to life as usual. However, the authors propose a third scenario — called LET, short for Lift and Enhanced Tracing — that keeps cases and hospitalizations manageable while allowing for a wide return to work and social activity.
The LET scenario involves the same LIFT measures, but adds robust testing, contact tracing of symptomatic people, and quarantining of all household members of people who came in close contact with someone who tests positive for the virus. After lifting restrictions, at rates of 50 percent detection of positive cases within two days of onset of symptoms, tracing of 40 percent of contacts, and quarantine of all household members of those contacts, the model shows just 0.29 people per thousand in hospitals per day, compared with more than five per day under LIFT measures alone and more than seven under the unmitigated scenario. ICU beds would be more than adequate at all times under this scenario.
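As a rough intuition for why those percentages matter, detection, tracing, and quarantine can be folded into a single reduction of the effective reproduction number. The toy calculation below is not the authors' mobility-data model; the baseline R0 and the quarantine-effectiveness figure are assumptions introduced only for illustration:

```python
# Toy sketch: fraction of transmission chains interrupted by
# test-trace-quarantine, folded into an effective reproduction number.
# Illustrative only -- NOT the Nature Human Behaviour model.

R0 = 2.5               # baseline secondary cases per infection (assumed)
DETECT = 0.50          # cases detected within two days of symptoms
TRACE = 0.40           # contacts of detected cases that are traced
QUARANTINE_EFF = 0.90  # assumed cut in onward spread from household quarantine

def effective_R(r0: float, detect: float, trace: float, q_eff: float) -> float:
    # A chain is blocked when the case is detected AND its contact is
    # traced AND the quarantined household stops onward transmission.
    blocked = detect * trace * q_eff
    return r0 * (1 - blocked)

print(effective_R(R0, 0, 0, 0))                      # unmitigated baseline
print(effective_R(R0, DETECT, TRACE, QUARANTINE_EFF))  # LET-style measures
```

Even this crude multiplication shows why the paper's 50/40 percent thresholds carry so much weight; the real model adds mobility patterns, masking, and partial reopening on top of this logic.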
The advantage of whole-household quarantine is that it simplifies contact tracing, working at the level of small groups of people rather than individuals. Follow-up calls to check for compliance would also be streamlined. Furthermore, the model assumes no additional precautions, such as masks and social distancing. Therefore, it is expected that new cases and hospitalizations could be even lower if people were to continue some of the practices that have helped combat the spread of Covid-19 thus far.
This approach is not without sacrifice. Quarantining full households presents unique challenges — it might be hard for quarantined families to obtain necessities, and quarantining together with others with known risk of infection may not be desirable. The study notes that at the peak, with 40 percent contact tracing, as many as 9 percent of all people in the city could be under quarantine. However, this number would gradually decline to around 3 percent. The total number in quarantine could be further reduced if testing ramps up more significantly. The authors suggest that the trade-off of higher numbers of people in quarantine compared with the massively disruptive long-term social isolation policies that would otherwise be needed to keep new infections manageable is well worth it. Life could return to some degree of normalcy, and the economy could begin to recover.
Since the study was carried out, Massachusetts has moved toward a manual tracing strategy in which thousands of people have been hired to trace potential infections. Moro explains that this could work if the number of cases is small and controlled, but it might be insufficient if the number of cases scales up. He also notes that hiring contact tracers has been problematic. He suggests a possible solution to deal with sudden growth in the number of cases: combine manual and digital contact tracing via an app.
The model used in the study will continue to be developed and enhanced, and the authors plan to examine other cities beyond Boston. They will use real-time behavior data to investigate how infection is actually propagating and detect when, where, and why spreading events are happening.
MIT Connection Science is a research group hosted by the Sociotechnical Systems Research Center, a part of IDSS.
dorcasrempel · 5 years ago
Text
New gene regulation model provides insight into brain development
In every cell, RNA-binding proteins (RBPs) help tune gene expression and control biological processes by binding to RNA sequences. Researchers often assume that individual RBPs latch tightly to just one RNA sequence. For instance, an essential family of RBPs, the Rbfox family, was thought to bind one particular RNA sequence alone. However, it’s becoming increasingly clear that this idea greatly oversimplifies Rbfox’s vital role in development.
Members of the Rbfox family are among the best-studied RBPs and have been implicated in mammalian brain, heart, and muscle development since their discovery 25 years ago. They influence how RNA transcripts are “spliced” together to form a final RNA product, and have been associated with disorders like autism and epilepsy. But this family of RBPs is compelling for another reason as well: until recently, it was considered a classic example of predictable binding.
More often than not, it seemed, Rbfox proteins bound to a very specific sequence, or motif, of nucleotide bases, “GCAUG.” Occasionally, binding analyses hinted that Rbfox proteins might attach to other RNA sequences as well, but these findings were usually discarded. Now, a team of biologists from MIT has found that Rbfox proteins actually bind less tightly — but no less frequently — to a handful of other RNA nucleotide sequences besides GCAUG. These so-called “secondary motifs” could be key to normal brain development, and help neurons grow and assume specific roles.
“Previously, possible binding of Rbfox proteins to atypical sites had been largely ignored,” says Christopher Burge, professor of biology and the study’s senior author. “But we’ve helped demonstrate that these secondary motifs form their own separate class of binding sites with important physiological functions.”
Graduate student Bridget Begg is the first author of the study, published Aug. 17 in Nature Structural & Molecular Biology.
“Two-wave” regulation
After the discovery that GCAUG was the primary RNA binding site for mammalian Rbfox proteins, researchers characterized its binding in living cells using a technique called CLIP (crosslinking-immunoprecipitation). However, CLIP has several limitations. For example, it can indicate where a protein is bound, but not how much protein is bound there. It’s also hampered by some technical biases, including substantial false-negative and false-positive results.
To address these shortcomings, the Burge lab developed two complementary techniques to better quantify protein binding, this time in a test tube: RBNS (RNA Bind-n-Seq), and later, nsRBNS (RNA Bind-n-Seq with natural sequences), both of which incubate an RBP of interest with a synthetic RNA library. First author Begg performed nsRBNS with naturally occurring mammalian RNA sequences, and identified a variety of intermediate-affinity secondary motifs that were bound in the absence of GCAUG. She then compared her own data with publicly available CLIP results to examine the “aberrant” binding that had often been discarded, demonstrating that signals for these motifs existed across many CLIP datasets.
To probe the biological role of these motifs, Begg performed reporter assays to show that the motifs could regulate Rbfox’s RNA splicing behavior. Subsequently, computational analyses by Begg and co-author Marvin Jens using mouse neuronal data established a handful of secondary motifs that appeared to be involved in neuronal differentiation and cellular diversification.
Based on analyses of these key secondary motifs, Begg and colleagues devised a “two-wave” model. Early in development, they believe, Rbfox proteins bind predominantly to high-affinity RNA sequences like GCAUG, in order to tune gene expression. Later on, as the Rbfox concentration increases, those primary motifs become fully occupied and Rbfox additionally binds to the secondary motifs. This results in a second wave of Rbfox-regulated RNA splicing with a different set of genes.
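The two-wave picture follows from simple equilibrium binding: fractional occupancy of a site is [P] / ([P] + Kd), so tight (low-Kd) primary sites fill at low protein concentration while weaker (high-Kd) secondary sites only fill once protein is abundant. The Kd values and concentrations below are invented for illustration, not measured values from the study:

```python
# Equilibrium-occupancy sketch of the "two-wave" model. Units and
# numbers are arbitrary; only the low-Kd vs. high-Kd contrast matters.

def occupancy(protein_conc: float, kd: float) -> float:
    """Fractional occupancy of a binding site at equilibrium."""
    return protein_conc / (protein_conc + kd)

KD_PRIMARY = 1.0     # high-affinity, GCAUG-like site (assumed)
KD_SECONDARY = 20.0  # intermediate-affinity secondary motif (assumed)

for conc in (0.5, 50.0):  # "early" (low Rbfox) vs. "late" (high Rbfox)
    print(f"[Rbfox]={conc:5.1f}  primary={occupancy(conc, KD_PRIMARY):.2f}  "
          f"secondary={occupancy(conc, KD_SECONDARY):.2f}")
```

At the low concentration, primary sites are substantially occupied while secondary sites are nearly empty; at the high concentration, primary sites saturate and secondary sites come online — the second wave.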
Begg theorizes that the first wave of Rbfox proteins binds GCAUG sequences early in development, and she showed that they regulate genes involved in nerve growth, like cytoskeleton and membrane organization. The second wave appears to help neurons establish electrical and chemical signaling. In other cases, secondary motifs might help neurons specialize into different subtypes with different jobs.
John Conboy, a molecular biologist at Lawrence Berkeley National Laboratory and an expert in Rbfox binding, says the Burge lab’s two-wave model clearly shows how a single RBP can bind different RNA sequences — regulating splicing of distinct gene sets and influencing key processes during brain development. “This quantitative analysis of RNA-protein interactions, in a field that is often semi-quantitative at best, contributes fascinating new insights into the role of RNA splicing in cell type specification,” he says.
A binding spectrum
The researchers suspect that this two-wave model is not unique to Rbfox. “This is probably happening with many different RBPs that regulate development and other dynamic processes,” Burge says. “In the future, considering secondary motifs will help us to better understand developmental disorders and diseases, which can occur when RBPs are over- or under-expressed.”
Begg adds that secondary motifs should be incorporated into computer models that predict gene expression, in order to probe cellular behavior. “I think it’s very exciting that these more finely-tuned developmental processes, like neuronal differentiation, could be regulated by secondary motifs,” she says.
Both Begg and Burge agree it’s time to consider the entire spectrum of Rbfox binding, which is highly influenced by factors like protein concentration, binding strength, and timing. According to Begg, “Rbfox regulation is actually more complex than we sometimes give it credit for.”
This research was funded by the EMBO Long Term Fellowship and by a grant from the National Institutes of Health.
dorcasrempel · 5 years ago
Text
Helping companies prioritize their cybersecurity investments
One reason that cyberattacks have continued to grow in recent years is that we never actually learn all that much about how they happen. Companies fear that reporting attacks will tarnish their public image, and even those who do report them don’t share many details because they worry that their competitors will gain insight into their security practices. 
“It’s really a nice gift that we’ve given to cyber-criminals,” says Taylor Reynolds, technology policy director at MIT’s Internet Policy Research Initiative (IPRI). “In an ideal world, these attacks wouldn’t happen over and over again, because companies would be able to use data from attacks to develop quantitative measurements of the security risk so that we could prevent such incidents in the future.”
In an economy where most industries are tightening their belts, many organizations don’t know which types of attacks lead to the largest financial losses, and therefore how to best deploy scarce security resources. 
But a new platform from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aims to change that, quantifying companies’ security risk without requiring them to disclose sensitive data about their systems to the research team, much less their competitors.
Developed by Reynolds alongside economist Andrew Lo and cryptographer Vinod Vaikuntanathan, the platform helps companies do multiple things:
quantify how secure they are;
understand how their security compares to peers; and
evaluate whether they’re spending the right amount of money on security, and if and how they should change their particular security priorities.
The team received internal data from seven large companies that averaged 50,000 employees and annual revenues of $24 billion. By securely aggregating 50 different security incidents that took place at the companies, the researchers were able to analyze which specific defensive steps, had they been taken, could have prevented those incidents. (Their analysis used a well-established set of nearly 200 security actions referred to as the Center for Internet Security Sub-Controls.) 
“We were able to paint a really thorough picture in terms of which security failures were costing companies the most money,” says Reynolds, who co-authored a related paper with professors Lo and Vaikuntanathan, MIT graduate student Leo de Castro, Principal Research Scientist Daniel J. Weitzner, PhD student Fransisca Susan, and graduate student Nicolas Zhang. “If you’re a chief information security officer at one of these organizations, it can be an overwhelming task to try to defend absolutely everything. They need to know where they should direct their attention.”
The team calls their platform “SCRAM,” for “Secure Cyber Risk Aggregation and Measurement.” Among other findings, they determined that the following three security vulnerabilities had the largest total losses, each in excess of $1 million:
Failures in preventing malware attacks
Malware attacks, like the one last month that reportedly forced the wearables company Garmin to pay a $10 million ransom, are still a tried-and-true method of gaining control of valuable consumer data. Reynolds says that companies continue to struggle to prevent such attacks, relying on regularly backing up their data and reminding their employees not to click on suspicious emails. 
Communication over unauthorized ports 
Curiously, the team found that every firm in their study said they had, in fact, implemented the security measure of blocking access to unauthorized ports — the digital equivalent of companies locking all their doors. Even so, attacks that involved gaining access to these ports accounted for a large number of high-cost losses. 
“Losses can arise even when there are defenses that are well-developed and understood,” says Weitzner, who also serves as director of MIT IPRI. “It’s important to recognize that improving common existing defenses should not be neglected in favor of expanding into new areas of defense.”
Failures in log management for security incidents 
Every day companies amass detailed “logs” denoting activity within their systems. Senior security officers often turn to these logs after an attack to audit the incident and see what happened. Reynolds says that there are many ways that companies could be using machine learning and artificial intelligence more efficiently to help understand what’s happening — including, crucially, during or even before a security attack. 
Two other key areas that warrant further analysis include taking inventory of hardware so that only authorized devices are given access, as well as boundary defenses like firewalls and proxies that aim to control the flow of traffic through network borders. 
The team developed their data aggregation platform in conjunction with MIT cryptography experts, using an existing method called multi-party computation (MPC) that allows them to perform calculations on data without themselves being able to read or unlock it. After computing its anonymized findings, the SCRAM system then asks each contributing company to help it unlock only the answer using their own secret cryptographic key.
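The core MPC idea can be illustrated with additive secret sharing: each firm splits its sensitive number into random shares that sum back to the original, so only the aggregate across all firms is ever reconstructed. This toy is not SCRAM's actual protocol (a production system would use a vetted MPC framework and authenticated channels), and the loss figures are hypothetical:

```python
# Additive secret-sharing toy illustrating the MPC principle behind
# SCRAM-style aggregation. Hypothetical numbers; not the real protocol.

import random

MOD = 2**61 - 1  # work modulo a large prime so shares look uniformly random

def share(value: int, n_parties: int) -> list:
    """Split `value` into n random shares that sum to it modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

losses = [1_200_000, 450_000, 3_000_000]  # hypothetical per-firm losses
all_shares = [share(v, 3) for v in losses]

# Each computing party sums one share from every firm; combining the
# partial sums reveals only the total, never any individual figure.
aggregate = sum(sum(col) for col in zip(*all_shares)) % MOD
print(aggregate)
```

No single party's shares reveal anything about a firm's loss; only when the partial sums are combined does the aggregate (and nothing more) emerge, which is the "locked data" property Reynolds describes.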
“The power of this platform is that it allows firms to contribute locked data that would otherwise be too sensitive or risky to share with a third party,” says Reynolds.
As a next step, the researchers plan to expand the pool of participating companies, with representation from a range of different sectors that include electricity, finance, and biotech. Reynolds says that if the team can gather data from upwards of 70 or 80 companies, they’ll be able to do something unprecedented: put an actual dollar figure on the risk of particular defenses failing.
The project was a cross-campus effort involving affiliates at IPRI, CSAIL’s Theory of Computation group, and the MIT Sloan School of Management. It was funded by the Hewlett Foundation and CSAIL’s Financial Technology industry initiative (“FinTech@CSAIL”). 
dorcasrempel · 5 years ago
Text
A new way to make bacteria more sensitive to antibiotics
Researchers from the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, have discovered a new way to reverse antibiotic resistance in some bacteria using hydrogen sulfide (H2S).
Growing antimicrobial resistance is a major threat to the world, with a projected 10 million deaths each year by 2050 if no action is taken. The World Health Organization also warns that by 2030, drug-resistant diseases could force up to 24 million people into extreme poverty and cause catastrophic damage to the world economy.
In most bacteria studied, the production of endogenous H2S has been shown to cause antibiotic tolerance, so H2S has been speculated as a universal defense mechanism in bacteria against antibiotics.
A team at SMART’s Antimicrobial Resistance (AMR) Interdisciplinary Research Group (IRG) tested that theory by adding H2S-releasing compounds to Acinetobacter baumannii — a pathogenic bacterium that does not produce H2S on its own. They found that rather than causing antibiotic tolerance, exogenous H2S sensitized the A. baumannii to multiple antibiotic classes. It was even able to reverse acquired resistance in A. baumannii to gentamicin, a very common antibiotic used to treat several types of infections.
The results of their study, supported by the Singapore National Medical Research Council’s Young Investigator Grant, are discussed in a paper titled “Hydrogen sulfide sensitizes Acinetobacter baumannii to killing by antibiotics,” published in the prestigious journal Frontiers in Microbiology.
“Until now, hydrogen sulfide was regarded as a universal bacterial defense against antibiotics,” says Wilfried Moreira, the corresponding author of the paper and principal investigator at SMART’s AMR IRG. “This is a very exciting discovery because we are the first to show that H2S can, in fact, improve sensitivity to antibiotics, and even reverse antibiotic resistance in bacteria that do not naturally produce the agent.”
While the study focused on the effects of exogenous H2S on A. baumannii, the scientists believe the results will be mimicked in all bacteria that do not naturally produce H2S.
“Acinetobacter baumannii is a critically important antibiotic-resistant pathogen that poses a huge threat to human health,” says Say Yong Ng, lead author of the paper and laboratory technologist at SMART AMR. “Our research has found a way to make the deadly bacteria and others like it more sensitive to antibiotics, and can provide a breakthrough in treating many drug-resistant infections.”
The team plans to conduct further studies to validate these exciting findings in pre-clinical models of infection, as well as extending them to other bacteria that do not produce H2S.
SMART was established by MIT in partnership with the National Research Foundation of Singapore (NRF) in 2007. SMART is the first entity in the Campus for Research Excellence and Technological Enterprise (CREATE) developed by NRF. SMART serves as an intellectual and innovation hub for research interactions between MIT and Singapore. Cutting-edge research projects in areas of interest to both Singapore and MIT are undertaken at SMART. SMART currently comprises an Innovation Center and five IRGs: AMR, Critical Analytics for Manufacturing Personalized-Medicine, Disruptive and Sustainable Technologies for Agricultural Precision, Future Urban Mobility, and Low Energy Electronic Systems. SMART research is funded by the NRF under the CREATE program.
The AMR IRG is a translational research and entrepreneurship program that tackles the growing threat of antimicrobial resistance. By leveraging talent and convergent technologies across Singapore and MIT, they tackle AMR head-on by developing multiple innovative and disruptive approaches to identify, respond to, and treat drug-resistant microbial infections.
dorcasrempel · 5 years ago
Text
Scientists discover new rules about “runaway” transcription
On the evolutionary tree, humans diverged from yeast roughly 1 billion years ago. By comparison, two seemingly similar species of bacteria, Escherichia coli and Bacillus subtilis, have been evolving apart for roughly twice as long. In other words, walking, talking bipeds are closer on the tree of life to single-celled fungi than these two bacteria are to one another. In fact, it’s becoming increasingly clear that what is true of one bacterial type may not be true of another — even when it comes down to life’s most basic biological pathways.
E. coli has served as a model organism in scientific research for over a century, and helped researchers parse many fundamental processes, including gene expression. In these bacteria, as one molecular machine, the RNA polymerase, moves along the DNA transcribing it into RNA, it is followed in close pursuit by a second molecular machine, the ribosome, which translates the RNA into proteins. This “coupled” transcription-translation helps monitor and tune RNA output, and is considered a hallmark of bacteria.
However, an interdisciplinary team of biologists and physicists recently showed that the B. subtilis bacterium employs a different set of rules. Rather than working in tandem with the ribosome, the polymerase in B. subtilis speeds ahead. This system of “runaway” transcription creates alternative rules for RNA quality control, and provides insights into the sheer diversity of bacterial species.
“Generations of researchers, including myself, were taught that coupled transcription-translation is fundamental to bacterial gene expression,” says Gene-Wei Li, an associate professor of biology and senior author of the study. “But our very precise, quantitative measurements have overturned that long-held view, and this study could be just the tip of the iceberg.”
Grace Johnson, a graduate student in the Department of Biology, and Jean-Benoît Lalanne, a graduate student in the Department of Physics, are the lead authors on the paper, which appeared in Nature on Aug. 26.
A curious clue
In 2018, Lalanne developed an experimental technique to measure the boundaries of RNA transcripts. When DNA is transcribed into RNA, the resulting transcripts are generally longer than the DNA coding sequence because they also have to include an extra bit at the end to signal the polymerase to stop. In B. subtilis, Lalanne noticed there simply wasn’t enough space between the ends of the coding sequences and the ends of the RNA transcripts — the extra code was too short for both the polymerase and the ribosome to fit at the same time. In this bacterium, coupled transcription-translation didn’t seem possible.
“It was a pretty weird observation,” Lalanne recalls. “It just didn’t square up with the accepted dogma.”
To delve further into these puzzling observations, Johnson measured the speeds of the RNA polymerase and ribosome in B. subtilis. She was surprised to find that they were moving at very different rates: The polymerase was going roughly twice as fast as the ribosome.
During coupled transcription-translation in E. coli, the ribosome is so closely associated with the RNA polymerase that it can control when transcription terminates. If the RNA encodes a “premature” signal for the polymerase to stop transcribing, the nearby ribosome can mask it and spur the polymerase on. However, if something goes awry and the ribosome is halted too far behind the polymerase, a protein called Rho can intervene to terminate transcription at these premature sites, halting the production of these presumably non-functional transcripts.
However, in B. subtilis, the ribosome is always too far behind the polymerase to exert its masking effect. Instead, Johnson found that Rho recognizes signals encoded in the RNA itself. This allows Rho to prevent production of select RNAs while ensuring it doesn’t suppress all RNAs. However, these specific signals also mean Rho likely has a more limited role in B. subtilis than it does in E. coli.
A family trait
To gauge how common runaway transcription is, Lalanne created algorithms that sifted through genomes from over 1,000 bacterial species to identify the ends of transcripts. In many cases, there was not enough space at the end of the transcripts for both the RNA polymerase and the ribosome to fit, indicating that more than 200 additional bacterial species also rely on runaway transcription.
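The spacing test behind this genome-wide scan can be caricatured in a few lines: if the region between the end of a coding sequence and the measured transcript end is shorter than the combined footprints of the polymerase and a trailing ribosome, coupled transcription-translation is geometrically impossible. The footprint sizes below are rough, literature-style assumptions, not the study's actual parameters:

```python
# Caricature of the transcript-end spacing test. Footprint sizes are
# rough assumptions for illustration, not the study's parameters.

RNAP_FOOTPRINT_NT = 35      # assumed RNA polymerase footprint, nucleotides
RIBOSOME_FOOTPRINT_NT = 30  # assumed ribosome footprint, nucleotides

def coupling_possible(end_region_nt: int) -> bool:
    """Can a polymerase and a trailing ribosome both fit in the region
    between the coding sequence end and the transcript end?"""
    return end_region_nt >= RNAP_FOOTPRINT_NT + RIBOSOME_FOOTPRINT_NT

print(coupling_possible(90))  # roomy, E. coli-like spacing -> True
print(coupling_possible(40))  # short, B. subtilis-like spacing -> False
```

Applied across annotated genomes, a systematic shortfall in this spacing is the signature that flagged the 200-plus candidate runaway-transcription species.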
“It was striking to see just how widespread this phenomenon is,” Li says. “It raises the question: How much do we really know about these model organisms we’ve been studying for so many years?”
Carol Gross, a professor in the Department of Microbiology and Immunology at the University of California at San Francisco who was not involved with the study, refers to the work as a “tour de force.”
“Gene-Wei Li and colleagues show transcription-translation coupling, thought to be a foundational feature of bacterial gene regulation, is not universal,” she says. “Instead, runaway transcription leads to a host of alternative regulatory strategies, thereby opening a new frontier in our study of bacterial strategies to thrive in varied environments.”
As researchers widen their experimental radius to include more types of bacteria, they are learning more about the basic biological processes underlying these microorganisms — with implications for those that take up residence in the human body, from helpful gut microbes to noxious pathogens.
“We’re beginning to realize that bacteria can have distinct ways of regulating gene expression and responding to environmental stress,” Johnson says. “It just shows how much interesting biology is left to uncover as we study increasingly diverse bacteria.”
This research was supported by the National Institutes of Health, Pew Biomedical Scholars Program, Sloan Research Fellowship, Searle Scholars Program, Smith Family Award for Excellence in Biomedical Research, National Science Foundation, Natural Sciences and Engineering Research Council of Canada, and Howard Hughes Medical Institute.
dorcasrempel · 5 years ago
Text
A “bang” in LIGO and Virgo detectors signals most massive gravitational-wave source yet
For all its vast emptiness, the universe is humming with activity in the form of gravitational waves. Produced by extreme astrophysical phenomena, these reverberations ripple forth and shake the fabric of space-time, like the clang of a cosmic bell.
Now researchers have detected a signal from what may be the most massive black hole merger yet observed in gravitational waves. The product of the merger is the first clear detection of an “intermediate-mass” black hole, with a mass between 100 and 1,000 times that of the sun.
They detected the signal, which they have labeled GW190521, on May 21, 2019, with the National Science Foundation’s Laser Interferometer Gravitational-wave Observatory (LIGO), a pair of identical, 4-kilometer-long interferometers in the United States; and Virgo, a 3-kilometer-long detector in Italy.
The signal, resembling about four short wiggles, is extremely brief in duration, lasting less than one-tenth of a second. From what the researchers can tell, GW190521 was generated by a source that is roughly 5 gigaparsecs away, when the universe was about half its age, making it one of the most distant gravitational-wave sources detected so far.
As for what produced this signal, based on a powerful suite of state-of-the-art computational and modeling tools, scientists think that GW190521 was most likely generated by a binary black hole merger with unusual properties.
Almost every confirmed gravitational-wave signal to date has been from a binary merger, either between two black holes or two neutron stars. This newest merger appears to be the most massive yet, involving two inspiraling black holes with masses about 85 and 66 times the mass of the sun.
The LIGO-Virgo team has also measured each black hole’s spin and discovered that as the black holes were circling ever closer together, they could have been spinning about their own axes, at angles that were out of alignment with the axis of their orbit. The black holes’ misaligned spins likely caused their orbits to wobble, or “precess,” as the two Goliaths spiraled toward each other.
The new signal likely represents the instant that the two black holes merged. The merger created an even more massive black hole, of about 142 solar masses, and released an enormous amount of energy, equivalent to around 8 solar masses, spread across the universe in the form of gravitational waves.
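The mass budget above can be checked with back-of-the-envelope arithmetic: the radiated energy is simply the mass deficit between the progenitors and the remnant, converted via E = Δm c². Using the article's rounded masses gives a deficit of ~9 solar masses, consistent with the quoted "around 8" once the sizable measurement uncertainties and rounding are accounted for.

```python
# Back-of-the-envelope mass/energy budget for GW190521, using the
# rounded masses quoted in the article (published values carry
# large uncertainties, so the deficit lands near the quoted ~8).
M_SUN_KG = 1.989e30    # solar mass in kg
C = 2.998e8            # speed of light in m/s

m1, m2 = 85.0, 66.0    # progenitor black hole masses (solar masses)
m_final = 142.0        # remnant black hole mass (solar masses)

dm = m1 + m2 - m_final               # mass radiated away, ~9 M_sun
energy_joules = dm * M_SUN_KG * C**2  # E = Δm c², ~1.6e48 J
```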
“This doesn’t look much like a chirp, which is what we typically detect,” says Virgo member Nelson Christensen, a researcher at the French National Centre for Scientific Research (CNRS), comparing the signal to LIGO’s first detection of gravitational waves in 2015. “This is more like something that goes ‘bang,’ and it’s the most massive signal LIGO and Virgo have seen.”
The international team of scientists, who make up the LIGO Scientific Collaboration (LSC) and the Virgo Collaboration, have reported their findings in two papers published today. One, appearing in Physical Review Letters, details the discovery, and the other, in The Astrophysical Journal Letters, discusses the signal’s physical properties and astrophysical implications.
“LIGO once again surprises us not just with the detection of black holes in sizes that are difficult to explain, but doing it using techniques that were not designed specifically for stellar mergers,” says Pedro Marronetti, program director for gravitational physics at the National Science Foundation. “This is of tremendous importance since it showcases the instrument’s ability to detect signals from completely unforeseen astrophysical events. LIGO shows that it can also observe the unexpected.”
In the mass gap
The uniquely large masses of the two inspiraling black holes, as well as the final black hole, raise a slew of questions regarding their formation.
All of the black holes observed to date fit within either of two categories: stellar-mass black holes, which measure from a few solar masses up to tens of solar masses and are thought to form when massive stars die; or supermassive black holes, such as the one at the center of the Milky Way galaxy, which are hundreds of thousands to billions of times more massive than our sun.
However, the final 142-solar-mass black hole produced by the GW190521 merger lies within an intermediate mass range between stellar-mass and supermassive black holes — the first of its kind ever detected.
The two progenitor black holes that produced the final black hole also seem to be unique in their size. They’re so massive that scientists suspect one or both of them may not have formed from a collapsing star, as most stellar-mass black holes do.
According to the physics of stellar evolution, outward pressure from the photons and gas in a star’s core supports it against the force of gravity pushing inward, so that the star is stable, like the sun. After the core of a massive star has fused nuclei as heavy as iron, it can no longer produce enough pressure to support the outer layers. When this outward pressure drops below gravity, the star collapses under its own weight in an explosion called a core-collapse supernova, which can leave behind a black hole.
This process can explain how stars as massive as 130 solar masses can produce black holes that are up to 65 solar masses. But for heavier stars, a phenomenon known as “pair instability” is thought to kick in. When the core’s photons become extremely energetic, they can morph into an electron and antielectron pair. These pairs generate less pressure than photons, causing the star to become unstable against gravitational collapse, and the resulting explosion is strong enough to leave nothing behind. Even more massive stars, above 200 solar masses, would eventually collapse directly into a black hole of at least 120 solar masses. A collapsing star, then, should not be able to produce a black hole between approximately 65 and 120 solar masses — a range that is known as the “pair instability mass gap.”
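The single-star formation channels described above can be summarized as a simple lookup over remnant mass. This is only an illustrative sketch using the article's approximate 65 and 120 solar-mass boundaries, which are not sharp theoretical limits.

```python
# Illustrative classifier for the single-star black hole formation
# channels described above. The 65 and 120 solar-mass boundaries are
# the approximate values quoted in the article.
def remnant_regime(mass_msun: float) -> str:
    if mass_msun < 65:
        return "stellar collapse"      # core-collapse supernova remnant
    elif mass_msun <= 120:
        return "pair-instability gap"  # no known single-star origin
    else:
        return "direct collapse"       # from a star above ~200 M_sun
```

By this bookkeeping, the 85-solar-mass progenitor of GW190521 falls squarely in the gap, which is what makes its origin puzzling.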
But now, the heavier of the two black holes that produced the GW190521 signal, at 85 solar masses, is the first so far detected within the pair instability mass gap.
“The fact that we’re seeing a black hole in this mass gap will make a lot of astrophysicists scratch their heads and try to figure out how these black holes were made,” says Christensen, who is the director of the Artemis Laboratory at the Nice Observatory in France.
One possibility, which the researchers consider in their second paper, is of a hierarchical merger, in which the two progenitor black holes themselves may have formed from the merging of two smaller black holes, before migrating together and eventually merging.
“This event opens more questions than it provides answers,” says LIGO member Alan Weinstein, professor of physics at Caltech. “From the perspective of discovery and physics, it’s a very exciting thing.”
“Something unexpected”
There are many remaining questions regarding GW190521.
As LIGO and Virgo detectors listen for gravitational waves passing through Earth, automated searches comb through the incoming data for interesting signals. These searches can use two different methods: algorithms that pick out specific wave patterns in the data that may have been produced by compact binary systems; and more general “burst” searches, which essentially look for anything out of the ordinary.
LIGO member Salvatore Vitale, assistant professor of physics at MIT, likens compact binary searches to “passing a comb through data, that will catch things in a certain spacing,” in contrast to burst searches that are more of a “catch-all” approach.
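The two search philosophies can be caricatured in a few lines: a matched-filter-style statistic correlates the data against a known waveform template (the "comb"), while a burst-style statistic just looks for excess power in short windows without assuming any waveform. This toy sketch is not the actual LIGO-Virgo pipelines; the signal shape, noise level, and thresholds are all invented for illustration.

```python
# Toy contrast between a template-based (matched-filter-style) search
# and a template-free (burst-style) excess-power search.
# All parameters here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)

# A made-up "chirp" template buried in Gaussian noise.
template = np.sin(2 * np.pi * 0.02 * t * (1 + t / n)) * np.exp(-((t - 512) / 200) ** 2)
data = template + 0.3 * rng.standard_normal(n)

# Matched-filter style: correlate data against the known template.
mf_snr = np.dot(data, template) / np.sqrt(np.dot(template, template))

# Burst style: excess power in short windows, no template assumed.
win = 64
power = np.array([np.sum(data[i:i + win] ** 2) for i in range(0, n - win, win)])
burst_stat = power.max() / np.median(power)
```

The matched filter recovers a large statistic only when the template matches the buried signal, whereas the burst statistic flags any window whose power stands out from the noise, which is why it can catch short, unmodeled signals like GW190521.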
In the case of GW190521, it was a burst search that picked up the signal slightly more clearly, opening the very small chance that the gravitational waves arose from something other than a binary merger.
“The bar for asserting we’ve discovered something new is very high,” Weinstein says. “So we typically apply Occam’s razor: The simpler solution is the better one, which in this case is a binary black hole.”
But what if something entirely new produced these gravitational waves? It’s a tantalizing prospect, and in their paper the scientists briefly consider other sources in the universe that might have produced the signal they detected. For instance, perhaps the gravitational waves were emitted by a collapsing star in our galaxy. The signal could also be from a cosmic string produced just after the universe inflated in its earliest moments — although neither of these exotic possibilities matches the data as well as a binary merger.
“Since we first turned on LIGO, everything we’ve observed with confidence has been a collision of black holes or neutron stars,” Weinstein says. “This is the one event where our analysis allows the possibility that this event is not such a collision. Although this event is consistent with being from an exceptionally massive binary black hole merger, and alternative explanations are disfavored, it is pushing the boundaries of our confidence. And that potentially makes it extremely exciting. Because we have all been hoping for something new, something unexpected, that could challenge what we’ve learned already. This event has the potential for doing that.”
This research was funded by the U.S. National Science Foundation.
A “bang” in LIGO and Virgo detectors signals most massive gravitational-wave source yet syndicated from https://osmowaterfilters.blogspot.com/