The sandbox tree, also known as the “Dynamite Tree,” is covered in spikes, full of poison, and grows exploding fruit. The fruits look like little pumpkins, but when they fully mature they explode with a loud bang, flinging their seeds at up to 150 miles per hour.
A blueprint for future blood-nerve barrier and peripheral nerve disease research
Human peripheral nerves — all the nerves outside of the central nervous system — are protected by the blood-nerve barrier. This is a tight covering of endothelial cells that maintains the microenvironment within the nerves by restricting the amounts or types of water, ions, solutes and nutrients that can reach the axons, or electric cables within the nerves, from the blood circulation system.
This allows the nerves to function.
“I describe these endothelial cells as a gate or door that controls what goes into and out of the nerve; it is the gateway between the systemic blood circulation and the peripheral nerves,” said Eroboghene Ubogu, M.D., professor of neurology at the University of Alabama at Birmingham.
Little is known about the components that make up this door, and without that knowledge, neurologists like Ubogu are hard-pressed to develop specific treatments for the 20 million to 30 million U.S. patients, and hundreds of millions worldwide, with peripheral nerve disease. “If we don’t understand what makes up this door that allows materials to go in or out, and how the door really works, how can we come up with specific treatments when nerves do not work?” Ubogu said.
In research published in Scientific Reports, Ubogu and UAB colleagues — for the first time — describe the transcriptome of these specialized cells called endoneurial endothelial cells, finding 12,881 RNA transcripts that define the normal human blood-nerve barrier. These messenger RNAs are the templates for a cell’s building blocks, the proteins that provide structure and function to the living cell.
Previous research on the blood-nerve barrier tended to look at just one or a few cell components at a time. The transcriptome reveals every component active in normal endoneurial endothelial cells that form the human blood-nerve barrier.
“It is as if previously we worked before with a little flashlight,” said Ubogu, who has studied the blood-nerve barrier since 2007. “This is a huge, revealing floodlight. For example, I probably knew no more than six components of the tight junctions present at the blood-nerve barrier. With this paper, we came up with 133 components involved in tight and adherens junctions. This is like a dream come true.”
Knowledge of normal RNA and protein expression in the endoneurial endothelial cells provides an essential blueprint or reference guide. This guide will help physicians and researchers understand how peripheral nerves are kept healthy, and help clinicians and medicinal chemists figure out which transporters are active in endoneurial endothelial cells so they can design drugs that actually reach the nerves, or that are kept out of them to avoid toxic damage. The guide can also direct translational research in peripheral neuropathies by showing how components may be disrupted or altered during disease or injury, and help in developing better treatments for chronic pain.
Ubogu’s study started from normal frozen human sural nerves preserved in the Shin J. Oh Muscle and Nerve Histopathology Laboratory at UAB. The sural nerve, found in the outer calf region of the leg, is commonly biopsied as part of certain peripheral neuropathy workups.
The UAB team isolated RNA transcripts from the blood-nerve barrier-forming microvessels directly in the frozen sural nerve tissue, using a specialized technique called laser-capture microdissection. At least 200 microvessels were collected from two female and two male adults who had normal nerve biopsies. The team also isolated RNA from purified endoneurial endothelial cells previously obtained from an adult woman and grown in tissue culture, sampling the cells at passage three (early) and passage eight (late). The early-versus-late comparison was to make sure the RNA in these cells did not change as a result of tissue culture.
RNA from the endoneurial microvessels and endothelial cells was sequenced. For the microvessels from the biopsies, called the in situ blood-nerve barrier, a transcript had to be detected in at least three of the four donors. For the endoneurial endothelial cells from tissue culture, called the in vitro blood-nerve barrier, a transcript had to be detected at both passages. The researchers found 12,881 RNA transcripts common to the in situ and in vitro blood-nerve barrier. The tissue-cultured endoneurial endothelial cells acted as a control to correct for possible contamination of the in situ blood-nerve barrier by cells such as pericytes and leukocytes that accompany microvessels during laser-capture microdissection.
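In code, the consensus rule described above reduces to a set-intersection filter. Below is a minimal Python sketch of that logic; the gene symbols and per-sample sets are invented placeholders for illustration, not data from the paper.

```python
# Toy placeholder data: transcript IDs detected in each RNA-seq source.
donors = [  # laser-captured microvessels from four normal sural nerve biopsies
    {"CLDN5", "OCLN", "TJP1", "SLC2A1"},
    {"CLDN5", "OCLN", "TJP1"},
    {"CLDN5", "OCLN", "SLC2A1", "ABCB1"},
    {"CLDN5", "TJP1", "SLC2A1"},
]
passages = [  # cultured endoneurial endothelial cells at an early and a late passage
    {"CLDN5", "OCLN", "TJP1", "SLC2A1"},
    {"CLDN5", "OCLN", "SLC2A1"},
]

def consensus(sources, min_agreement):
    """Keep transcripts detected in at least `min_agreement` of the sources."""
    counts = {}
    for source in sources:
        for transcript in source:
            counts[transcript] = counts.get(transcript, 0) + 1
    return {t for t, n in counts.items() if n >= min_agreement}

in_situ = consensus(donors, min_agreement=3)     # detected in >= 3 of 4 donors
in_vitro = consensus(passages, min_agreement=2)  # detected at both passages
barrier_transcriptome = in_situ & in_vitro       # the study reports 12,881 such transcripts
print(sorted(barrier_transcriptome))
```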
The transcriptome was validated two ways. First, the transcriptome was found to include previously identified vascular endothelial markers, enzymes, scavenger receptors, mitogen receptors, nutrient transporters, cellular adhesion molecules, chemokines, adherens and tight junction, and junction associated molecules. Second, the researchers showed expression, as detected by indirect fluorescent immunohistochemistry, of specific proteins that were identified by this study in the sural nerve endoneurial microvessels of another adult woman with a normal biopsy. This included markers that had and had not been previously identified in these endothelial cells — 31 selected cell membrane, chemokine receptor, cytoskeletal, junctional complex and secreted proteins.
Ubogu expects a host of translational work to build upon this research.
Knowledge of the components and regulators of small molecule and macromolecular transport unique to the human blood-nerve barrier can aid development of drugs that can use the array of influx transporters, channels and receptor-mediated transcytosis components to reach the nerves. This is important in developing effective drugs for peripheral neuropathies and treating chronic neuropathic pain, a condition that affects 1 percent to 10 percent of people worldwide. This is crucially important as the world deals with the opioid crisis and seeks better treatments, with fewer side effects, for chronic pain.
Knowing the molecules relevant for growth of blood vessels and formation of intercellular junction complexes could guide therapeutic strategies to repair peripheral nerves after traumatic injury. This knowledge could also help restore and preserve peripheral nerve function in patients with peripheral neuropathies from other causes, such as diabetes and cancer.
Ubogu says the study provides essential information on the possible determinants of leukocyte trafficking during normal immunosurveillance and the biological networks that may be involved in peripheral nerve innate and adaptive immune responses. This could improve our understanding of how the human blood-nerve barrier responds to injury, viral infections or microbial entry from the bloodstream into peripheral nerves.
The work could also help us better understand the pathogenesis and targeted treatment of peripheral nerve-restricted autoimmune disorders such as Guillain-Barré syndrome and chronic inflammatory demyelinating polyradiculoneuropathy, two conditions that can lead to loss of productivity and economic independence, chronic pain or disability.
“The unique resources within the UAB neuromuscular division and the collaboration with the UAB Heflin Center for Genomic Science were essential to this project to figure out the human blood-nerve barrier transcriptome as quickly and comprehensively as we did,” Ubogu said.
THE SPACE CORPS IS COMING
Can a star war be far behind?
Lawmakers within the House Armed Services Committee have introduced legislation that would require the U.S. Air Force to establish a “Space Corps” as a distinct branch of the military by January 1, 2019, according to Space News. The proposed legislation would create a Space Corps to serve “as a separate military service within the Department of the Air Force and under the civilian leadership of the Secretary of the Air Force.”
The Seabin is a low-maintenance and fairly cheap way to fight marine pollution.
“One of the goals is to make the Seabin from our own plastics to create another Seabin to capture more, it’s a domino effect”
You can help the project by contributing on Indiegogo.
50 WAYS TO BECOME A BETTER ENTREPRENEUR
Here are 50 things to keep in mind if you want to be a better entrepreneur:
Don’t let emotions cloud your decisions.
Accept criticism, no matter who gives it to you.
Never stop networking.
Learn from your own mistakes.
Learn from other people’s mistakes.
Around every corner lies an opportunity for you to sell something.
Don’t get too greedy… pigs get fat and hogs get slaughtered.
Try not to mix your family life with your business life.
No matter how successful you are, you shouldn’t stop learning.
Spending money on good lawyers and accountants will save you more money in the long run.
Don’t pick a stupid company name and if you do, don’t change it later on.
Hiring employees won’t solve most of your problems.
Be agile because slow and steady won’t win the race.
Being agile isn’t enough; you have to be scrappy too.
Having a good business partner will be a key factor in your success.
Don’t be afraid of the unknown.
It is easier to save money than it is to make it.
You don’t always have to innovate; there is nothing wrong with copying.
Have a marketing plan.
Don’t underestimate your competition; you can’t always know what they are doing.
Watching movies like Boiler Room will teach you how to sell.
If you don’t have a business mentor, you better get one.
Your income will be the average of your 5 closest friends, so pick them wisely.
Diversifying is a good way to play things safe.
It doesn’t matter what you want, it only matters what your customers want.
When others are fearful, you should be greedy. And when they are greedy you should be fearful.
You don’t always have to pay for advice. You’ll be amazed at the free advice you can pick up from the web.
The best chance you have of becoming rich is through your willingness to work hard.
Even the most idiotic business idea can make money.
Sex sells and it always will.
An easy way to make more money is to upsell to your current customer base.
Base your business decisions around metrics.
There is no such thing as a safe bet.
You don’t have to start a business to be successful.
Raising venture capital is harder than getting struck by lightning.
Staying under the radar isn’t always a bad thing. Being out in the open is a great way to attract more competitors.
Learn to be a team player.
If you ever get screwed over, think twice before you burn the bridge.
Learn to manage both your personal and business money.
Live in a location filled with entrepreneurs.
If you don’t take any risks, there will not be any rewards.
Don’t let anything stand in your way.
Sometimes you have to wait for good deals to come to you.
The smartest route isn’t always the easiest route.
Being too aggressive can backfire.
With networking, it isn’t about whom you know, it is about whom your network knows.
It’s never a bad thing to know too many rich people. Whether you like them or not, they can always come in handy. So make sure you always play nice with them.
Use your email signature to promote your business.
Don’t be afraid of social media. It is a great channel for customer acquisition.
You’ll learn more from starting your own business than from going to business school.
Hopefully these suggestions will help you improve your entrepreneurial skills. If you have any other suggestions, feel free to leave a comment.
Netherlands-Based Architect Designs Giant Floating Sea Wall That Harvests Wave Power
Oceans hold an abundance of renewable energy which, for lack of suitable technologies, sadly goes to waste. Netherlands-based innovator Koen Olthuis and his firm Waterstudio.NL have come up with an innovative way of harvesting untapped wave power, in the form of a giant floating breakwater with towering columns similar to those of the Parthenon.
Aptly called the Parthenon, this colossal sea wall reduces the intensity of the waves crashing into the harbor while also gathering the energy produced in the process. It basically serves as a permeable, floating breakwater that harvests wave power and converts it into electrical energy. The renderings show the structure deployed on the Hudson River in the United States. Speaking about the plan, the firm’s spokesperson said:
The floating breakwater lives with the force of the river instead of fighting it… In a harbour on the Hudson river in New York the wave conditions are so strong that a sea wall must protect its boats. The strong current in the river is constantly attacking it and water is pushing itself against and through the fixed wall, which results in more corrosion of the sea wall every year.
The sea wall is made up of several massive columns containing 3-foot cylinders that can rotate both clockwise and anticlockwise at low speeds. The incoming waves spin the cylinders, generating energy that is then captured and stored in a concrete box housed in the floating platform. The cylinders are filled with water for greater structural stability and flexibility.
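The post gives no performance figures, but the scale of the resource can be roughed out with the textbook deep-water wave energy flux, P = ρg²H²T / (32π) per metre of wave crest. The wave height and period in this Python sketch are assumed values for a sheltered harbour reach, not numbers from Waterstudio.NL.

```python
import math

RHO = 1025.0  # density of sea water, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def wave_power_per_metre(height_m, period_s):
    """Energy flux of a regular deep-water wave, in watts per metre of crest."""
    return RHO * G**2 * height_m**2 * period_s / (32 * math.pi)

# Assumed, illustrative conditions: 0.5 m waves with a 3 s period.
p = wave_power_per_metre(height_m=0.5, period_s=3.0)
print(f"~{p / 1000:.2f} kW per metre of breakwater")  # roughly 0.7 kW/m
```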
The sea wall needs to be anchored to the river or seabed. According to the developers, the structure can also act as a green space, for planting trees and shrubs. It can even double as a boulevard. The firm added:
The Parthenon blue energy sea wall resembles the column structure of the famous ancient temple in Greece, but divers see it as a part of the sunken city of Atlantis.
To learn more about Waterstudio.NL and its various projects, visit the firm’s official website.
Via: Inhabitat
European Space Agency plans to build a lunar base by 2030s
The European Space Agency (ESA) considers the Moon the next destination for humans venturing beyond low Earth orbit and an integral element of the roadmap towards human missions to Mars.
Bases on the Moon built by 3D printers could be a reality in the next decade or so, concluded “Moon 2020-2030,” a recent symposium of 200 scientists, engineers, and industry experts.
The plan outlined by the ESA is that, starting from the early 2020s, robots will be sent to the Moon to begin constructing various facilities, followed a few years later by the first inhabitants.
Setting up a lunar base could be made much simpler by using a 3D printer to build it from local materials. Industrial partners including renowned architects Foster + Partners have joined with ESA to test the feasibility of 3D printing using lunar soil.
“Terrestrial 3D printing technology has produced entire structures. Our industrial team investigated if it could similarly be employed to build a lunar habitat,” said Laurent Pambaguian, who heads the project for ESA.
“3D printing offers a potential means of facilitating lunar settlement with reduced logistics from Earth,” added Scott Hovland of ESA’s human spaceflight team.
Illustration credit: ESA/Foster + Partners
Chomsky was right: We do have a "grammar" in our head
A team of neuroscientists has found new support for MIT linguist Noam Chomsky’s decades-old theory that we possess an “internal grammar” that allows us to comprehend even nonsensical phrases.
“One of the foundational elements of Chomsky’s work is that we have a grammar in our head, which underlies our processing of language,” explains David Poeppel, the study’s senior researcher and a professor in New York University’s Department of Psychology. “Our neurophysiological findings support this theory: we make sense of strings of words because our brains combine words into constituents in a hierarchical manner—a process that reflects an ‘internal grammar’ mechanism.”
The research, which appears in the latest issue of the journal Nature Neuroscience, builds on Chomsky’s 1957 work, Syntactic Structures. That book posited that we can recognize a phrase such as “Colorless green ideas sleep furiously” as both nonsensical and grammatically correct because we have an abstract knowledge base that allows us to make such distinctions even though the statistical relations between words are non-existent.
Neuroscientists and psychologists predominantly reject this viewpoint, contending that our comprehension does not result from an internal grammar; rather, it is based on both statistical calculations between words and sound cues to structure. That is, we know from experience how sentences should be properly constructed—a reservoir of information we employ upon hearing words and phrases. Many linguists, in contrast, argue that hierarchical structure building is a central feature of language processing.
In an effort to illuminate this debate, the researchers explored whether and how linguistic units are represented in the brain during speech comprehension.
To do so, Poeppel, who is also director of the Max Planck Institute for Empirical Aesthetics in Frankfurt, and his colleagues conducted a series of experiments using magnetoencephalography (MEG), which allows measurements of the tiny magnetic fields generated by brain activity, and electrocorticography (ECoG), a clinical technique used to measure brain activity in patients being monitored for neurosurgery.
The study’s subjects listened to sentences in both English and Mandarin Chinese in which the hierarchical structure between words, phrases, and sentences was dissociated from intonational speech cues—the rise and fall of the voice—as well as statistical word cues. The sentences were presented in an isochronous fashion—identical timing between words—and participants listened to predictable sentences (e.g., “New York never sleeps,” “Coffee keeps me awake”), grammatically correct but less predictable sentences (e.g., “Pink toys hurt girls”), word lists (“eggs jelly pink awake”), and various other manipulated sequences.
The design allowed the researchers to isolate how the brain concurrently tracks different levels of linguistic abstraction—sequences of words (“furiously green sleep colorless”), phrases (“sleep furiously” “green ideas”), or sentences (“Colorless green ideas sleep furiously”)—while removing intonational speech cues and statistical word information, which many say are necessary in building sentences.
Their results showed that the subjects’ brains distinctly tracked three components of the phrases they heard, reflecting a hierarchy in our neural processing of linguistic structures: words, phrases, and then sentences—at the same time.
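This concurrent tracking is, in effect, a frequency-tagging result: with isochronous presentation, words, phrases, and sentences recur at different fixed rates, so hierarchical structure building shows up as separate spectral peaks in the neural recording. The Python sketch below simulates that logic with a synthetic signal; the 4 Hz, 2 Hz, and 1 Hz rates and the response amplitudes are illustrative assumptions, not the study’s data.

```python
import numpy as np

fs = 200.0                    # sampling rate, Hz
t = np.arange(0, 40, 1 / fs)  # 40 s of simulated recording

# Assumed presentation rates: 4 words/s, 2 phrases/s, 1 sentence/s.
signal = (1.0 * np.sin(2 * np.pi * 4 * t)      # word-rate response
          + 0.6 * np.sin(2 * np.pi * 2 * t)    # phrase-rate response
          + 0.4 * np.sin(2 * np.pi * 1 * t)    # sentence-rate response
          + np.random.default_rng(0).normal(scale=1.5, size=t.size))  # noise

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

for f in (1.0, 2.0, 4.0):  # peaks at all three rates = tracking of all three levels
    idx = int(np.argmin(np.abs(freqs - f)))
    print(f"{f:.0f} Hz peak amplitude: {spectrum[idx]:.1f}")
```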
“Because we went to great lengths to design experimental conditions that control for statistical or sound cue contributions to processing, our findings show that we must use the grammar in our head,” explains Poeppel. “Our brains lock onto every word before working to comprehend phrases and sentences. The dynamics reveal that we undergo a grammar-based construction in the processing of language.”
This is a controversial conclusion from the perspective of current research, the researchers note, because the notion of abstract, hierarchical, grammar-based structure building is rather unpopular.
23 Mad Scientists (That Actually Existed)
TODAY IN HISTORY: Vintage snapshots capture the fateful launch of the Space Shuttle Challenger on the day it was destroyed, 28 January 1986.
(MisterCommodore via io9)
Researchers have identified a metal that conducts electricity without conducting heat - an incredibly useful property that defies our current understanding of how conductors work.
The metal contradicts something called the Wiedemann-Franz Law, which basically states that good conductors of electricity will also be proportionally good conductors of heat, which is why things like motors and appliances get so hot when you use them regularly.
But a team in the US has shown that this isn’t the case for metallic vanadium dioxide (VO2) - a material that’s already well known for its strange ability to switch from a see-through insulator to a conductive metal at the temperature of 67 degrees Celsius (152 degrees Fahrenheit).
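For context, the Wiedemann-Franz law can be written as κₑ ≈ L₀σT, where L₀ ≈ 2.44 × 10⁻⁸ W·Ω·K⁻² is the Lorenz number. The short Python sketch below shows what the law would predict just above VO2’s transition temperature; the electrical conductivity used is an assumed order-of-magnitude placeholder, not a value from the paper.

```python
L0 = 2.44e-8  # Lorenz number, W * ohm / K^2
T = 340.0     # temperature in kelvin, just above the ~67 degC transition

sigma = 1e5   # assumed electrical conductivity of metallic VO2, S/m (illustrative only)

# Electronic thermal conductivity expected if the Wiedemann-Franz law held.
kappa_expected = L0 * sigma * T
print(f"Expected electronic thermal conductivity: {kappa_expected:.2f} W/(m*K)")

# The Berkeley team reports that electrons in VO2 carry far less heat than this
# expectation -- the "drastic breakdown" of the law described in the article.
```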
“This was a totally unexpected finding,” said lead researcher Junqiao Wu, from Berkeley Lab’s Materials Sciences Division.
“It shows a drastic breakdown of a textbook law that has been known to be robust for conventional conductors. This discovery is of fundamental importance for understanding the basic electronic behaviour of novel conductors.”
Mathematical Model Sheds Light on How the Brain Makes New Memories While Preserving the Old
Columbia scientists have developed a new mathematical model that helps to explain how the human brain’s biological complexity allows it to lay down new memories without wiping out old ones — illustrating how the brain maintains the fidelity of memories for years, decades or even a lifetime. This model could help neuroscientists design more targeted studies of memory, and also spur advances in neuromorphic hardware — powerful computing systems inspired by the human brain.
This work is published online today in Nature Neuroscience.
“The brain is continually receiving, organizing and storing memories. These processes, which have been studied in countless experiments, are so complex that scientists have been developing mathematical models in order to fully understand them,” said Stefano Fusi, PhD, a principal investigator at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute, associate professor of neuroscience at Columbia University Medical Center and the paper’s senior author. “The model that we have developed finally explains why the biology and chemistry underlying memory are so complex — and how this complexity drives the brain’s ability to remember.”
Memories are widely believed to be stored in synapses, tiny structures on the surface of neurons. These synapses act as conduits, transmitting the information housed inside electrical pulses that normally pass from neuron to neuron. In the earliest memory models, the strength of electrical signals that passed through synapses was compared to a volume knob on a stereo; it dialed up to boost (or down to lower) the connection strength between neurons. This allowed for the formation of memories.
These models worked extremely well, as they accounted for enormous memory capacity. But they also posed an intriguing dilemma.
“The problem with a simple, dial-like model of how synapses function was that it was assumed their strength could be dialed up or down indefinitely,” said Dr. Fusi, who is also a member of Columbia’s Center for Theoretical Neuroscience. “But in the real world this can’t happen. Whether it’s the volume knob on a stereo, or any biological system, there has to be a physical limit to how much it could turn.”
When these limits were imposed, the memory capacity of these models collapsed. So Dr. Fusi, in collaboration with fellow Zuckerman Institute investigator Larry Abbott, PhD, an expert in mathematical modeling of the brain, offered an alternative: each synapse is more complex than just one dial, and instead should be described as a system with multiple dials.
In 2005, Drs. Fusi and Abbott published research explaining this idea. They described how different dials (perhaps representing clusters of molecules) within a synapse could operate in tandem to form new memories while protecting old ones. But even that model, the authors later realized, fell short of what they believed the brain — particularly the human brain — could hold.
“We came to realize that the various synaptic components, or dials, not only functioned at different timescales, but were also likely communicating with each other,” said Marcus Benna, PhD, an associate research scientist at Columbia’s Center for Theoretical Neuroscience and the first author of the Nature Neuroscience paper. “Once we added the communication between components to our model, the storage capacity increased by an enormous factor, becoming far more representative of what is achieved inside the living brain.”
Dr. Benna likened the components of this new model to a system of beakers connected to each other through a series of tubes.
“In a set of interconnected beakers, each filled with different amounts of water, the liquid will tend to flow between them such that the water levels become equalized. In our model, the beakers represent the various components within a synapse,” explained Dr. Benna. “Adding liquid to one of the beakers — or removing some of it — represents the encoding of new memories. Over time, the resulting flow of liquid will diffuse across the other beakers, corresponding to the long-term storage of memories.”
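A toy simulation can make the beaker picture concrete: new memories perturb the first beaker, and the perturbation slowly spreads down a chain of progressively more sluggish beakers, so recent memories are strong but fragile while older ones are faint but protected. The Python sketch below only illustrates the analogy as described here, with invented coupling constants; it is not the published Benna-Fusi model.

```python
import numpy as np

n_beakers = 5
levels = np.zeros(n_beakers)                  # liquid levels = synaptic state variables
couplings = np.array([0.2, 0.1, 0.05, 0.02])  # assumed flow rates between neighbours

rng = np.random.default_rng(0)
for step in range(2000):
    if step % 50 == 0:                        # encode a new memory now and then:
        levels[0] += rng.choice([-1.0, 1.0])  # add or remove liquid from the first beaker
    flow = couplings * (levels[:-1] - levels[1:])  # liquid flows toward equal levels
    levels[:-1] -= flow
    levels[1:] += flow

# Later beakers change slowly, holding a smoothed trace of older memories.
print(np.round(levels, 3))
```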
Drs. Benna and Fusi are hopeful that this work can help neuroscientists in the lab, by acting as a theoretical framework to guide future experiments — ultimately leading to a more complete and more detailed characterization of the brain.
“While the synaptic basis of memory is well accepted, in no small part due to the work of Nobel laureate and Zuckerman Institute codirector Dr. Eric Kandel, clarifying how synapses support memories over many years without degradation has been extremely difficult,” said Dr. Abbott. “The work of Drs. Benna and Fusi should serve as a guide for researchers exploring the molecular complexity of the synapse.”
The technological implications of this model are also promising. Dr. Fusi has long been intrigued by neuromorphic hardware, computers that are designed to imitate a biological brain.
“Today, neuromorphic hardware is limited by memory capacity, which can be catastrophically low when these systems are designed to learn autonomously,” said Dr. Fusi. “Creating a better model of synaptic memory could help to solve this problem, speeding up the development of electronic devices that are both compact and energy efficient — and just as powerful as the human brain.”
The Strange Second Life of String Theory
String theory has so far failed to live up to its promise as a way to unite gravity and quantum mechanics. At the same time, it has blossomed into one of the most useful sets of tools in science.
String theory strutted onto the scene some 30 years ago as perfection itself, a promise of elegant simplicity that would solve knotty problems in fundamental physics — including the notoriously intractable mismatch between Einstein’s smoothly warped space-time and the inherently jittery, quantized bits of stuff that made up everything in it.
It seemed, to paraphrase Michael Faraday, much too wonderful not to be true: Simply replace infinitely small particles with tiny (but finite) vibrating loops of string. The vibrations would sing out quarks, electrons, gluons and photons, as well as their extended families, producing in harmony every ingredient needed to cook up the knowable world. Avoiding the infinitely small meant avoiding a variety of catastrophes. For one, quantum uncertainty couldn’t rip space-time to shreds. At last, it seemed, here was a workable theory of quantum gravity.
Even more beautiful than the story told in words was the elegance of the math behind it, which had the power to make some physicists ecstatic.
To be sure, the theory came with unsettling implications. The strings were too small to be probed by experiment and lived in as many as 11 dimensions of space. These dimensions were folded in on themselves — or “compactified” — into complex origami shapes. No one knew just how the dimensions were compactified — the possibilities for doing so appeared to be endless — but surely some configuration would turn out to be just what was needed to produce familiar forces and particles.
For a time, many physicists believed that string theory would yield a unique way to combine quantum mechanics and gravity. “There was a hope. A moment,” said David Gross, an original player in the so-called Princeton String Quartet, a Nobel Prize winner and permanent member of the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara. “We even thought for a while in the mid-’80s that it was a unique theory.”
[Image: David Gross, a Nobel Prize-winning physicist at the Kavli Institute for Theoretical Physics, has publicly argued that fundamental physics faces a crisis.]
And then physicists began to realize that the dream of one singular theory was an illusion. The complexities of string theory, all the possible permutations, refused to reduce to a single one that described our world. “After a certain point in the early ’90s, people gave up on trying to connect to the real world,” Gross said. “The last 20 years have really been a great extension of theoretical tools, but very little progress on understanding what’s actually out there.”
Many, in retrospect, realized they had raised the bar too high. Coming off the momentum of completing the solid and powerful “standard model” of particle physics in the 1970s, they hoped the story would repeat — only this time on a mammoth, all-embracing scale. “We’ve been trying to aim for the successes of the past where we had a very simple equation that captured everything,” said Robbert Dijkgraaf, the director of the Institute for Advanced Study in Princeton, New Jersey. “But now we have this big mess.”
Like many a maturing beauty, string theory has gotten rich in relationships, complicated, hard to handle and widely influential. Its tentacles have reached so deeply into so many areas in theoretical physics, it’s become almost unrecognizable, even to string theorists. “Things have gotten almost postmodern,” said Dijkgraaf, who is a painter as well as mathematical physicist.
The mathematics that have come out of string theory have been put to use in fields such as cosmology and condensed matter physics — the study of materials and their properties. It’s so ubiquitous that “even if you shut down all the string theory groups, people in condensed matter, people in cosmology, people in quantum gravity will do it,” Dijkgraaf said.
“It’s hard to say really where you should draw the boundary around and say: This is string theory; this is not string theory,” said Douglas Stanford, a physicist at the IAS. “Nobody knows whether to say they’re a string theorist anymore,” said Chris Beem, a mathematical physicist at the University of Oxford. “It’s become very confusing.”
String theory today looks almost fractal. The more closely people explore any one corner, the more structure they find. Some dig deep into particular crevices; others zoom out to try to make sense of grander patterns. The upshot is that string theory today includes much that no longer seems stringy. Those tiny loops of string whose harmonics were thought to breathe form into every particle and force known to nature (including elusive gravity) hardly even appear anymore on chalkboards at conferences. At last year’s big annual string theory meeting, the Stanford University string theorist Eva Silverstein was amused to find she was one of the few giving a talk “on string theory proper,” she said. A lot of the time she works on questions related to cosmology.
Even as string theory’s mathematical tools get adopted across the physical sciences, physicists have been struggling with how to deal with the central tension of string theory: Can it ever live up to its initial promise? Could it ever give researchers insight into how gravity and quantum mechanics might be reconciled — not in a toy universe, but in our own?
“The problem is that string theory exists in the landscape of theoretical physics,” said Juan Maldacena, a mathematical physicist at the IAS and perhaps the most prominent figure in the field today. “But we still don’t know yet how it connects to nature as a theory of gravity.” Maldacena now acknowledges the breadth of string theory, and its importance to many fields of physics — even those that don’t require “strings” to be the fundamental stuff of the universe — when he defines string theory as “Solid Theoretical Research in Natural Geometric Structures.”
[Image: Eva Silverstein, a professor of physics at Stanford University, applies string theory to problems in cosmology.]
An Explosion of Quantum Fields
One high point for string theory as a theory of everything came in the late 1990s, when Maldacena revealed that a string theory including gravity in five dimensions was equivalent to a quantum field theory in four dimensions. This “AdS/CFT” duality appeared to provide a map for getting a handle on gravity — the most intransigent piece of the puzzle — by relating it to good old well-understood quantum field theory.
This correspondence was never thought to be a perfect real-world model. The five-dimensional space in which it works has an “anti-de Sitter” geometry, a strange M.C. Escher-ish landscape that is not remotely like our universe.
But researchers were surprised when they dug deep into the other side of the duality. Most people took for granted that quantum field theories — “bread and butter physics,” Dijkgraaf calls them — were well understood and had been for half a century. As it turned out, Dijkgraaf said, “we only understand them in a very limited way.”
These quantum field theories were developed in the 1950s to unify special relativity and quantum mechanics. They worked well enough for long enough that it didn’t much matter that they broke down at very small scales and high energies. But today, when physicists revisit “the part you thought you understood 60 years ago,” said Nima Arkani-Hamed, a physicist at the IAS, you find “stunning structures” that came as a complete surprise. “Every aspect of the idea that we understood quantum field theory turns out to be wrong. It’s a vastly bigger beast.”
Researchers have developed a huge number of quantum field theories in the past decade or so, each used to study different physical systems. Beem suspects there are quantum field theories that can’t be described even in terms of quantum fields. “We have opinions that sound as crazy as that, in large part, because of string theory.”
This virtual explosion of new kinds of quantum field theories is eerily reminiscent of physics in the 1930s, when the unexpected appearance of a new kind of particle — the muon — led a frustrated I.I. Rabi to ask: “Who ordered that?” The flood of new particles was so overwhelming by the 1950s that it led Enrico Fermi to grumble: “If I could remember the names of all these particles, I would have been a botanist.”
Physicists began to see their way through the thicket of new particles only when they found the more fundamental building blocks making them up, like quarks and gluons. Now many physicists are attempting to do the same with quantum field theory. In their attempts to make sense of the zoo, many learn all they can about certain exotic species.
Conformal field theories (the right hand of AdS/CFT) are a starting point. You start with a simplified type of quantum field theory that behaves the same way at small and large distances, said David Simmons-Duffin, a physicist at the IAS. If these specific kinds of field theories could be understood perfectly, answers to deep questions might become clear. “The idea is that if you understand the elephant’s feet really, really well, you can interpolate in between and figure out what the whole thing looks like.”
[Image: Juan Maldacena, a physicist at the Institute for Advanced Study, developed what has become one of string theory’s greatest successes.]
Like many of his colleagues, Simmons-Duffin says he’s a string theorist mostly in the sense that it’s become an umbrella term for anyone doing fundamental physics in underdeveloped corners. He’s currently focusing on a physical system that’s described by a conformal field theory but has nothing to do with strings. In fact, the system is water at its “critical point,” where the distinction between gas and liquid disappears. It’s interesting because water’s behavior at the critical point is a complicated emergent system that arises from something simpler. As such, it could hint at dynamics behind the emergence of quantum field theories.
Beem focuses on supersymmetric field theories, another toy model, as physicists call these deliberate simplifications. “We’re putting in some unrealistic features to make them easier to handle,” he said. Specifically, they are amenable to tractable mathematics, which “makes it so a lot of things are calculable.”
Toy models are standard tools in most kinds of research. But there’s always the fear that what one learns from a simplified scenario does not apply to the real world. “It’s a bit of a deal with the devil,” Beem said. “String theory is a much less rigorously constructed set of ideas than quantum field theory, so you have to be willing to relax your standards a bit,” he said. “But you’re rewarded for that. It gives you a nice, bigger context in which to work.”
It’s the kind of work that makes people such as Sean Carroll, a theoretical physicist at the California Institute of Technology, wonder if the field has strayed too far from its early ambitions — to find, if not a “theory of everything,” at least a theory of quantum gravity. “Answering deep questions about quantum gravity has not really happened,” he said. “They have all these hammers and they go looking for nails.” That’s fine, he said, even acknowledging that generations might be needed to develop a new theory of quantum gravity. “But it isn’t fine if you forget that, ultimately, your goal is describing the real world.”
It’s a question he has asked his friends. Why are they investigating detailed quantum field theories? “What’s the aspiration?” he asks. Their answers are logical, he says, but steps removed from developing a true description of our universe.
Instead, he’s looking for a way to “find gravity inside quantum mechanics.” A paper he recently wrote with colleagues claims to take steps toward just that. It does not involve string theory.
The Broad Power of Strings
Perhaps the field that has gained the most from the flowering of string theory is mathematics itself. Sitting on a bench beside the IAS pond while watching a blue heron saunter in the reeds, Clay Córdova, a researcher there, explained how what seemed like intractable problems in mathematics were solved by imagining how the question might look to a string. For example, how many spheres could fit inside a Calabi-Yau manifold — the complex folded shape expected to describe how spacetime is compactified? Mathematicians had been stuck. But a two-dimensional string can wiggle around in such a complex space. As it wiggled, it could grasp new insights, like a mathematical multidimensional lasso. This was the kind of physical thinking Einstein was famous for: thought experiments about riding along with a light beam revealed E=mc2. Imagining falling off a building led to his biggest eureka moment of all: Gravity is not a force; it’s a property of space-time.
[Image: The amplituhedron is a multi-dimensional object that can be used to calculate particle interactions. Physicists such as Chris Beem are applying techniques from string theory in special geometries where “the amplituhedron is its best self,” he says.]
Using the physical intuition offered by strings, physicists produced a powerful formula for getting the answer to the embedded sphere question, and much more. “They got at these formulas using tools that mathematicians don’t allow,” Córdova said. Then, after string theorists found an answer, the mathematicians proved it on their own terms. “This is a kind of experiment,” he explained. “It’s an internal mathematical experiment.” Not only was the stringy solution not wrong, it led to Fields Medal-winning mathematics. “This keeps happening,” he said.
String theory has also made essential contributions to cosmology. The role that string theory has played in thinking about mechanisms behind the inflationary expansion of the universe — the moments immediately after the Big Bang, where quantum effects met gravity head on — is “surprisingly strong,” said Silverstein, even though no strings are attached.
Still, Silverstein and colleagues have used string theory to discover, among other things, ways to see potentially observable signatures of various inflationary ideas. The same insights could have been found using quantum field theory, she said, but they weren’t. “It’s much more natural in string theory, with its extra structure.”
Inflationary models get tangled in string theory in multiple ways, not least of which is the multiverse — the idea that ours is one of a perhaps infinite number of universes, each created by the same mechanism that begat our own. Between string theory and cosmology, the idea of an infinite landscape of possible universes became not just acceptable, but even taken for granted by a large number of physicists. The selection effect, Silverstein said, would be one quite natural explanation for why our world is the way it is: In a very different universe, we wouldn’t be here to tell the story.
This effect could be one answer to a big problem string theory was supposed to solve. As Gross put it: “What picks out this particular theory” — the Standard Model — from the “plethora of infinite possibilities?”
Silverstein thinks the selection effect is actually a good argument for string theory. The infinite landscape of possible universes can be directly linked to “the rich structure that we find in string theory,” she said — the innumerable ways that string theory’s multidimensional space-time can be folded in upon itself.
Building the New Atlas
At the very least, the mature version of string theory — with its mathematical tools that let researchers view problems in new ways — has provided powerful new methods for seeing how seemingly incompatible descriptions of nature can both be true. The discovery of dual descriptions of the same phenomenon pretty much sums up the history of physics. A century and a half ago, James Clerk Maxwell saw that electricity and magnetism were two sides of a coin. Quantum theory revealed the connection between particles and waves. Now physicists have strings.
[Image: Nima Arkani-Hamed, a physicist at the IAS, argues that this is the most exciting time for theoretical physics since the development of quantum mechanics in the 1920s.]
“Once the elementary things we’re probing spaces with are strings instead of particles,” said Beem, the strings “see things differently.” If it’s too hard to get from A to B using quantum field theory, reimagine the problem in string theory, and “there’s a path,” Beem said.
In cosmology, string theory “packages physical models in a way that’s easier to think about,” Silverstein said. It may take centuries to tie together all these loose strings to weave a coherent picture, but young researchers like Beem aren’t bothered a bit. His generation never thought string theory was going to solve everything. “We’re not stuck,” he said. “It doesn’t feel like we’re on the verge of getting it all sorted, but I know more each day than I did the day before – and so presumably we’re getting somewhere.”
Stanford thinks of it as a big crossword puzzle. “It’s not finished, but as you start solving, you can tell that it’s a valid puzzle,” he said. “It’s passing consistency checks all the time.”
“Maybe it’s not even possible to capture the universe in one easily defined, self-contained form, like a globe,” Dijkgraaf said, sitting in Robert Oppenheimer’s many-windowed office from when he was Einstein’s boss, looking over the vast lawn at the IAS, the pond and the woods in the distance. Einstein, too, tried and failed to find a theory of everything, and it takes nothing away from his genius.
“Perhaps the true picture is more like the maps in an atlas, each offering very different kinds of information, each spotty,” Dijkgraaf said. “Using the atlas will require that physics be fluent in many languages, many approaches, all at the same time. Their work will come from many different directions, perhaps far-flung.”
He finds it “totally disorienting” and also “fantastic.”
Arkani-Hamed believes we are in the most exciting epoch of physics since quantum mechanics appeared in the 1920s. But nothing will happen quickly. “If you’re excited about responsibly attacking the very biggest existential physics questions ever, then you should be excited,” he said. “But if you want a ticket to Stockholm for sure in the next 15 years, then probably not.”