#whengravityfails
Link
The ability to send thoughts directly to another person’s brain is the stuff of science fiction. At least, it used to be.
In recent years, physicists and neuroscientists have developed an armory of tools that can sense certain kinds of thoughts and transmit information about them into other brains. That has made brain-to-brain communication a reality.
These tools include electroencephalograms (EEGs) that record electrical activity in the brain and transcranial magnetic stimulation (TMS), which can transmit information into the brain.
In 2015, Andrea Stocco and his colleagues at the University of Washington in Seattle used this gear to connect two people via a brain-to-brain interface. The people then played a 20 questions–type game.
An obvious next step is to allow several people to join such a conversation, and today Stocco and his colleagues announced they have achieved this using a world-first brain-to-brain network. The network, which they call BrainNet, allows a small group to play a collaborative Tetris-like game. “Our results raise the possibility of future brain-to-brain interfaces that enable cooperative problem-solving by humans using a ‘social network’ of connected brains,” they say.
The technology behind the network is relatively straightforward. An EEG consists of a number of electrodes placed on the scalp that pick up the brain's electrical activity.
A key idea is that people can deliberately change the signals their brains produce. For example, brain activity readily becomes entrained to external stimuli: watching a light flashing at 15 hertz causes the brain to emit a strong electrical signal at the same frequency, and switching attention to a light flashing at 17 Hz shifts the brain signal to that frequency in a way an EEG can easily detect.
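To make the entrainment idea concrete, here is a minimal sketch of how a 15 Hz versus 17 Hz flicker response could be read out of a short EEG window with a Fourier transform. The sampling rate, window length, and synthetic signal are illustrative assumptions, not details of the BrainNet pipeline.

```python
import numpy as np

def dominant_flicker(eeg, fs=250.0, candidates=(15.0, 17.0)):
    """Return whichever candidate frequency carries more spectral power."""
    windowed = eeg * np.hanning(len(eeg))          # reduce spectral leakage
    power = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    # Look up the power at the FFT bin closest to each candidate frequency.
    scores = {f: power[np.argmin(np.abs(freqs - f))] for f in candidates}
    return max(scores, key=scores.get)

# Example: a noisy 2-second window while a sender stares at the 15 Hz LED.
fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
fake_eeg = np.sin(2 * np.pi * 15.0 * t) + 0.5 * np.random.randn(len(t))
print(dominant_flicker(fake_eeg, fs))              # -> 15.0, i.e. "rotate"
```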
TMS manipulates brain activity by inducing electrical activity in specific brain areas. For example, a magnetic pulse focused onto the occipital cortex triggers the sensation of seeing a flash of light, known as a phosphene.
Together, these devices make it possible to send and receive signals directly to and from the brain. But nobody had created a network that allows group communication. Until now.
Stocco and his colleagues have created a network that allows three individuals to send and receive information directly to and from their brains. They say the network is easily scalable and limited only by the availability of EEG and TMS devices.
The proof-of-principle network connects three people: two senders and one person able to receive and transmit, all in separate rooms and unable to communicate conventionally. The group together has to solve a Tetris-like game in which a falling block has to be rotated so that it fits into a space at the bottom of the screen.
The two senders, wearing EEGs, can both see the full screen. The game is designed so the shape of the descending block fits in the bottom row either if it is rotated by 180 degrees or if it is not rotated. The senders have to decide which and broadcast the information to the third member of the group.
To do this, the senders control the signals their brains produce by staring at one of two LEDs flanking the screen, one flashing at 15 Hz and the other at 17 Hz. If the EEG picks up a 15 Hz signal, it moves a cursor toward the right-hand side of the screen; when the cursor reaches that side, the device sends the receiver the instruction to rotate the block.
The receiver, attached to both an EEG and a TMS device, has a different task. Able to see only the top half of the Tetris screen, the receiver can see the block but not how it should be rotated, and instead gets a signal via TMS from each sender saying either "rotate" or "do not rotate."
The signals consist of a single phosphene to indicate the block must be rotated or no flash of light to indicate that it should not be rotated. So the data rate is low—just one bit per interaction.
Having received data from both senders, the receiver performs the action. But crucially, the game allows for another round of interaction.
The senders can see the block falling and so can determine whether the receiver has made the right call and transmit the next course of action—either rotate or not—in another round of communication.
This allows the researchers to have some fun. In some of the trials, they deliberately corrupt the information from one sender to see whether the receiver can work out that it should be ignored. That introduces an element of unreliability of the kind found in real social situations.
But the question the researchers investigate is whether humans can work out what to do when the data rate is this low. It turns out they can: receivers, being social animals, learned to distinguish the correct information from the false using the brain-to-brain protocol alone.
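One way to picture the receiver's strategy is as simple reliability weighting: each sender contributes one bit per round, and senders whose past advice matched the eventual outcome are trusted more. This is only a software caricature of what the human receiver does implicitly; the weighting scheme and learning rate below are assumptions for illustration.

```python
def decide(bits, reliability):
    """Combine one bit per sender, weighted by each sender's track record."""
    score = sum(r if b else -r for b, r in zip(bits, reliability))
    return score > 0                      # True means "rotate the block"

def update_reliability(bits, correct, reliability, lr=0.2):
    """Boost senders whose bit matched the correct move; penalize the rest."""
    return [r + lr * (1.0 if b == correct else -1.0)
            for b, r in zip(bits, reliability)]

reliability = [1.0, 1.0]                  # start by trusting both senders equally
for _ in range(3):                        # sender 2 keeps sending bad advice
    bits, correct = [True, False], True
    print(decide(bits, reliability))
    reliability = update_reliability(bits, correct, reliability)
# Prints False on the first conflicted round, then True once the unreliable
# sender's track record degrades.
```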
That’s interesting work that paves the way for more complex networks. The team says the information travels across a bespoke network set up between three rooms in their labs. However, there is no reason why the network cannot be extended to the Internet, allowing participants around the world to collaborate.
“A cloud-based brain-to-brain interface server could direct information transmission between any set of devices on the brain-to-brain interface network and make it globally operable through the Internet, thereby allowing cloud-based interactions between brains on a global scale,” Stocco and his colleagues say. “The pursuit of such brain-to-brain interfaces has the potential to not only open new frontiers in human communication and collaboration but also provide us with a deeper understanding of the human brain.”
#cyberpunk#ghostintheshell#ghost in the shell#deusex#deus ex#neuromancer#williamgibson#william gibson#alteredcarbon#altered carbon#whengravityfails#when gravity fails#akira#futurology#neuroscience#philosophy
36 notes
Quote
When you're lost in the rain in Juarez
And it's Eastertime too
And your gravity fails
And negativity don't pull you through
Don't put on any airs
Bob Dylan - Just Like Tom Thumb's Blues
0 notes
Photo

talexanderr
#cyberpunk#bladerunner#blade runner#bladerunner2049#blade runner 2049#ghostintheshell#ghost in the shell#neuromancer#deusex#deus ex#williamgibson#william gibson#whengravityfails#when gravity fails#alteredcarbon#altered carbon#architecture
828 notes
Photo

Soldier with drone jammer
#cyberpunk#ghostintheshell#ghost in the shell#deusex#deus ex#neuromancer#williamgibson#william gibson#alteredcarbon#altered carbon#bladerunner#blade runner#bladerunner2049#blade runner 2049#technology#drones#soldier#turkey#whengravityfails#when gravity fails#war
1K notes
Photo

alloy.mantis
#cyberpunk#ghostintheshell#ghost in the shell#bladerunner#blade runner#bladerunner2049#blade runner 2049#neuromancer#williamgibson#william gibson#alteredcarbon#altered carbon#whengravityfails#when gravity fails#deusex#deus ex#acronym#acrnm#techwear#arcteryx#veilance#goretex#nikeacg#acg#allconditionsgear#all conditions gear#fashion#streetwear
1K notes
Photo

5.12
#cyberpunk#ghostintheshell#ghost in the shell#bladerunner#blade runner#bladerunner2049#blade runner 2049#deusex#deus ex#neuromancer#williamgibson#william gibson#alteredcarbon#altered carbon#whengravityfails#when gravity fails#dredd#acronym#acrnm#bagjack#stoneisland#shadow project#stone island#shadowproject#stoneislandshadowproject#stone island shadow project#errolsonhugh#errolson hugh
725 notes
Photo

Cody Ellingham
#cyberpunk#ghostintheshell#ghost in the shell#bladerunner#blade runner#bladerunner2049#blade runner 2049#neuromancer#williamgibson#william gibson#alteredcarbon#altered carbon#whengravityfails#when gravity fails#deusex#deus ex#acronym#acrnm#tokyo#japan#murakamiharuki#murakami haruki#surreal#photography
605 notes
Link
Scientists have developed a brain implant that noticeably boosted memory in its first serious test run, perhaps offering a promising new strategy to treat dementia, traumatic brain injuries and other conditions that damage memory.
The device works like a pacemaker, sending electrical pulses to aid the brain when it is struggling to store new information, but remaining quiet when it senses that the brain is functioning well.
In the test, reported Tuesday in the journal Nature Communications, the device improved word recall by 15 percent — roughly the amount that Alzheimer’s disease steals over two and a half years.
The implant is still experimental; the researchers are currently in discussions to commercialize the technology. And its broad applicability is unknown, having been tested so far only in people with epilepsy.
Experts cautioned that the potential for misuse of any “memory booster” is enormous — A.D.H.D. drugs are widely used as study aids. They also said that a 15 percent improvement is fairly modest.
Still, the research marks the arrival of a new kind of device: an autonomous aid that enhances normal, but less than optimal, cognitive function.
Doctors have used similar implants for years to block abnormal bursts of activity in the brain, most commonly in people with Parkinson’s disease and epilepsy.
“The exciting thing about this is that, if it can be replicated and extended, then we can use the same method to figure out what features of brain activity predict good performance,” said Bradley Voytek, an assistant professor of cognitive and data science at the University of California, San Diego.
The implant is based on years of work decoding brain signals, supported recently by more than $70 million from the Department of Defense to develop treatments for traumatic brain injury, the signature wound of the Iraq and Afghanistan wars.
The research team, led by scientists at the University of Pennsylvania and Thomas Jefferson University, last year reported that timed electrical pulses from implanted electrodes could reliably aid recall.
“It’s one thing to go back through your data, and find that the stimulation works. It’s another to have the program run on its own and watch it work in real time,” said Michael Kahana, a professor of psychology at the University of Pennsylvania and the senior author of the new study.
“Now that the technology is out of the box, all sorts of neuro-modulation algorithms could be used in this way,” he added.
Dr. Edward Chang, a professor of neurosurgery at the University of California, San Francisco, said, “Very similar approaches might be relevant for other applications, such as treating symptoms of depression or anxiety,” though the targets in the brain would be different.
The research team tested the memory aid in 25 people with epilepsy who were being evaluated for an operation.
The evaluation is a kind of fishing expedition, in which doctors thread an array of electrodes into the brain and wait for seizures to occur to see whether surgery might prevent them. Many of the electrodes are placed in the brain’s memory areas, and the wait can take weeks in the hospital.
Cognitive scientists use this period, with patients’ consent, to give memory tests and take recordings.
In the study, the research team determined the precise patterns for each person’s high-functioning state, when memory storage worked well in the brain, and low-functioning mode, when it did not.
The scientists then asked the patients to memorize lists of words and later, after a distraction, to recall as many as they could.
Each participant carried out a variety of tests repeatedly, recalling different words during each test. Some lists were memorized with the brain stimulation system turned on; others were done with it turned off, for comparison.
On average, people did about 15 percent better when the implant was switched on.
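The closed-loop principle described above, stimulating only when the decoded brain state looks poor, can be sketched in a few lines. The spectral features, the logistic-regression decoder, and the threshold are illustrative assumptions rather than the study's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: spectral features recorded while words were
# being memorized, labelled 1 if the word was later recalled, else 0.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(300, 8))       # e.g. band power on 8 electrodes
y_train = (X_train[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)
decoder = LogisticRegression().fit(X_train, y_train)

def closed_loop_step(features, stimulate, threshold=0.4):
    """Stimulate only when predicted encoding quality drops below threshold."""
    p_good = decoder.predict_proba(features.reshape(1, -1))[0, 1]
    if p_good < threshold:
        stimulate()                       # deliver a timed pulse; otherwise stay quiet
    return p_good

closed_loop_step(rng.normal(size=8), stimulate=lambda: print("pulse"))
```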
“I remember doing the tests, and enjoying it,” said David Mabrey, 47, a study participant who owns an insurance agency outside of Philadelphia. “It gave me something to do while lying there.”
“But I could not honestly tell how the stimulation was affecting my memory. You don’t feel anything; you don’t know whether it’s on or off.”
The new technology presents both risks and opportunities. Dr. Kahana said the implants could potentially sharpen memory more dramatically if the approach were refined to support retrieval — digging out the memory — rather than only storage.
Still, as currently devised, the implant requires that multiple electrodes be placed in the brain to determine its high- or low-functioning state (though stimulation is sent to just one location).
This makes it an extremely delicate operation that would likely be reserved only for severe cases of impairment — and certainly not for students cramming for tests, Dr. Voytek said.
“Ideally we can find other, less invasive ways to switch the brain from these lower to higher functioning states,” he said. “I don’t know what those would be, but eventually we’re going to have to work out the ethical and public policy questions raised by this technology.”
#cyberpunk#ghostintheshell#ghost in the shell#deusex#deus ex#bladerunner#blade runner#bladerunner2049#alteredcarbon#altered carbon#whengravityfails#when gravity fails#augmentation#human augmentation#technology#biology
118 notes
Photo

Special Air Service
#cyberpunk#ghostintheshell#Ghost In The Shell#bladerunner#blade runner 2049#neuromancer#williamgibson#william gibson#alteredcarbon#altered carbon#whengravityfails#when gravity fails#deusex#deus ex#acronym#acrnm#techwear
602 notes
Video
youtube
Fetching Omnicopter
#cyberpunk#bladerunner#blade runner#bladerunner2049#blade runner 2049#ghostintheshell#ghost in the shell#deusex#deus ex#drones#technology#neuromancer#artificialintelligence#artificial intelligence#alteredcarbon#altered carbon#whengravityfails#when gravity fails#three body problem#the dark forest#williamgibson#william gibson
274 notes
Photo

via 301_2015
#cyberpunk#ghostintheshell#ghost in the shell#bladerunner#blade runner#bladerunner2049#blade runner 2049#neuromancer#williamgibson#william gibson#alteredcarbon#altered carbon#whengravityfails#when gravity fails#deusex#deus ex#acronym#acrnm#tokyo#neon#japan
321 notes
Photo

Marilyn Mugot
#cyberpunk#ghostintheshell#ghost in the shell#bladerunner#blade runner#bladerunner2049#blade runner 2049#neuromancer#williamgibson#william gibson#altered carbon#alteredcarbon#whengravityfails#when gravity fails#deusex#deus ex#china
16 notes
Link
There’s a revolution afoot, and you will know it by the stripes.
Earlier this year, a group of Berkeley researchers released a pair of videos. In one, a horse trots behind a chain link fence. In the second video, the horse is suddenly sporting a zebra’s black-and-white pattern. The execution isn’t flawless, but the stripes fit the horse so neatly that it throws the equine family tree into chaos.
Turning a horse into a zebra is a nice stunt, but that’s not all it is. It is also a sign of the growing power of machine learning algorithms to rewrite reality. Other tinkerers, for example, have used the zebrafication tool to turn shots of black bears into believable photos of pandas, apples into oranges, and cats into dogs. A Redditor used a different machine learning algorithm to edit porn videos to feature the faces of celebrities. At a new startup called Lyrebird, machine learning experts are synthesizing convincing audio from one-minute samples of a person’s voice. And the engineers developing Adobe’s artificial intelligence platform, called Sensei, are infusing machine learning into a variety of groundbreaking video, photo, and audio editing tools. These projects are wildly different in origin and intent, yet they have one thing in common: They are producing artificial scenes and sounds that look stunningly close to actual footage of the physical world. Unlike earlier experiments with AI-generated media, these look and sound real.
The technologies underlying this shift will soon push us into new creative realms, amplifying the capabilities of today’s artists and elevating amateurs to the level of seasoned pros. We will search for new definitions of creativity that extend the umbrella to the output of machines. But this boom will have a dark side, too. Some AI-generated content will be used to deceive, kicking off fears of an avalanche of algorithmic fake news. Old debates about whether an image was doctored will give way to new ones about the pedigree of all kinds of content, including text. You’ll find yourself wondering, if you haven’t yet: What role did humans play, if any, in the creation of that album/TV series/clickbait article?
A world awash in AI-generated content is a classic case of a utopia that is also a dystopia. It’s messy, it’s beautiful, and it’s already here.
Currently there are two ways to produce audio or video that resembles the real world. The first is to use cameras and microphones to record a moment in time, such as the original Moon landing. The second is to leverage human talent, often at great expense, to commission a facsimile. So if the Moon descent had been a hoax, a skilled film team would have had to carefully stage Neil Armstrong’s lunar gambol. Machine learning algorithms now offer a third option, by letting anyone with a modicum of technical knowledge algorithmically remix existing content to generate new material.
At first, deep-learning-generated content wasn’t geared toward photorealism. Google’s DeepDream, released in 2015, was an early example of using deep learning to crank out psychedelic landscapes and many-eyed grotesques. In 2016, a popular photo editing app called Prisma used deep learning to power artistic photo filters, for example turning snapshots into an homage to Mondrian or Munch. The technique underlying Prisma is known as style transfer: take the style of one image (such as The Scream) and apply it to a second shot.
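For readers who want to see what style transfer amounts to in code, here is a minimal sketch of the classic optimization-based approach, assuming PyTorch and a pretrained VGG-19. The layer choices and weights are illustrative, and systems like Prisma or the horse-to-zebra CycleGAN work are considerably more sophisticated.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

vgg = vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram_matrix(feat):
    # Style is captured by correlations between feature channels.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def features(x, layers=(1, 6, 11, 20)):
    out = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            out.append(x)
    return out

def style_transfer(content, style, steps=200, style_weight=1e5):
    """content, style: (1, 3, H, W) tensors already preprocessed for VGG."""
    target = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)
    c_feats = features(content)
    s_grams = [gram_matrix(f) for f in features(style)]
    for _ in range(steps):
        t_feats = features(target)
        content_loss = F.mse_loss(t_feats[-1], c_feats[-1])
        style_loss = sum(F.mse_loss(gram_matrix(t), g)
                         for t, g in zip(t_feats, s_grams))
        loss = content_loss + style_weight * style_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return target.detach()
```

The optimized image keeps the content photo's layout while adopting the style image's texture and color statistics.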
Now the algorithms powering style transfer are gaining precision, signalling the end of the Uncanny Valley—the sense of unease that realistic computer-generated humans typically elicit. In contrast to the previous somewhat crude effects, tricks like zebrafication are starting to fill in the Valley’s lower basin. Consider the work from Kavita Bala’s lab at Cornell, where deep learning can infuse one photo’s style, such as a twinkly nighttime ambience, into a snapshot of a drab metropolis—and fool human reviewers into thinking the composite place is real. Inspired by the potential of artificial intelligence to discern aesthetic qualities, Bala cofounded a company called Grokstyle around this idea. Say you admired the throw pillows on a friend’s couch or a magazine spread caught your eye. Feed Grokstyle’s algorithm an image, and it will surface similar objects with that look.
“What I like about these technologies is they are democratizing design and style,” Bala says. “I’m a technologist—I appreciate beauty and style but can’t produce it worth a damn. So this work makes it available to me. And there’s a joy in making it available to others, so people can play with beauty. Just because we are not gifted on this certain axis doesn’t mean we have to live in a dreary land.”
At Adobe, machine learning has been a part of the company’s creative products for well over a decade, but only recently has AI become transformative. In October engineers working on Sensei, the company’s set of AI technologies, showed off a prospective video editing tool called Adobe Cloak, which allows its user to seamlessly remove, say, a lamppost from a video clip—a task that would ordinarily be excruciating for an experienced human editor. Another experiment, called Project Puppetron, applies an artistic style to a video in real time. For example, it can take a live feed of a person and render him as a chatty bronze statue or a hand-drawn cartoon. “People can basically do a performance in front of a web cam or any camera and turn that into animation, in real time,” says Jon Brandt, senior principal scientist and director of Adobe Research. (Sensei’s experiments don’t always turn into commercial products.)
Machine learning makes these projects possible because it can understand the parts of a face or the difference between foreground and background better than previous approaches in computer vision. Sensei’s tools let artists work with concepts, rather than the raw material. “Photoshop is great at manipulating pixels, but what people are trying to do is manipulate the content that is represented by the pixels,” Brandt explains.
That’s a good thing. When artists no longer waste their time wrangling individual dots on a screen, their productivity increases, and perhaps also their ingenuity, says Brandt. “I am excited about the possibility of new art forms emerging, which I expect will be coming.”
But it’s not hard to see how this creative explosion could all go very wrong. For Yuanshun Yao, a University of Chicago graduate student, it was a fake video that set him on his recent project probing some of the dangers of machine learning. He had hit play on a recent clip of an AI-generated, very real-looking Barack Obama giving a speech, and got to thinking: Could he do a similar thing with text?
A text composition needs to be nearly perfect to deceive most readers, so he started with a forgiving target, fake online reviews for platforms like Yelp or Amazon. A review can be just a few sentences long, and readers don’t expect high-quality writing. So he and his colleagues designed a neural network that spat out Yelp-style blurbs of about five sentences each. Out came a bank of reviews that declared such things as, “Our favorite spot for sure!” and “I went with my brother and we had the vegetarian pasta and it was delicious.” He asked humans to then guess whether they were real or fake, and sure enough, the humans were often fooled.
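Yao's group trained its own review-specific network, but the general point, that short plausible text now takes almost no code to generate, can be illustrated with a small public language model via the Hugging Face transformers library. GPT-2 here is a stand-in for the study's purpose-built generator, not a description of it.

```python
from transformers import pipeline

# Sample a few short continuations from a review-like prompt.
generator = pipeline("text-generation", model="gpt2")
samples = generator(
    "We came here for dinner and",
    max_length=40,
    num_return_sequences=3,
    do_sample=True,
)
for s in samples:
    print(s["generated_text"])
```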
With fake reviews costing around $10 to $50 each from micro-task marketplaces, Yao figured it was just a matter of time before a motivated engineer tried to automate the process, driving down the price and kicking off a plague of false reviews. (He also explored using neural nets to defend a platform against fake content, with some success.) “As far as we know there are not any such systems, yet,” Yao says. “But maybe in five or ten years, we will be surrounded by AI-generated stuff.” His next target? Generating convincing news articles.
Progress on videos may move faster. Hany Farid, an expert at detecting fake photos and videos and a professor at Dartmouth, worries about how fast viral content spreads, and how slow the verification process is. Farid imagines a near future in which a convincing fake video of President Trump ordering the total nuclear annihilation of North Korea goes viral and incites panic, like a recast War of the Worlds for the AI era. “I try not to make hysterical predictions, but I don’t think this is far-fetched,” he says. “This is in the realm of what’s possible today.”
Fake Trump speeches are already circulating on the internet, a product of Lyrebird, the voice synthesis startup—though in the audio clips the company has shared with the public, Trump keeps his finger off the button, limiting himself to praising Lyrebird. Jose Sotelo, the company’s cofounder and CEO, argues that the technology is inevitable, so he and his colleagues might as well be the ones to do it, with ethical guidelines in place. He believes that the best defense, for now, is raising awareness of what machine learning is capable of. “If you were to see a picture of me on the moon, you would think it’s probably some image editing software,” Sotelo says. “But if you hear convincing audio of your best friend saying bad things about you, you might get worried. It’s a really new technology and a really challenging problem.”
Likely nothing can stop the coming wave of AI-generated content—if we even wanted to. At its worst, scammers and political operatives will deploy machine learning algorithms to generate untold volumes of misinformation. Because social networks selectively transmit the most attention-grabbing content, these systems’ output will evolve to be maximally likeable, clickable, and shareable.
But at its best, AI-generated content is likely to heal our social fabric in as many ways as it may rend it. Sotelo of Lyrebird dreams of how his company’s technology could restore speech to people who have lost their voice to diseases such as ALS or cancer. That horse-to-zebra video out of Berkeley? It was a side effect of work to improve how we train self-driving cars. Often, driving software is trained in virtual environments first, but a world like Grand Theft Auto only roughly resembles reality. The zebrafication algorithm was designed to shrink the distance between the virtual environment and the real world, ultimately making self-driving cars safer.
These are the two edges of the AI sword. As it improves, it mimics human actions more and more closely. Eventually, it has no choice but to become all too human: capable of good and evil in equal measure.
#cyberpunk#ghostintheshell#ghost in the shell#blade runner#bladerunner2049#blade runner 2049#neuromancer#williamgibson#william gibson#alteredcarbon#altered carbon#whengravityfails#when gravity fails#deusex#deus ex#acronym#acrnm#artificialintelligence#artificial intelligence#ai#technology#alphazero#alphago
42 notes
Link
During an October 2015 press conference announcing the autopilot feature of the Tesla Model S, which allowed the car to drive semi-autonomously, Tesla CEO Elon Musk said each driver would become an “expert trainer” for every Model S. Each car could improve its own autonomous features by learning from its driver, but more significantly, when one Tesla learned from its own driver—that knowledge could then be shared with every other Tesla vehicle.
As Fred Lambert of Electrek reported shortly after, Model S owners noticed how quickly the car’s driverless features were improving. In one example, Teslas were taking incorrect early exits along highways, forcing their owners to manually steer the car along the correct route. After just a few weeks, owners noted the cars were no longer taking premature exits.
“I find it remarkable that it is improving this rapidly,” said one Tesla owner.
Intelligent systems, like those powered by the latest round of machine learning software, aren’t just getting smarter: they’re getting smarter faster. Understanding the rate at which these systems develop can be a particularly challenging part of navigating technological change.
Ray Kurzweil has written extensively on the gaps in human understanding between what he calls the “intuitive linear” view of technological change and the “exponential” rate of change now taking place. Almost two decades after writing the influential essay on what he calls “The Law of Accelerating Returns”—a theory of evolutionary change concerned with the speed at which systems improve over time—connected devices are now sharing knowledge between themselves, escalating the speed at which they improve.
“I think that this is perhaps the biggest exponential trend in AI,” said Hod Lipson, professor of mechanical engineering and data science at Columbia University, in a recent interview.
“All of the exponential technology trends have different ‘exponents,’” Lipson added. “But this one is potentially the biggest.”
According to Lipson, what we might call “machine teaching”—when devices communicate gained knowledge to one another—is a radical step up in the speed at which these systems improve.
“Sometimes it is cooperative, for example when one machine learns from another like a hive mind. But sometimes it is adversarial, like in an arms race between two systems playing chess against each other,” he said.
Lipson believes this way of developing AI is a big deal, in part, because it can bypass the need for training data.
“Data is the fuel of machine learning, but even for machines, some data is hard to get—it may be risky, slow, rare, or expensive. In those cases, machines can share experiences or create synthetic experiences for each other to augment or replace data. It turns out that this is not a minor effect, it actually is self-amplifying, and therefore exponential.”
Lipson sees the recent breakthrough from Google’s DeepMind, a project called AlphaGo Zero, as a stunning example of an AI learning without training data. Many are familiar with AlphaGo, the machine learning AI that became the world’s best Go player after studying a massive training data set of millions of human Go moves. AlphaGo Zero, however, was able to beat even that Go-playing AI simply by learning the rules of the game and playing by itself—no training data necessary. Then, just to show off, its generalized successor, AlphaZero, beat the world’s best chess-playing software after starting from scratch and training for only a few hours.
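The self-play principle is easier to see at toy scale. The sketch below learns tic-tac-toe values purely from games it plays against itself, with no external data; the tabular values and greedy move choice stand in for AlphaGo Zero's deep network and tree search.

```python
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

# Board position (after a move) -> estimated value for the player who just moved.
values = {}

def choose(board, player, eps=0.2):
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < eps:                 # keep exploring new lines of play
        return random.choice(moves)
    def value_after(m):
        nxt = board[:]
        nxt[m] = player
        return values.get(tuple(nxt), 0.0)
    return max(moves, key=value_after)

def self_play_game(alpha=0.1):
    board, player, history = [" "] * 9, "X", []
    while " " in board and winner(board) is None:
        move = choose(board, player)
        board[move] = player
        history.append((tuple(board), player))
        player = "O" if player == "X" else "X"
    result = winner(board)
    for state, mover in history:              # learn from the final outcome
        target = 0.0 if result is None else (1.0 if mover == result else -1.0)
        old = values.get(state, 0.0)
        values[state] = old + alpha * (target - old)

for _ in range(20000):                        # self-play is the only "data"
    self_play_game()
print(len(values), "positions evaluated purely from self-play")
```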
Now imagine thousands or more AlphaGo Zeroes instantaneously sharing their gained knowledge.
This isn’t just games though. Already, we’re seeing how it will have a major impact on the speed at which businesses can improve the performance of their devices.
One example is GE’s new industrial digital twin technology—a software simulation of a machine that models what is happening with the equipment. Think of it as a machine with its own self-image—which it can also share with technicians.
A steam turbine with a digital twin, for instance, can measure steam temperatures, rotor speeds, cold starts, and other data to predict breakdowns and warn technicians to prevent expensive repairs. The digital twins make these predictions by studying their own performance, but they also rely on models every other steam turbine has developed.
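A digital twin's predict-and-warn loop can be sketched as a running statistical model of one machine's sensors, seeded from fleet-wide statistics. The sensor names, thresholds, and update rule below are assumptions for illustration, not GE's implementation.

```python
import numpy as np

class TurbineTwin:
    def __init__(self, fleet_mean, fleet_std):
        # Seed the twin with fleet-wide statistics, then adapt to this machine.
        self.mean = np.array(fleet_mean, dtype=float)
        self.std = np.array(fleet_std, dtype=float)

    def update(self, reading, rate=0.01):
        # Track what "normal" looks like for this particular turbine.
        self.mean = (1 - rate) * self.mean + rate * reading
        self.std = (1 - rate) * self.std + rate * np.abs(reading - self.mean)

    def warn(self, reading, threshold=4.0):
        # Flag readings far outside the learned normal band.
        z = np.abs(reading - self.mean) / np.maximum(self.std, 1e-6)
        return bool(np.any(z > threshold))

# Example readings: [steam temperature in C, rotor speed in rpm].
twin = TurbineTwin(fleet_mean=[540.0, 3000.0], fleet_std=[5.0, 20.0])
print(twin.warn(np.array([543.0, 3005.0])))   # within normal band -> False
print(twin.warn(np.array([590.0, 3005.0])))   # overheating -> True
```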
As machines begin to learn from their environments in new and powerful ways, their development is accelerated by sharing what they learn with each other. The collective intelligence of every GE turbine, spread across the planet, can accelerate each individual machine’s predictive ability. Where it may take one driverless car significant time to learn to navigate a particular city, one hundred driverless cars navigating that same city together, all sharing what they learn, can improve their algorithms in far less time.
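One concrete way a fleet can share what it learns is federated averaging: each device trains on its own data, and only the averaged model parameters circulate. The linear model and toy data below are a generic illustration, not Tesla's or GE's actual mechanism.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device improves its copy of a linear model on its own data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)     # squared-error gradient
        w -= lr * grad
    return w

def federated_average(global_w, device_data):
    """Each device trains locally; the fleet shares only averaged weights."""
    updates = [local_update(global_w, X, y) for X, y in device_data]
    return np.mean(updates, axis=0)

# Three "cars", each with its own observations of the same driving task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
fleet = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    fleet.append((X, y))

w = np.zeros(2)
for _ in range(20):                           # rounds of share-and-average
    w = federated_average(w, fleet)
print(w)  # close to true_w: every car now benefits from the fleet's experience
```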
As other AI-powered devices begin to leverage this shared knowledge transfer, we could see an even faster pace of development. So if you think things are developing quickly today, remember we’re only just getting started.
#cyberpunk#ghostintheshell#ghost in the shell#bladerunner#blade runner 2049#neuromancer#williamgibson#william gibson#alteredcarbon#altered carbon#whengravityfails#when gravity fails#deusex#deus ex#technology#artificialintelligence#artificial intelligence#ai
14 notes
Photo

Krasnodar, Russia
#cyberpunk#ghostintheshell#ghost in the shell#bladerunner#blade runner#bladerunner2049#blade runner 2049#neuromancer#williamgibson#william gibson#alteredcarbon#altered carbon#whengravityfails#when gravity fails#deusex#deus ex#acronym#acrnm#russia#scifi
38 notes
Link
MIT engineers have devised a 3-D printing technique that uses a new kind of ink made from genetically programmed living cells.
The cells are engineered to light up in response to a variety of stimuli. When mixed with a slurry of hydrogel and nutrients, the cells can be printed, layer by layer, to form three-dimensional, interactive structures and devices.
The team then demonstrated its technique by printing a “living tattoo” — a thin, transparent patch patterned with live bacteria cells in the shape of a tree. Each branch of the tree is lined with cells sensitive to a different chemical or molecular compound. When the patch is adhered to skin that has been exposed to the same compounds, corresponding regions of the tree light up in response.
The researchers, led by Xuanhe Zhao, the Noyce Career Development Professor in MIT’s Department of Mechanical Engineering, and Timothy Lu, associate professor of biological engineering and of electrical engineering and computer science, say that their technique can be used to fabricate “active” materials for wearable sensors and interactive displays. Such materials can be patterned with live cells engineered to sense environmental chemicals and pollutants as well as changes in pH and temperature.
What’s more, the team developed a model to predict the interactions between cells within a given 3-D-printed structure, under a variety of conditions. The team says researchers can use the model as a guide in designing responsive living materials.
Zhao, Lu, and their colleagues have published their results today in the journal Advanced Materials. The paper’s co-authors are graduate students Xinyue Liu, Hyunwoo Yuk, Shaoting Lin, German Alberto Parada, Tzu-Chieh Tang, Eléonore Tham, and postdoc Cesar de la Fuente-Nunez.
A hardy alternative
In recent years, scientists have explored a variety of responsive materials as the basis for 3D-printed inks. For instance, scientists have used inks made from temperature-sensitive polymers to print heat-responsive shape-shifting objects. Others have printed photoactivated structures from polymers that shrink and stretch in response to light.
Zhao’s team, working with bioengineers in Lu’s lab, realized that live cells might also serve as responsive materials for 3D-printed inks, particularly as they can be genetically engineered to respond to a variety of stimuli. The researchers are not the first to consider 3-D printing genetically engineered cells; others have attempted to do so using live mammalian cells, but with little success.
“It turns out those cells were dying during the printing process, because mammalian cells are basically lipid bilayer balloons,” Yuk says. “They are too weak, and they easily rupture.”
Instead, the team identified a hardier cell type in bacteria. Bacterial cells have tough cell walls that are able to survive relatively harsh conditions, such as the forces applied to ink as it is pushed through a printer’s nozzle. Furthermore, bacteria, unlike mammalian cells, are compatible with most hydrogels — gel-like materials that are made from a mix of mostly water and a bit of polymer. The group found that hydrogels can provide an aqueous environment that can support living bacteria.
The researchers carried out a screening test to identify the type of hydrogel that would best host bacterial cells. After an extensive search, a hydrogel with pluronic acid was found to be the most compatible material. The hydrogel also exhibited an ideal consistency for 3-D printing.
“This hydrogel has ideal flow characteristics for printing through a nozzle,” Zhao says. “It’s like squeezing out toothpaste. You need [the ink] to flow out of a nozzle like toothpaste, and it can maintain its shape after it’s printed.”
From tattoos to living computers
Lu provided the team with bacterial cells engineered to light up in response to a variety of chemical stimuli. The researchers then came up with a recipe for their 3-D ink, using a combination of bacteria, hydrogel, and nutrients to sustain the cells and maintain their functionality.
“We found this new ink formula works very well and can print at a high resolution of about 30 micrometers per feature,” Zhao says. “That means each line we print contains only a few cells. We can also print relatively large-scale structures, measuring several centimeters.”
They printed the ink using a custom 3-D printer that they built using standard elements combined with fixtures they machined themselves. To demonstrate the technique, the team printed a pattern of hydrogel with cells in the shape of a tree on an elastomer layer. After printing, they solidified, or cured, the patch by exposing it to ultraviolet radiation. They then adhered the transparent elastomer layer, with the living pattern on it, to the skin.
To test the patch, the researchers smeared several chemical compounds onto the back of a test subject’s hand, then pressed the hydrogel patch over the exposed skin. Over several hours, branches of the patch’s tree lit up when bacteria sensed their corresponding chemical stimuli.
The researchers also engineered bacteria to communicate with each other; for instance, they programmed some cells to light up only when they receive a certain signal from another cell. To test this type of communication in a 3-D structure, they printed a thin sheet of hydrogel filaments containing “input,” or signal-producing, bacteria and chemicals, overlaid with another layer of filaments containing “output,” or signal-receiving, bacteria. They found the output filaments lit up only where the two layers overlapped and the output cells received input signals from the corresponding bacteria.
Yuk says in the future, researchers may use the team’s technique to print “living computers” — structures with multiple types of cells that communicate with each other, passing signals back and forth, much like transistors on a microchip.
“This is very future work, but we expect to be able to print living computational platforms that could be wearable,” Yuk says.
For more near-term applications, the researchers are aiming to fabricate customized sensors, in the form of flexible patches and stickers that could be engineered to detect a variety of chemical and molecular compounds. They also envision their technique may be used to manufacture drug capsules and surgical implants, containing cells engineered to produce compounds such as glucose, to be released therapeutically over time.
“We can use bacterial cells like workers in a 3-D factory,” Liu says. “They can be engineered to produce drugs within a 3-D scaffold, and applications should not be confined to epidermal devices. As long as the fabrication method and approach are viable, applications such as implants and ingestibles should be possible.”
This research was supported, in part, by the Office of Naval Research, National Science Foundation, National Institutes of Health, and MIT Institute for Soldier Nanotechnologies.
#cyberpunk#ghostintheshell#ghost in the shell#bladerunner#blade runner#bladerunner2049#blade runner 2049#neuromancer#williamgibson#william gibson#alteredcarbon#altered carbon#whengravityfails#when gravity fails#deusex#deus ex#technology#MIT#biological engineering#materials science#synthetic biology
26 notes