#Brain-Computer Interface
Text


The patient Gert-Jan Oskam said the breakthrough had given him "a freedom that I did not have" before.
The 40-year-old Dutchman has been paralysed in his legs for more than a decade after suffering a spinal cord injury during a bicycle accident.
However, using a new system, he can now walk "naturally," take on difficult terrain, and even climb stairs, according to a study published in the journal Nature.
The advance is the result of more than a decade of work by a team of researchers in France and Switzerland.
Last year, the team showed that a spinal cord implant -- which sends electrical pulses to stimulate movement in leg muscles -- had allowed three paralysed patients to walk again.
But they needed to press a button to move their legs each time.
Gert-Jan, who also has the spinal implant, said this made it difficult to get into the rhythm of taking a "natural step."
'Digital bridge'
The latest research combines the spinal implant with new technology called a brain-computer interface, which is implanted above the part of the brain that controls leg movement.
"The interface uses algorithms based on artificial intelligence methods to decode brain recordings in real time," the researchers said.
This allows the interface, which was designed by researchers at France's Atomic Energy Commission (CEA), to work out how the patient wants to move their legs at any moment.
The data is transmitted to the spinal cord implant via a portable device that fits in a walker or small backpack, allowing patients to get around without help from others.
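The closed loop the article describes, decoding intent from brain recordings and driving the spinal implant in response, can be sketched roughly as follows. Every function name here is a hypothetical stand-in: the actual device APIs are not public, and the real decoder is a trained AI model rather than a threshold.

```python
import random

def read_cortical_features() -> list[float]:
    """Hypothetical read of features recorded over the leg motor cortex."""
    return [random.random() for _ in range(16)]

def decode_intent(features: list[float]) -> str:
    """Toy decoder; the real system decodes brain recordings with a trained model."""
    return "step" if sum(features) / len(features) > 0.5 else "stand"

def stimulate_spinal_cord(intent: str) -> None:
    """Stand-in for sending pulse parameters to the spinal implant."""
    print(f"stimulation pattern: {intent}")

# Closed-loop "digital bridge": decode in real time, stimulate accordingly.
for _ in range(3):
    stimulate_spinal_cord(decode_intent(read_cortical_features()))
```

The point of the loop structure is that stimulation follows decoded intent continuously, which is what removes the button press the earlier system required.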
The two implants form what the researchers call a "digital bridge" across the disconnect between the brain and spinal cord created by Gert-Jan's accident.
"Now I can just do what I want -- when I decide to make a step the stimulation will kick in as soon as I think about it," Gert-Jan said.
After undergoing invasive surgery twice to implant both devices, "it has been a long journey to get here," he told a press conference in the Swiss city of Lausanne.
But among other changes, he is now able to stand at a bar again with friends while having a beer.
"This simple pleasure represents a significant change in my life," he said in a statement.

'Radically different'
Gregoire Courtine, a neuroscientist at Switzerland's Ecole Polytechnique Federale de Lausanne and a study co-author, said it was "radically different" from what had been accomplished before.
"Previous patients walked with a lot of effort -- now one just needs to think about walking to take a step," he told a press conference in the Swiss city of Lausanne.
There was another positive sign: following six months of training, Gert-Jan recovered some sensory perception and motor skills that he had lost in the accident.
He was even able to walk with crutches when the "digital bridge" was turned off.
Guillaume Charvet, a researcher at France's CEA, told AFP this suggests "that the establishment of a link between the brain and spinal cord would promote a reorganisation of the neuronal networks at the site of the injury."
So when could this technology be available to paralysed people around the world? Charvet cautioned it will take "many more years of research" to get to that point.
But the team are already preparing a trial to study whether this technology can restore function in arms and hands.
They also hope it could apply to other problems such as paralysis caused by stroke.
(AFP)
youtube

24 May 2023
#Youtube#Gert-Jan Oskam#spinal cord implant#brain-computer interface#Atomic Energy Commission (CEA)#digital bridge#Gregoire Courtine#Ecole Polytechnique Federale de Lausanne#Guillaume Charvet#spinal cord injury#Nature#paralysed patients
17 notes
·
View notes
Text
Weekly output: Bluesky verification, brain-computer interface, Xfinity Mobile, resisting FTC commissioners, Comcast pain points
View On WordPress
#blue check#blue checkmark#bluecheck#Bluesky#Bluesky verification#brain-computer interface#Comcast#Comcast rates#Federal Trade Commision#FTC#IAPP#NTT#NTT Upgrade 2025#pricing transparency#Rebecca Slaughter#wheelchair#Xfinity#Xfinity Mobile
0 notes
Text
Meta AI’s Big Announcements
New Post has been published on https://thedigitalinsider.com/meta-ais-big-announcements/
Meta AI’s Big Announcements
New AR glasses, Llama 3.2 and more.
Created Using Ideogram
Next Week in The Sequence:
Edge 435: Our series about SSMs continues with Hungry Hungry Hippos (H3), which has become one of the most important layers in SSM models. We review the original H3 paper and discuss Character.ai's PromptPoet framework.
Edge 436: We review Salesforce's recent work on models specialized for agentic tasks.
You can subscribe to The Sequence below:
TheSequence is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
📝 Editorial: Meta AI’s Big Announcements
Meta held its big conference, *Connect 2024*, last week, and AI was front and center. The biggest headline was the launch of the fully holographic Orion AI glasses, which represent one of the most important products in Meta's ambitious and highly controversial AR strategy. In addition to the impressive first-generation Orion glasses, Meta announced that it is developing a new brain-computer interface for the next version.
The other major release at the conference was Llama 3.2, which includes smaller language models of sizes 1B and 3B, as well as larger 11B and 90B vision models. This is Meta’s first major attempt to open source image models, signaling its strong commitment to open-source generative AI. Additionally, Meta AI announced the Llama Stack, which provides standard APIs in areas such as inference, memory, evaluation, post-training, and several other aspects required in Llama applications. With this release, Meta is transitioning Llama from isolated models to a complete stack for building generative AI apps.
There were plenty of other AI announcements at *Connect 2024*:
Meta introduced voice capabilities to its Meta AI chatbot, allowing users to have realistic conversations with the chatbot. This feature puts Meta AI on par with its competitors, like OpenAI and Google, which have already introduced voice modes to their products.
Meta announced an AI-powered, real-time language translation feature for its Ray-Ban smart glasses. This feature will allow users to translate text from Spanish, French, and Italian by the end of the year.
Meta is developing an AI feature for Instagram and Facebook Reels that will automatically dub and lip-sync videos into different languages. This feature is currently in testing in the US and Latin America.
Meta is adding AI image generation features to Facebook and Instagram. The new feature will be similar to existing AI image generators, such as Apple’s Image Playground, and will allow users to share AI-generated images with friends or create posts.
It was an impressive week for Meta AI, to say the least.
🔎 ML Research
AlphaProteo
Google DeepMind published a paper introducing AlphaProteo, a new family of models for protein design. The models are optimized for novel, high-strength proteins that can improve our understanding of biological processes —> Read more.
Molmo and PixMo
Researchers from the Allen Institute for AI published a paper detailing Molmo and PixMo, an open-weight, open-data vision-language model (VLM). Molmo showcases how to train VLMs from scratch, while PixMo is the core set of datasets used during training —> Read more.
Instruction Following Without Instruction Tuning
Researchers from Stanford University published a paper detailing a technique called implicit instruction tuning, which surfaces instruction-following behaviors without explicitly fine-tuning the model. The paper also suggests some simple changes to a model's distribution that can yield this implicit instruction-tuning behavior —> Read more.
Robust Reward Model
Google DeepMind published a paper discussing the difficulty traditional reward models (RMs) have identifying preferences amid prompt-independent artifacts. The paper introduces the notion of a robust reward model (RRM) that addresses this challenge and shows strong improvements in models like Gemma —> Read more.
Real Time Notetaking
Researchers from Carnegie Mellon University published a paper outlining NoTeeline, a real-time note-generation method for video streams. NoTeeline generates micronotes that capture key points in a video while maintaining a consistent writing style —> Read more.
AI Watermarking
Researchers from Carnegie Mellon University published a paper evaluating different design choices in LLM watermarking. The paper also studies different attacks that result in the bypassing or removal of different watermarking techniques —> Read more.
🤖 AI Tech Releases
Llama 3.2
Meta open sourced Llama 3.2 small and medium size models —> Read more.
Llama Stack
As part of the Llama 3.2 release, Meta open sourced the Llama Stack, a series of standardized building blocks for developing Llama-powered applications —> Read more.
Gemini 1.5
Google released two updated Gemini models and new pricing and performance tiers —> Read more.
Cohere APIs
Cohere launched a new set of APIs that improve its experience for developers —> Read more.
🛠 Real World AI
Data Apps at Airbnb
Airbnb discusses Sandcastle, an internal framework that allows data scientists to rapidly prototype data-driven apps —> Read more.
Feature Caching at Pinterest
The Pinterest engineering team discusses its internal architecture for feature caching in AI recommender systems —> Read more.
📡AI Radar
Meta introduced Orion, its very impressive augmented reality glasses.
James Cameron joined Stability AI’s Board of Directors.
The OpenAI soap opera continues with the resignation of its long-time CTO and rumours of a shift away from its capped-profit status.
OpenAI’s Chief Research Officer also resigned this week.
Letta, one of the most anticipated startups from UC Berkeley’s Sky Computing Lab, just came out of stealth mode with a $10 million round.
Image model platform Black Forest Labs is closing a new $100 million round.
Google announced a new $120 million fund dedicated to AI education.
Airtable unveiled a new suite of AI capabilities.
Enterprise AI startup Ensemble raised $3.3 million to tackle the data-quality problem in building models.
Microsoft unveiled its Trustworthy AI initiative.
Runway plans to allocate $5 million to producing AI-generated films.
Data platform Airbyte can now create connectors directly from the API documentation.
Skills intelligence platform Workera unveiled a new agent that can assess, develop, and verify skills.
Convergence raised $12 million to build AI agents with long-term memory.
TheSequence is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
#2024#agent#agents#ai#AI AGENTS#AI Chatbot#AI image#AI image generation#AI-powered#airtable#alphaproteo#America#Announcements#API#APIs#apple#applications#apps#ar#architecture#augmented reality#Behavior#Black Forest Labs#board#Brain#brain-computer interface#Building#Capture#Carnegie Mellon University#challenge
0 notes
Text
Controlling Amazon Alexa with Your Mind: A Breakthrough for ALS
Imagine controlling your Amazon Alexa with just your thoughts. For Mark, a 64-year-old living with amyotrophic lateral sclerosis (ALS), this has become a reality thanks to groundbreaking technology. ALS, a progressive disease affecting nerve cells in the brain and spinal cord, gradually robs individuals of muscle control. Despite still being able to walk and speak, Mark’s mobility is severely…
#ALSAdvances#AssistiveTech#BCIRevolution#braincomputerconnection#BrainTech#DigitalIndependence#EmpowerWithTech#FutureOfAccessibility#InnovativeAssistiveTech#MindControlTech#NeuroTechnology#SmartTech#TechBreakthrough#TechForDisability#TechForGood#ThoughtControl#Amazon Alexa#brain-computer interface#Health#People with paralysis#Science And Tech#Science News
0 notes
Photo


(via Tianjin University sets up first brain-computer interface program)
0 notes
Text
Pioneering BCI: Journey Before Neuralink
Pioneering BCI: Journey Before Neuralink @neosciencehub #neosciencehub #science #neuralink #neurotechnology #neurotech #neuroscience #BrownUniversity #scientific #BCI #JohnDonoghue #braincomputer #research #BlackrockNeurotech #NSH #BrainImplants
The recent success of Neuralink in implanting a brain-computer interface (BCI) in a human brain has captured the world’s attention. However, it’s crucial to recognize that this achievement stands on the shoulders of numerous pioneering efforts in the field of neurotechnology. This article explores the significant contributions made before Neuralink that have shaped the current landscape of BCI…
View On WordPress
#BCI Breakthroughs#BCI Development#Blackrock Neurotech#Brain Implants#Brain-Computer Interface#featured#John Donoghue#Neuralink#Neurological Innovation#Neuroscientific Advancements#Neurotechnology Pioneers#Precision Neuroscience#sciencenews
0 notes
Text
Next-Gen Tech: The Rise and Potential of Brain-Computer Interfaces (BCIs)
🧠💻 Mind meets machine: The stuff of sci-fi dreams is becoming real with Brain-Computer Interfaces! Imagine controlling tech with your thoughts. 🚀 Get ready to explore the limitless possibilities of BCIs. The future is now! #BCIs #Tech #ElonMusk
Imagine a world where you can make things happen with just your thoughts. As technology zooms ahead, something truly mind-blowing is emerging: Brain-Computer Interfaces, or BCIs. These amazing devices are like bridges between our minds and computers, creating a future where our thoughts and computers work together like never before. Let’s explore the incredible world of BCIs and see how they’re…

View On WordPress
#Brain signals#Brain Tech#Brain-Computer Interface#Bright Fututre#Digital Transformation#Elon Musk#Healthcare Advancements#Healthcare Technology#Innovation#Neuralink#Neurology
1 note
·
View note
Text
Unleashing the Power of the Mind: Revolutionary Brain-Computer Interface Unveiled!
Unleashing the Power of the Mind: Revolutionary Brain-Computer Interface Unveiled! #Brain #brain-computer
Brain-Computer Interface: In a groundbreaking development that could redefine the boundaries of human potential, scientists from the University of Cambridge have unveiled a state-of-the-art brain-computer interface (BCI) that promises to revolutionize how we interact with technology. This cutting-edge device, which has been hailed as a game-changer in the field of neurotechnology, allows users to…

View On WordPress
0 notes
Text
why neuroscience is cool
space & the brain are like the two final frontiers
we know just enough to know we know nothing
there are radically new theories all. the. time. and even just in my research assistant work i've been able to meet with, talk to, and work with the people making them
it's such a philosophical science
potential to do a lot of good in fighting neurological diseases
things like BCI (brain computer interface) and OI (organoid intelligence) are soooooo new and anyone's game - motivation to study hard and be successful so i can take back my field from elon musk
machine learning is going to rapidly increase neuroscience progress i promise you. we get so caught up in AI stealing jobs but yes please steal my job of manually analyzing fMRI scans please i would much prefer to work on the science PLUS computational simulations will soon >>> animal testing to make all drug testing safer and more ethical !! we love ethical AI <3
collab with...everyone under the sun - psychologists, philosophers, ethicists, physicists, molecular biologists, chemists, drug development, machine learning, traditional computing, business, history, education, literally try to name a field we don't work with
it's the brain eeeeee
#my motivation to study so i can be a cool neuroscientist#science#women in stem#academia#stem#stemblr#studyblr#neuroscience#stem romanticism#brain#psychology#machine learning#AI#brain computer interface#organoid intelligence#motivation#positivity#science positivity#cogsci#cognitive science
2K notes
·
View notes
Text
Love, Death & Robots - S1E1 - Sonnie's Edge (2019)
#love death and robots#ldar#scifi#3d animation#futuristic fashion#futurism#dystopian#cyberpunk aesthetic#cyberpunk art#cyberpunk#sci fi#science fiction#neon colors#neon aesthetic#neon noir#brain computer interface#neurotechnology#neuralink#gifs#gifset
401 notes
·
View notes
Text
Perhaps controversial, but: why the hell do people wanna download fics as EPUBs? I'd vastly rather they be PDFs
#which is funny b/c i grew up with a kindle so I have a lot of experience with the 'page flipping' format epub uses#...OTOH part of it may be the fact epubs AREN'T exactly formatted like the kindle and my brain wigs out about it?#b/c yeah i just hate the two-screen form epub uses; i'd rather just have the infinite scroll a pdf provides#if/when i still used my kindle and downloaded fic to it that was a different story; but on phone or computer? pdf 4 life#this is me#the monkey speaks#discourse and discussion (user interface)#discourse and discussion (fanfiction)
20 notes
·
View notes
Text

41 notes
·
View notes
Text
Major Breakthrough in Telepathic Human-AI Communication: MindSpeech Decodes Seamless Thoughts into Text
New Post has been published on https://thedigitalinsider.com/major-breakthrough-in-telepathic-human-ai-communication-mindspeech-decodes-seamless-thoughts-into-text/
Major Breakthrough in Telepathic Human-AI Communication: MindSpeech Decodes Seamless Thoughts into Text
In a revolutionary leap forward in human-AI interaction, scientists at MindPortal have successfully developed MindSpeech, the first AI model capable of decoding continuous imagined speech into coherent text without any invasive procedures. This advancement marks a significant milestone in the quest for seamless, intuitive communication between humans and machines.
The Pioneering Study: Non-Invasive Thought Decoding
The research, conducted by a team of leading experts and published on arXiv and ResearchGate, demonstrates how MindSpeech can decode complex, free-form thoughts into text under controlled test conditions. Unlike previous efforts that required invasive surgery or were limited to simple, memorized verbal cues, this study shows that AI can dynamically interpret imagined speech from brain activity non-invasively.
Researchers employed a portable, high-density Functional Near-Infrared Spectroscopy (fNIRS) system to monitor brain activity while participants imagined sentences across various topics. The novel approach involved a ‘word cloud’ task, where participants were presented with words and asked to imagine sentences related to these words. This task covered over 90% of the most frequently used words in the English language, creating a rich dataset of 433 to 827 sentences per participant, with an average length of 9.34 words.
Leveraging Advanced AI: Llama2 and Brain Signals
The AI component of MindSpeech was powered by the Llama2 Large Language Model (LLM), a sophisticated text generation tool guided by brain signal-generated embeddings. These embeddings were created by integrating brain signals with context input text, allowing the AI to generate coherent text from imagined speech.
Key metrics such as BLEU-1 and BERT P scores were used to evaluate the accuracy of the AI model. The results were impressive, showing statistically significant improvements in decoding accuracy for three out of four participants. For example, Participant 1’s BLEU-1 score was significantly higher at 0.265 compared to 0.224 with permuted inputs, with a p-value of 0.004, indicating a robust performance in generating text closely aligned with the imagined thoughts.
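BLEU-1, one of the metrics cited above, is simply clipped unigram precision with a brevity penalty. A minimal implementation (my own sketch, not the authors' evaluation code) looks like this:

```python
from collections import Counter
import math

def bleu1(reference: str, candidate: str) -> float:
    """BLEU-1: clipped unigram precision times a brevity penalty."""
    ref = reference.lower().split()
    cand = candidate.lower().split()
    if not cand:
        return 0.0
    ref_counts = Counter(ref)
    # Clip each candidate unigram's count by its count in the reference,
    # so repeating a common word cannot inflate the score.
    overlap = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    precision = overlap / len(cand)
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision
```

Under this definition the reported 0.265 vs. 0.224 gap means the brain-conditioned generations shared noticeably more vocabulary with the imagined sentences than permuted-input baselines did.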
Brain Activity Mapping and Model Training
The study also mapped brain activity related to imagined speech, focusing on areas like the lateral temporal cortex, dorsolateral prefrontal cortex (DLPFC), and visual processing areas in the occipital region. These findings align with previous research on speech encoding and underscore the feasibility of using fNIRS for non-invasive brain monitoring.
Training the AI model involved a complex process of prompt tuning, where the brain signals were transformed into embeddings that were then used to guide text generation by the LLM. This approach enabled the generation of sentences that were not only linguistically coherent but also semantically similar to the original imagined speech.
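The prompt-tuning step can be pictured as a learned projection from brain-signal features into a few "soft prompt" vectors that are prepended to the LLM's input embeddings. The sketch below uses made-up dimensions and a random matrix standing in for the trained projector; none of these numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (reduced for the sketch; Llama2's embedding
# width is larger).
N_FEATURES = 128      # flattened fNIRS features per imagined sentence
EMBED_DIM = 512       # token-embedding width of the language model
N_SOFT_TOKENS = 8     # number of soft-prompt vectors injected

# A linear projector maps brain features into prompt-embedding space.
# In the real system this would be trained; here it is random.
projector = rng.normal(scale=0.02, size=(N_FEATURES, N_SOFT_TOKENS * EMBED_DIM))

def brain_to_soft_prompt(fnirs_features: np.ndarray) -> np.ndarray:
    """Project brain-signal features into N_SOFT_TOKENS embedding vectors."""
    return (fnirs_features @ projector).reshape(N_SOFT_TOKENS, EMBED_DIM)

# The soft prompt would be concatenated in front of the context-text
# embeddings before decoding with the (frozen) LLM.
soft_prompt = brain_to_soft_prompt(rng.normal(size=N_FEATURES))
assert soft_prompt.shape == (N_SOFT_TOKENS, EMBED_DIM)
```

The design choice worth noting is that only the projector needs training; the language model itself stays frozen, which is what makes prompt tuning far cheaper than fine-tuning the LLM on brain data.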
A Step Toward Seamless Human-AI Communication
MindSpeech represents a groundbreaking achievement in AI research, demonstrating for the first time that it is possible to decode continuous imagined speech from the brain without invasive procedures. This development paves the way for more natural and intuitive communication with AI systems, potentially transforming how humans interact with technology.
The success of this study also highlights the potential for further advancements in the field. While the technology is not yet ready for widespread use, the findings provide a glimpse into a future where telepathic communication with AI could become a reality.
Implications and Future Research
The implications of this research are vast, from enhancing assistive technologies for individuals with communication impairments to opening new frontiers in human-computer interaction. However, the study also points out the challenges that lie ahead, such as improving the sensitivity and generalizability of the AI model and adapting it to a broader range of users and applications.
Future research will focus on refining the AI algorithms, expanding the dataset with more participants, and exploring real-time applications of the technology. The goal is to create a truly seamless and universal brain-computer interface that can decode a wide range of thoughts and ideas into text or other forms of communication.
Conclusion
MindSpeech is a pioneering breakthrough in human-AI communication, showcasing the incredible potential of non-invasive brain computer interfaces.
Readers who wish to learn more about this company should read our interview with Ekram Alam, CEO and Co-founder of MindPortal, where we discuss how MindPortal is interfacing with Large Language Models through mental processes.
#ai#ai model#AI research#AI systems#Algorithms#applications#approach#BERT#Brain#brain activity#brain signals#brain-computer interface#brain-machine interface#CEO#Cloud#communication#computer#continuous#development#embeddings#employed#English#focus#form#Forms#Future#how#human#Human-computer interaction#humans
0 notes
Text
It's the year 2050. I think about making a grilled cheese but my neuralink causes 14 ads to start playing over the recipe.
9 notes
·
View notes
Note
15 for the ultra-processed disability ask thing!
Ok for reference that's
What’s something your disability has stopped you from learning or doing?
Fun story!
For my Ph.D. I studied brain computer interfaces. Specifically P300 brain computer interfaces. Here's my dissertation, for reference.
As far as I can tell, most people studying brain computer interfaces will use them at some point. It might not be for their own communication needs (I'm a part-time AAC user but I do just fine using my hands to access my AAC; I don't need a brain computer interface). But if you work in a lab with brain computer interfaces, you're probably going to get called on to test that, like, new settings do what they say they do / that it's still possible to make selections / that the data collection method actually collects the data by using the interface. Plus we get tapped to be "neurotypical controls" (what my papers say) and/or "healthy controls" (what a lot of other people's papers say; I've also seen people alternate between the two) on a semi-regular basis, if we're close enough on whatever other demographics they're matching on to be reasonable.
I'm obviously not a neurotypical control. But I also cannot use a P300 brain computer interface, because that's basically dependent on flashing lights a bunch of times per second. So I got a Ph.D. studying a kind of brain computer interface that I cannot use.
(There are other kinds, some of which I can use. One particularly desperate master's student working with motor imagery brain computer interfaces tried to change his thesis wording to "controls without Parkinson's" so he could use me as a control. Pretty sure his supervisors made him take me back out, though; his final thesis says both neurotypical control and healthy control.)
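For readers unfamiliar with why a P300 speller needs all that flashing: the P300 is a small voltage bump time-locked to rare target flashes, and it only becomes visible after averaging many repetitions. A toy simulation of that averaging, with entirely synthetic single-channel data and assumed parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

FS = 256                 # sampling rate in Hz (assumed)
EPOCH = int(0.8 * FS)    # 800 ms window after each flash
N_FLASHES = 60           # repetitions per item (also assumed)

# Synthetic EEG epochs: noise everywhere, plus a positive bump around
# 300 ms post-stimulus on target trials only (the P300 itself).
t = np.arange(EPOCH) / FS
p300 = 4.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05**2))

target_epochs = rng.normal(size=(N_FLASHES, EPOCH)) + p300
nontarget_epochs = rng.normal(size=(N_FLASHES, EPOCH))

# Averaging over repeated flashes suppresses noise while keeping the
# time-locked response; that is how target rows/columns are identified,
# and why the interface needs so many flashes per selection.
avg_target = target_epochs.mean(axis=0)
avg_nontarget = nontarget_epochs.mean(axis=0)

peak_ms = 1000 * np.argmax(avg_target) / FS
print(f"averaged target peak at ~{peak_ms:.0f} ms")
```

The dependence on many rapid flashes is exactly what makes this paradigm inaccessible to anyone who can't tolerate flashing lights, which is the point of the post above.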
5 notes
·
View notes
Text
Neuralink's Human Trials: Regulatory Hurdles of Neurotechnology
Neuralink's Human Trials: Regulatory Hurdles of Neurotechnology @neosciencehub #neosciencehub #science #neuralink #humantrails #neurotechnology #elonmusk #FDA #healthcare #medicalscience #ClinicalResearch #health #AITech #BrainComputer #DataPrivacy #NSH
The journey of Neuralink, Elon Musk’s ambitious neurotechnology venture, to its first human trials represents a significant achievement in the field of biomedical innovation. However, this path was not without its challenges. Neo Science Hub’s Scientific Advisory Team examines the intricate regulatory landscape that companies like Neuralink must navigate, highlighting the complex interplay of…
View On WordPress
#Biomedical Ethics#Brain-Computer Interface#Clinical Trials#Data Security#Elon Musk#Ethical Technology#FDA Approval#featured#Health Law#Medical Innovation#Neuralink#Neuroscientific Research#Neurotechnology Regulation#Patient Safety#Regulatory Compliance#sciencenews#Technological Advancements
0 notes