#Computational Neuroscience
Text
Interesting Reviews for Week 13, 2025
Neural circuits for goal-directed navigation across species. Basu, J., & Nagel, K. (2024). Trends in Neurosciences, 47(11), 904–917.
Neural Network Excitation/Inhibition: A Key to Empathy and Empathy Impairment. Tang, Y., Wang, C., Li, Q., Liu, G., Song, D., Quan, Z., Yan, Y., & Qing, H. (2024). The Neuroscientist, 30(6), 644–665.
Event perception and event memory in real-world experience. Bailey, H., & Smith, M. E. (2024). Nature Reviews Psychology, 3(11), 754–766.
Plasticity of Dendritic Spines Underlies Fear Memory. Choi, J. E., & Kaang, B.-K. (2024). The Neuroscientist, 30(6), 690–703.
#neuroscience#science#research#brain science#scientific publications#cognitive science#reviews#neurobiology#cognition#psychophysics#computational neuroscience
27 notes
Text
Discrete topological spaces and place field maps (Babichev, A., & Dabaghian, Y. (2018). Topological Schemas of Memory Spaces. Frontiers in Computational Neuroscience, 12, 27. doi:10.3389/fncom.2018.00027)
24 notes
Text
i unironically love public speaking like i'm SPEAKING in PUBLIC and you have to LISTEN TO ME and i am SAYING THINGS and the things are about SCIENCE
#presenting my research in a few days#science#women in stem#stem#stemblr#studyblr#research#student#school#public speaking#neuroscience#computer science#computational neuroscience
19 notes
Text
Zoomposium with Dr. Gabriele Scheler: “The language of the brain - or how AI can learn from biological language models”

In another very exciting interview from our Zoomposium themed blog “Artificial intelligence and its consequences”, Axel and I talk this time to the German computer scientist, AI researcher and neuroscientist Gabriele Scheler, who has been living and researching in the USA for some time. She is co-founder and research director of the Carl Correns Foundation for Mathematical Biology in San José, USA, which was named after her famous German ancestor Carl Correns. Her research there covers epigenetic influences, approached through computational neuroscience in the form of mathematical modeling, theoretical analysis of empirical data, and simulation. Gabriele contacted me because she had come across our Zoomposium interview “How do machines think?” with Konrad Kording and wanted to conduct an interview with us based on her own expertise. Of course, I was immediately enthusiastic about the idea, as the topic of “thinking vs. language” had been “hanging in the air” for some time and had also led to my essay “Realists vs. nominalists - or the old dualism ‘thinking vs. language’” (https://philosophies.de/index.php/2024/07/02/realisten-vs-nominalisten/).
In our Zoomposium we have also often talked with AI researchers about the extent to which the development of large language models (LLMs), such as ChatGPT, also says something about the formation and use of language in the human brain. In other words, it is really the old question of whether we can think without language, or whether cognitive performance only becomes possible through the formation and use of language. Interestingly, this question is being driven forward by AI research and computational neuroscience. Here, too, a gradual paradigm shift is emerging: away from the purely information-technological, mechanistic, data-driven “big data” concept behind LLMs and towards increasingly information-biological, polycontextural, structure-driven concepts of artificial neural networks (ANNs). This is exactly what I had already tried to describe in my earlier essay “The system needs new structures” (https://philosophies.de/index.php/2021/08/14/das-system-braucht-neue-strukturen/).
So it was all the more obvious that we should talk to Gabriele, a proven expert in bioinformatics, computational linguistics and computational neuroscience, in order to clarify such questions. Since she comes from both fields (linguistics and neuroscience), she was able to answer our questions from both perspectives in our joint interview. More at: https://philosophies.de/index.php/2024/11/18/sprache-des-gehirns/
or: https://youtu.be/forOGk8k0W8
#artificial consciousness#artificial intelligence#ai#neuroscience#consciousness#artificial neural networks#large language model#chatgpt#bioinformatics#computational neuroscience
2 notes
Text
The Elegant Math of Machine Learning
Anil Ananthaswamy’s 3 Greatest Revelations While Writing Why Machines Learn.
— By Anil Ananthaswamy | July 23, 2024

Image: Aree S., Shutterstock
1- Machines Can Learn!
A few years ago, I decided I needed to learn how to code simple machine learning algorithms. I had been writing about machine learning as a journalist, and I wanted to understand the nuts and bolts. (My background as a software engineer came in handy.) One of my first projects was to build a rudimentary neural network to try to do what astronomer and mathematician Johannes Kepler did in the early 1600s: analyze data collected by Danish astronomer Tycho Brahe about the positions of Mars to come up with the laws of planetary motion.
I quickly discovered that an artificial neural network—a type of machine learning algorithm that uses networks of computational units called artificial neurons—would require far more data than was available to Kepler. To satisfy the algorithm’s hunger, I generated a decade’s worth of data about the daily positions of planets using a simple simulation of the solar system.
After many false starts and dead-ends, I coded a neural network that—given the simulated data—could predict future positions of planets. It was beautiful to observe. The network indeed learned the patterns in the data and could prognosticate about, say, where Mars might be in five years.
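The setup described above can be sketched in a few lines. To be clear, this is not the author’s code: an idealized circular orbit stands in for his solar-system simulation, and instead of a neural network, the simplest learner that works on such clean data (estimating Mars’ angular velocity from two consecutive samples and extrapolating) plays the role of the black box.

```python
import math

# Idealized circular orbit for Mars (real orbits are elliptical; the
# period and radius are rounded textbook values used for illustration).
MARS_PERIOD_DAYS = 687.0
MARS_RADIUS_AU = 1.52

def mars_position(day):
    """(x, y) heliocentric position in AU on a given day."""
    theta = 2 * math.pi * day / MARS_PERIOD_DAYS
    return (MARS_RADIUS_AU * math.cos(theta), MARS_RADIUS_AU * math.sin(theta))

# A decade of simulated daily positions, as in the text.
data = [mars_position(d) for d in range(3650)]

# The simplest possible "learner": recover the angular velocity from the
# last two samples, then extrapolate forward along the circle.
x0, y0 = data[-2]
x1, y1 = data[-1]
omega = math.atan2(y1, x1) - math.atan2(y0, x0)  # radians per day

def predict(days_ahead):
    """Predicted position `days_ahead` days after the last sample."""
    theta = math.atan2(y1, x1) + omega * days_ahead
    return (MARS_RADIUS_AU * math.cos(theta), MARS_RADIUS_AU * math.sin(theta))

# Where will Mars be in five years? Compare against the simulation itself.
pred = predict(5 * 365)
true = mars_position(3649 + 5 * 365)
```

A neural network trained on the same data has to discover this regularity implicitly in its weights rather than being handed the geometry, which is exactly what makes it a black box.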

Functions of the Future: Given enough data, some machine learning algorithms can approximate just about any sort of function—whether converting x into y or a string of words into a painterly illustration—author Anil Ananthaswamy found out while writing his new book, Why Machines Learn: The Elegant Math Behind Modern AI. Photo courtesy of Anil Ananthaswamy.
I was instantly hooked. Sure, Kepler did much, much more with much less—he came up with overarching laws that could be codified in the symbolic language of math. My neural network simply took in data about prior positions of planets and spit out data about their future positions. It was a black box, its inner workings undecipherable to my nascent skills. Still, it was a visceral experience to witness Kepler’s ghost in the machine.
The project inspired me to learn more about the mathematics that underlies machine learning. The desire to share the beauty of some of this math led to Why Machines Learn.
2- It’s All (Mostly) Vectors.
One of the most amazing things I learned about machine learning is that everything and anything—be it positions of planets, an image of a cat, the audio recording of a bird call—can be turned into a vector.
In machine learning models, vectors are used to represent both the input data and the output data. A vector is simply a sequence of numbers. Each number can be thought of as the distance from the origin along some axis of a coordinate system. For example, here’s one such sequence of three numbers: 5, 8, 13. So, 5 is five steps along the x-axis, 8 is eight steps along the y-axis and 13 is 13 steps along the z-axis. If you take these steps, you’ll reach a point in 3-D space, which represents the vector, expressed as the sequence of numbers in brackets, like this: [5 8 13].
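The sequence-of-numbers picture is easy to make concrete; here is a minimal sketch using the three-number example from the text:

```python
import math

# The sequence 5, 8, 13 treated as a point in 3-D space: each entry is a
# distance along one axis of the coordinate system.
v = [5, 8, 13]

# The vector's length is the straight-line distance from the origin
# to that point (the Pythagorean theorem, extended to three axes).
length = math.sqrt(sum(c * c for c in v))

# Operations on vectors work component-wise, e.g. scaling by 2:
doubled = [2 * c for c in v]
```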
Now, let’s say you want your algorithm to represent a grayscale image of a cat. Well, each pixel in that image is a number encoded using one byte or eight bits of information, so it has to be a number between zero and 255, where zero means black and 255 means white, and the numbers in-between represent varying shades of gray.
If it’s a 100×100 pixel image, then you have 10,000 pixels in total in the image. So if you line up the numerical values of each pixel in a row, voila, you have a vector representing the cat in 10,000-dimensional space. Each element of that vector represents the distance along one of 10,000 axes. A machine learning algorithm encodes the 100×100 image as a 10,000-dimensional vector. As far as the algorithm is concerned, the cat has become a point in this high-dimensional space.
Turning images into vectors and treating them as points in some mathematical space allows a machine learning algorithm to now proceed to learn about patterns that exist in the data, and then use what it’s learned to make predictions about new unseen data. Now, given a new unlabeled image, the algorithm simply checks where the associated vector, or the point formed by that image, falls in high-dimensional space and classifies it accordingly. What we have is one, very simple type of image recognition algorithm: one which learns, given a bunch of images annotated by humans as that of a cat or a dog, how to map those images into high-dimensional space and use that map to make decisions about new images.
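Here is a toy version of that pipeline, with tiny 4×4 “images” standing in for the 100×100 ones and a nearest-centroid rule standing in for whatever decision rule a real algorithm would learn. The data and labels are invented purely for illustration:

```python
# Flatten tiny grayscale "images" into vectors, then classify a new image
# by which labeled class centroid its vector is nearest to.

def flatten(image):
    """Turn a 2-D grid of pixel values (0-255) into one long vector."""
    return [px for row in image for px in row]

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Invented training set: "cat" images are dark overall, "dog" images bright.
cats = [flatten([[40 + i] * 4 for i in range(4)]) for _ in range(3)]
dogs = [flatten([[200 + i] * 4 for i in range(4)]) for _ in range(3)]
centroids = {"cat": centroid(cats), "dog": centroid(dogs)}

def classify(image):
    """Label a new image by the nearest class centroid in pixel space."""
    v = flatten(image)
    return min(centroids, key=lambda label: squared_distance(v, centroids[label]))

label = classify([[60] * 4 for _ in range(4)])  # a new, darkish image
```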
3- Some Machine Learning Algorithms Can Be “Universal Function Approximators.”
One way to think about a machine learning algorithm is that it converts an input, x, into an output, y. The inputs and outputs can be a single number or a vector. Consider y = f(x). Here, x could be a 10,000-dimensional vector representing a cat or a dog, y could be 0 for cat and 1 for dog, and it’s the machine learning algorithm’s job to find, given enough annotated training data, the best possible function, f, that converts x to y.
There are mathematical proofs that show that certain machine learning algorithms, such as deep neural networks, are “universal function approximators,” capable in principle of approximating any function, no matter how complex.
A deep neural network has layers of artificial neurons, with an input layer, an output layer, and one or more so-called hidden layers sandwiched between them. There’s a mathematical result called the universal approximation theorem which shows that, given an arbitrarily large number of neurons, even a network with just one hidden layer can approximate any function. In other words, if a correlation exists in the data between the input and the desired output, then the neural network will be able to find a very good approximation of a function that implements this correlation.
This is a profound result, and one reason why deep neural networks are being trained to do more and more complex tasks, as long as we can provide them with enough pairs of input-output data and make the networks big enough.
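To see what a one-hidden-layer approximator actually looks like, here is a hand-built sketch: the hidden units are steep sigmoids whose weighted sum forms a staircase approximation to f(x) = x² on [0, 1]. The weights are constructed by hand rather than learned, purely to exhibit the form the theorem talks about; all the constants are illustrative choices.

```python
import math

def sigmoid(z):
    """The classic neural-network activation: squashes z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

STEEPNESS = 200.0  # steeper sigmoids -> sharper steps -> better fit
N_STEPS = 100      # number of hidden units

def network(x):
    """One hidden layer of sigmoids, summed by a linear output unit."""
    total = 0.0
    for i in range(1, N_STEPS + 1):
        threshold = i / N_STEPS
        # Outgoing weight of hidden unit i: the increment of f = x^2
        # between neighboring step positions.
        increment = threshold ** 2 - ((i - 1) / N_STEPS) ** 2
        # The unit "switches on" halfway between the two step positions.
        total += increment * sigmoid(STEEPNESS * (x - threshold + 0.5 / N_STEPS))
    return total

# Worst-case error of the staircase over a fine grid on [0, 1].
max_error = max(abs(network(x / 1000) - (x / 1000) ** 2) for x in range(1001))
```

Adding more hidden units (and steeper sigmoids) shrinks the error further, which is the theorem in miniature: width buys approximation accuracy.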
So, whether it’s a function that takes an image and turns it into a 0 (for cat) or a 1 (for dog), or a function that takes a string of words and converts it into an image for which those words serve as a caption, or potentially even a function that takes a snapshot of the road ahead and spits out instructions for a car to change lanes or come to a halt, universal function approximators can in principle learn and implement such functions, given enough training data. The possibilities are endless, though it’s worth keeping in mind that correlation does not equate to causation.
— Anil Ananthaswamy is a Science Journalist who writes about AI and Machine Learning, Physics, and Computational Neuroscience. He’s a 2019-20 MIT Knight Science Journalism Fellow. His latest book is Why Machines Learn: The Elegant Math Behind Modern AI.
#Nautilus#Mathematics#Elegant Math#Machine Learning#Mathematics | Mostly Vectors#Algorithms | “Universal Function Approximators”#Anil Ananthaswamy#Physics#Computational Neuroscience#MIT | Knight Science Journalism Fellow
3 notes
Text
Spring Reading -nonfiction
Currently reading “Being You: A New Science of Consciousness” by Anil Seth.


0 notes
Text
Computational Technologies for Healthcare

In the modern era, technologies are evolving rapidly and transforming our lives, particularly in healthcare, where the goal is both to save lives and to give practitioners advanced tools to improve their decisions. Artificial intelligence, big data analytics, the Internet of Things, and machine learning all play significant roles in accomplishing this. This article begins by outlining contemporary computational technologies in healthcare and discussing their significance and use. It then surveys the technologies that are helping healthcare improve, and explores the research opportunities and obstacles associated with implementing computational technology in healthcare.
Introduction:
Health care has greatly improved thanks to automated systems that assist clinicians in making precise diagnoses and that better organize information systems. Machine learning, artificial intelligence (AI), the Internet of Things (IoT), and data analytics are all important in the healthcare business. Machine learning algorithms, for instance, detect heart disease and cancer at their earliest stages and assist practitioners in making appropriate decisions and diagnoses. With the advancement of AI algorithms, the medical sector is gaining assistance for more successful surgery. Data analytics and the IoT aim to handle data effectively and remotely, allowing doctors to access patient reports from anywhere and at any time and, more significantly, to forecast health outcomes and make informed treatment-plan decisions. Care has also improved through advising patients to visit physicians depending on their health status. Fig. 1 depicts the several aspects of emerging computing technologies.
Healthcare is an ever-changing and dynamically evolving industry. It is critical to stay up to date on the latest trends and advances, particularly for professionals but also for the general public. There are numerous factors at work, including obstacles, changing lifestyles, extended lifespans, evolving health care systems, and digitization, all of which have led to a rapidly changing healthcare scene. Learning about upcoming trends can therefore help organizations analyze, identify discrepancies and gaps, and resolve those difficulties. We can also estimate what the main players in the healthcare industry will do this year. Healthcare technology trends have the potential to significantly transform the industry.
Related Work:
Various advancements in how we identify, prevent, and diagnose diseases have driven the healthcare business during the previous decade. This could not have occurred without the phenomenal growth of technology fueled by AI and the digital transformation of healthcare services in response to tougher global conditions and increasing demand for accessible, high-quality medical care. From anaesthetics and antibiotics to MRI scanners and radiotherapy, technical advancements in healthcare have produced profound transformations. While technologies (the latest pharmaceuticals and treatments, novel devices, social media supporting healthcare, and so on) will fuel innovation, human factors remain one of the consistent limitations of breakthroughs. No prediction will please everyone; rather, this article intends to investigate snippets of the future so that we can speculate a bit more clearly about the best way to get where we would like to go.
The intent of this article is to explain a tactical approach to transformation. The use of digital health technologies in healthcare organizations is significant because improving healthcare, the exchange of information, and the efficiency of systems, particularly in developing countries, is crucial to achieving the Millennium Development Goals. The purpose of the research is to highlight the variables of growing technology usage that influence the transformation of healthcare. This technological innovation effectively manages the records needed to document transactions and procedures. Healthcare providers have little choice but to adapt to the fast-moving technological change in the health industry (artificial intelligence, the Internet of Things, big data, machine learning, and cloud computing) if they want to stay competitive. It is crucial that those who must use the technology accept it and gain a deeper grasp of the elements that contribute to the systems' positive effects, in order to improve the deployment process; this is essential to society's health and the standard of care. These justifications explain why healthcare providers are currently adopting digital transformation in their units. The advantages of digital transformation in healthcare organizations are considerable: decreased travel expenses for employees, less time spent away from patients, and shorter information-system wait times.
In recent years, entire nations have been brought to a standstill by the coronavirus. Techniques using both traditional and cutting-edge technologies are required to combat COVID-19 and bring the situation under control. This article sets out to methodically examine current developments in smart healthcare technologies, such as big data analytics and artificial intelligence. By creating connected frameworks, these intelligence-based solutions support creative management, versatility, productivity, and efficacy. The research specifically addresses the Big Data and AI contributions that ought to be incorporated into intelligent healthcare systems. Additionally, it investigates how big data analytics and artificial intelligence are used to provide users with information and assist them in making plans. Finally, it proposes models for intelligent healthcare systems that utilize big data analytics and AI.
According to the American Heart Association's Heart Disease and Stroke Statistics 2021, coronary artery disease and stroke remain the leading causes of death globally and are spreading at an alarming rate. The coronavirus (COVID-19) epidemic has made this increase much worse and put more strain on an already overburdened healthcare system. Common healthcare issues can be addressed using smart and connected health; Figure 2 outlines the most significant technologies. By incorporating higher-value services, medical care can become more proactive, preventative, and individualized. Intelligent health care facilities were used effectively at several stages of the pandemic response, encompassing disease identification, virus detection, individual observation, tracking and regulation, and the allocation of resources [2]. It is critical to monitor the developments affecting technology in health care as we advance. Sophisticated hospitals and healthcare facilities rely significantly on legacy hardware and software, so it is important to consider how this equipment might be merged with contemporary hardware or ultimately replaced by more reliable systems. Enhancements in efficiency, earnings, reliability, and privacy should take precedence, without sacrificing predictability or connectivity.
Computing Technologies:
The trending technologies in the healthcare domain are discussed below and the digital transformation is depicted.
Artificial Intelligence:
Healthcare uses artificial intelligence to assess and improve the treatment of several diseases. AI is used in a number of medical settings, including diagnostic processes, drug manufacturing, and portable medical facilities. AI assists in gathering historical information for illness identification and prevention through digital health records. Beyond pandemic therapy and follow-up, artificial intelligence has a variety of other applications. AI has greatly accelerated the speed at which data are analyzed and judgements are made. Machine learning has a significant impact on the medical sector's capacity to develop new drugs and enhance the efficacy of testing methods. Artificial intelligence currently aids the examination of CT scans in order to detect pneumonia in COVID-19 patients. Microsoft developed Project InnerEye, an artificial intelligence radiology technology; the 3D contouring of a patient's scans now takes a matter of seconds instead of hours. Microsoft has developed another artificial intelligence project, dubbed Project Hanover, to gather biomedical papers from PubMed; it also helps select the best treatments for each individual and speeds up the cancer diagnosis process.
Read More: https://www.europeanhhm.com/articles/computational-technologies-for-healthcare
#healthcare#health#medical care#health and wellness#technologies#artifical intelligence#iotsolutions#computational neuroscience#hospitals
1 note
Text
Watch Rupert, an AI, Learn to Play Super Mario Live on TikTok
On TikTok, between the “get ready with me” videos, life hacks, and memes, a few robots are working on a challenge that many of us have faced at some point in our lives: beating Super Mario World. Over the past week, users have been live streaming an AI’s attempts to learn to play Mario, and for one robot in particular, it’s going great. Its name is Rupert, and it just beat level 2. Generating…
#Artificial general intelligence#artificial intelligence#chatgpt#Computational neuroscience#George#Gizmodo#Mario#Nintendo#OPENAI#Rupert#Seth Hendrickson#SethBling
1 note
Text
Interesting Papers for Week 3, 2025
Synaptic weight dynamics underlying memory consolidation: Implications for learning rules, circuit organization, and circuit function. Bhasin, B. J., Raymond, J. L., & Goldman, M. S. (2024). Proceedings of the National Academy of Sciences, 121(41), e2406010121.
Characterization of the temporal stability of ToM and pain functional brain networks carry distinct developmental signatures during naturalistic viewing. Bhavna, K., Ghosh, N., Banerjee, R., & Roy, D. (2024). Scientific Reports, 14, 22479.
Connectomic reconstruction predicts visual features used for navigation. Garner, D., Kind, E., Lai, J. Y. H., Nern, A., Zhao, A., Houghton, L., … Kim, S. S. (2024). Nature, 634(8032), 181–190.
Socialization causes long-lasting behavioral changes. Gil-Martí, B., Isidro-Mézcua, J., Poza-Rodriguez, A., Asti Tello, G. S., Treves, G., Turiégano, E., … Martin, F. A. (2024). Scientific Reports, 14, 22302.
Neural pathways and computations that achieve stable contrast processing tuned to natural scenes. Gür, B., Ramirez, L., Cornean, J., Thurn, F., Molina-Obando, S., Ramos-Traslosheros, G., & Silies, M. (2024). Nature Communications, 15, 8580.
Lack of optimistic bias during social evaluation learning reflects reduced positive self-beliefs in depression and social anxiety, but via distinct mechanisms. Hoffmann, J. A., Hobbs, C., Moutoussis, M., & Button, K. S. (2024). Scientific Reports, 14, 22471.
Causal involvement of dorsomedial prefrontal cortex in learning the predictability of observable actions. Kang, P., Moisa, M., Lindström, B., Soutschek, A., Ruff, C. C., & Tobler, P. N. (2024). Nature Communications, 15, 8305.
A transient high-dimensional geometry affords stable conjunctive subspaces for efficient action selection. Kikumoto, A., Bhandari, A., Shibata, K., & Badre, D. (2024). Nature Communications, 15, 8513.
Presaccadic Attention Enhances and Reshapes the Contrast Sensitivity Function Differentially around the Visual Field. Kwak, Y., Zhao, Y., Lu, Z.-L., Hanning, N. M., & Carrasco, M. (2024). eNeuro, 11(9), ENEURO.0243-24.2024.
Transformation of neural coding for vibrotactile stimuli along the ascending somatosensory pathway. Lee, K.-S., Loutit, A. J., de Thomas Wagner, D., Sanders, M., Prsa, M., & Huber, D. (2024). Neuron, 112(19), 3343-3353.e7.
Inhibitory plasticity supports replay generalization in the hippocampus. Liao, Z., Terada, S., Raikov, I. G., Hadjiabadi, D., Szoboszlay, M., Soltesz, I., & Losonczy, A. (2024). Nature Neuroscience, 27(10), 1987–1998.
Third-party punishment-like behavior in a rat model. Mikami, K., Kigami, Y., Doi, T., Choudhury, M. E., Nishikawa, Y., Takahashi, R., … Tanaka, J. (2024). Scientific Reports, 14, 22310.
The morphospace of the brain-cognition organisation. Pacella, V., Nozais, V., Talozzi, L., Abdallah, M., Wassermann, D., Forkel, S. J., & Thiebaut de Schotten, M. (2024). Nature Communications, 15, 8452.
A Drosophila computational brain model reveals sensorimotor processing. Shiu, P. K., Sterne, G. R., Spiller, N., Franconville, R., Sandoval, A., Zhou, J., … Scott, K. (2024). Nature, 634(8032), 210–219.
Decision-making shapes dynamic inter-areal communication within macaque ventral frontal cortex. Stoll, F. M., & Rudebeck, P. H. (2024). Current Biology, 34(19), 4526-4538.e5.
Intrinsic Motivation in Dynamical Control Systems. Tiomkin, S., Nemenman, I., Polani, D., & Tishby, N. (2024). PRX Life, 2(3), 033009.
Coding of self and environment by Pacinian neurons in freely moving animals. Turecek, J., & Ginty, D. D. (2024). Neuron, 112(19), 3267-3277.e6.
The role of training variability for model-based and model-free learning of an arbitrary visuomotor mapping. Velázquez-Vargas, C. A., Daw, N. D., & Taylor, J. A. (2024). PLOS Computational Biology, 20(9), e1012471.
Rejecting unfairness enhances the implicit sense of agency in the human brain. Wang, Y., & Zhou, J. (2024). Scientific Reports, 14, 22822.
Impaired motor-to-sensory transformation mediates auditory hallucinations. Yang, F., Zhu, H., Cao, X., Li, H., Fang, X., Yu, L., … Tian, X. (2024). PLOS Biology, 22(10), e3002836.
#science#scientific publications#neuroscience#research#brain science#cognitive science#neurobiology#cognition#psychophysics#neural computation#computational neuroscience#neural networks#neurons
29 notes
Text

12.09.24 — sitting in my pink armchair… occasionally sipping on a lukewarm diet coke while typing away & grinding out my computational neuro lab write up that’s due tonight.
orgo 1 is finally done for me, but my cell bio final is on thursday.
good luck to everyone with finals!!
#premed#studyblr#college#study motivation#studyspo#study blog#organic chemistry#orgo#undergrad student#biology#neuroscience#neuro#coding#computer science
25 notes
Text
why neuroscience is cool
space & the brain are like the two final frontiers
we know just enough to know we know nothing
there are radically new theories all. the. time. and even just in my research assistant work i've been able to meet with, talk to, and work with the people making them
it's such a philosophical science
potential to do a lot of good in fighting neurological diseases
things like BCI (brain computer interface) and OI (organoid intelligence) are soooooo new and anyone's game - motivation to study hard and be successful so i can take back my field from elon musk
machine learning is going to rapidly increase neuroscience progress i promise you. we get so caught up in AI stealing jobs but yes please steal my job of manually analyzing fMRI scans please i would much prefer to work on the science PLUS computational simulations will soon >>> animal testing to make all drug testing safer and more ethical !! we love ethical AI <3
collab with...everyone under the sun - psychologists, philosophers, ethicists, physicists, molecular biologists, chemists, drug development, machine learning, traditional computing, business, history, education, literally try to name a field we don't work with
it's the brain eeeeee
#my motivation to study so i can be a cool neuroscientist#science#women in stem#academia#stem#stemblr#studyblr#neuroscience#stem romanticism#brain#psychology#machine learning#AI#brain computer interface#organoid intelligence#motivation#positivity#science positivity#cogsci#cognitive science
2K notes
Text
Zoomposium with Professor Dr. Petra Ritter: "The simulation of brains"

In another installment in our "Zoomposium Series" on the topic of "Brain Research", my colleague Axel Stöcker of the "Blog der großen Fragen" and I had the great honor and pleasure of conducting an interview with the very well-known and renowned German medical doctor and neuroscientist Professor Dr. Petra Ritter.
In this context, Ms. Ritter became a co-founder and leader of the co-design project "The Virtual Brain", which is a component of the European Open Science Cloud (EOSC) and is "a neuroinformatics platform for simulating whole brain networks using biologically realistic connectivity".
She leads the development of a virtual research environment as a collaborative research platform for sensitive health data, heads the German National Research Data Infrastructure initiative for neuroscience (NFDI-Neuroscience), and is involved in the development of the EBRAINS "Health Data Cloud".
Petra Ritter has been Johanna Quandt Professor and Head of the Section for Brain Simulation at the Department of Neurology with Experimental Neurology at Charité - Universitätsmedizin Berlin since 2017.
There, Professor Ritter and her team are involved in the "Simulation of Brains".
More at: https://philosophies.de/index.php/2023/09/17/die-simulation-von-gehirnen/
or: https://youtu.be/XrTWh0n8yDY
#brain#simulation#brain simulation#ebrain#virtual brain#neuroscience#computational neuroscience#neurology
2 notes
Text
Our brains are remarkably energy efficient. Using just 20 watts of power, the human brain is capable of processing the equivalent of an exaflop — or a billion-billion mathematical operations per second. Now, researchers in Australia are building what will be the world's first supercomputer that can simulate networks at this scale. The supercomputer, known as DeepSouth, is being developed by Western Sydney University. When it goes online next year, it will be capable of 228 trillion synaptic operations per second, which rivals the estimated rate of operations in the human brain.
82 notes
Text
simulation of schizophrenia
so i built a simulation of schizophrenia using rust and python
basically you have two groups of simulated neurons, one inhibitory and one excitatory. the excitatory group is connected so they will settle on one specific pattern. the inhibitory group is connected to the excitatory group semi-randomly. the excitatory group releases glutamate while the inhibitory group releases gaba. glutamate will cause the neurons to increase in voltage (or depolarize), gaba will cause the neurons to decrease in voltage (hyperpolarize).
here's a quick visualization of the results in manim
the y axis represents the average firing rate of the excitatory group over time; decay refers to how quickly glutamate is cleared from the neuronal synapse. there are two versions of the simulation: one where the excitatory group is presented with a cue, and one where it is not. when the cue is present, the excitatory group remembers the pattern and settles on it, represented by an increased firing rate. however, not every trial leads to memory recall: if the glutamate clearance happens too quickly, the memory is not maintained. on the other hand, when no cue is presented and glutamate clearance is too low, spontaneous activity overcomes inhibition and activity persists despite there being no input, i.e. a hallucination.
the simulation demonstrates the failure to maintain the state of the network, either failing to maintain the presence of a cue or failing to maintain the absence of a cue. this is thought to be one possible explanation of certain schizophrenic symptoms from a computational neuroscience perspective
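a rough sketch of the kind of model described above (to be clear: this is not the original rust/python code — it collapses each group to a single population rate, and the weights, saturation, and parameter values are all illustrative assumptions):

```python
# Minimal excitatory/inhibitory rate model. Recurrent excitation is carried
# by a glutamate trace whose clearance rate ("decay") is the key parameter;
# the inhibitory population (GABA) tracks excitatory activity.

def simulate(decay, cue, steps=2000, dt=0.01):
    """Return the excitatory firing rate at the end of one trial."""
    r_e = 0.0   # excitatory population rate
    glu = 0.0   # glutamate trace at the recurrent synapses
    for step in range(steps):
        # brief cue drives the excitatory group at the start of the trial
        drive = 1.5 if (cue and step < 200) else 0.0
        r_i = 0.8 * r_e                        # inhibition tracks excitation
        inp = 2.5 * glu - r_i + drive + 0.4    # recurrence - GABA + background
        target = max(0.0, min(inp, 5.0))       # saturating rate nonlinearity
        r_e += dt * (-r_e + target)            # rate relaxes toward its input
        glu += dt * (r_e - decay * glu)        # release minus clearance
    return r_e

recall = simulate(decay=1.0, cue=True)       # pattern held after the cue ends
forgotten = simulate(decay=5.0, cue=True)    # clearance too fast: memory lost
phantom = simulate(decay=0.5, cue=False)     # no cue, activity persists anyway
```

same qualitative story as the post: moderate decay with a cue gives persistent recall, fast decay lets the memory die out, and very slow decay lets background activity ignite the network with no cue at all.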
14 notes
Text
If you ask a doctor prescribing you psychotropic medications for a mental illness to explain their mechanism of action, they won’t be able to give you a complete explanation, but they work by exciting or inhibiting certain neurotransmitter receptors in your brain. Having certain neurotransmitter receptors inhibited or excited 24/7 by medicated or non-medicated mental illness causes you to lose your ability to reason enough to limit your choices of reactions to the environmental changes around you, and to not be able to use your free will. You then make decisions based on the most attention-grabbing advertisements and their relationship with your desires. Since your brain is an antenna and there are wireless signals all around you, you can eventually completely lose your ability to exercise free will and act or react based on certain wireless data packets in the signals surrounding you, depending on how they stimulate you. Since mental illness is the result of bias, this causes more and more reactive and biased behavior. Processed and synthetic food ingredients also contain chemical compounds that dull your neurotransmitter receptors and can cause mental illness or make it worse. Tell me the brands of products you purchase the most and I’ll tell you which of your behaviors are being controlled by which corporations. The Chinese subvert the will of the people on the West Coast, including Silicon Valley, through corporations, and the Russians subvert the will of the people in parts of the East Coast and the South through corporations there. They use money laundering through investment vehicles and cyber attacks with bot farms to manipulate the behaviors of corporations in order to subvert the will of the people, combined with the naivety of our politicians and CEOs or corporate boards about how technology works.
#matrix#mental illness#artificial intelligence#psychology#psychiatry#environment#neurotransmitter#neurology#neuroscience#computational psychology#computational neurology#health#food#synthetic
57 notes
Text
Cruise Robotaxi Crashes Into Fire Truck in San Francisco
Headlines This Week In a big win for human artists, a Washington D.C. judge has ruled that AI-generated art lacks copyright protections. Meta has released SeamlessM4T, an automated speech and text translator that works in dozens of languages. New research shows that content farms are using AI to rip off and repackage news articles from legacy media sites. We have an interview with one of the…

#Applications of artificial intelligence#artificial intelligence#Barry Brown#chatgpt#Computational neuroscience#Cruise#Electric vehicles#Emerging technologies#Gizmodo#iPhone#Jack Brewster#Robotaxi#Robotics#Self-driving car#Stephen King#Vehicular automation#waymo
0 notes