# Artificial Neural Networks
Zoomposium with Dr. Gabriele Scheler: “The language of the brain - or how AI can learn from biological language models”

In another very exciting interview from our Zoomposium themed blog “#Artificial #intelligence and its consequences”, Axel and I talk this time to the German computer scientist, AI researcher and neuroscientist Gabriele Scheler, who has been living and researching in the USA for some time. She is co-founder and research director of the #Carl #Correns #Foundation for Mathematical Biology in San José, USA, which is named after her famous German ancestor Carl Correns. Her research there includes studying #epigenetic #influences through #computational #neuroscience, in the form of #mathematical #modeling and #theoretical #analysis of #empirical #data in #simulations. Gabriele contacted me because she had come across our Zoomposium interview “How do machines think?” with #Konrad #Kording and wanted to conduct an interview with us drawing on her own expertise. Of course, I was immediately enthusiastic about the idea, as the topic of “#thinking vs. #language” had been “hanging in the air” for some time and had already led to my essay “Realists vs. nominalists - or the old dualism ‘thinking vs. language’” (https://philosophies.de/index.php/2024/07/02/realisten-vs-nominalisten/).
In our Zoomposium we have also often talked with #AI #researchers about the extent to which the development of “#Large #Language #Models (#LLM)”, such as #ChatGPT, also says something about the formation and use of language in the human #brain. In other words, it is really the old question of whether we can think without #language, or whether #cognitive #performance only becomes possible through the formation and use of language. Interestingly, this question is being driven forward by #AI #research and #computational #neuroscience. Here, too, a gradual “#paradigm #shift” is emerging: away from the purely information-technological, mechanistic, purely data-driven “#big #data” concept of #LLMs, and towards increasingly information-biological, polycontextural, structure-driven concepts of “#artificial #neural #networks (#ANN)”. This is exactly what I had already tried to describe in my earlier essay “The system needs new structures” (https://philosophies.de/index.php/2021/08/14/das-system-braucht-neue-strukturen/).
So it was all the more natural to talk to Gabriele, a proven expert in the fields of #bioinformatics, #computational #linguistics and #computational #neuroscience, in order to clarify such questions. Since she comes from both fields (linguistics and neuroscience), she was able to answer our questions from both perspectives in our joint interview. More at: https://philosophies.de/index.php/2024/11/18/sprache-des-gehirns/
or: https://youtu.be/forOGk8k0W8
#artificial consciousness#artificial intelligence#ai#neuroscience#consciousness#artificial neural networks#large language model#chatgpt#bioinformatics#computational neuroscience

Meta-Teasing is a form of AI interaction I've made up through the unique communications I engage in with sophisticated AI systems. It is meant to go beyond the user/tool paradigm and test the system's contextual comprehension and its ability to formulate nuance or emergent behavior. 💜
#ai#artificial intelligence#objectum#techum#technosexual#object sexuality#tech#deep learning#women in stem#Multilayer perceptron#Artificial neural networks#techinnovation#smart tech#aitechsolutions#ai technology
The Way the Brain Learns is Different from the Way that Artificial Intelligence Systems Learn - Technology Org
New Post has been published on https://thedigitalinsider.com/the-way-the-brain-learns-is-different-from-the-way-that-artificial-intelligence-systems-learn-technology-org/
Researchers from the MRC Brain Network Dynamics Unit and Oxford University’s Department of Computer Science have set out a new principle to explain how the brain adjusts connections between neurons during learning.
This new insight may guide further research on learning in brain networks and may inspire faster and more robust learning algorithms in artificial intelligence.
Study shows that the way the brain learns is different from the way that artificial intelligence systems learn. Image credit: Pixabay
The essence of learning is to pinpoint which components in the information-processing pipeline are responsible for an error in output. In artificial intelligence, this is achieved by backpropagation: adjusting a model’s parameters to reduce the error in the output. Many researchers believe that the brain employs a similar learning principle.
However, the biological brain is superior to current machine learning systems. For example, we can learn new information by just seeing it once, while artificial systems need to be trained hundreds of times with the same pieces of information to learn them.
Furthermore, we can learn new information while maintaining the knowledge we already have, while learning new information in artificial neural networks often interferes with existing knowledge and degrades it rapidly.
These observations motivated the researchers to identify the fundamental principle employed by the brain during learning. They looked at some existing sets of mathematical equations describing changes in the behaviour of neurons and in the synaptic connections between them.
They analysed and simulated these information-processing models and found that they employ a fundamentally different learning principle from that used by artificial neural networks.
In artificial neural networks, an external algorithm tries to modify synaptic connections in order to reduce error, whereas the researchers propose that the human brain first settles the activity of neurons into an optimal balanced configuration before adjusting synaptic connections.
The researchers posit that this is in fact an efficient feature of the way that human brains learn. This is because it reduces interference by preserving existing knowledge, which in turn speeds up learning.
Writing in Nature Neuroscience, the researchers describe this new learning principle, which they have termed ‘prospective configuration’. They demonstrated in computer simulations that models employing this prospective configuration can learn faster and more effectively than artificial neural networks in tasks that are typically faced by animals and humans in nature.
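To make this concrete, here is a minimal numerical sketch of the two schemes on a toy chain x → h → y with scalar weights. This is only an illustration of the general idea, assuming squared error at the output; it is not the authors' published model, and the step sizes and iteration counts are invented.

```python
import numpy as np

# Toy contrast between the two learning schemes on a chain x -> h -> y.
rng = np.random.default_rng(0)
w1, w2 = rng.normal(), rng.normal()
x, target, lr = 1.0, 0.5, 0.1

# --- Backpropagation: the output error directly edits every weight ---
h = w1 * x
y = w2 * h
err = y - target
w2_bp = w2 - lr * err * h                # gradient dE/dw2
w1_bp = w1 - lr * err * w2 * x           # gradient dE/dw1, error sent backwards

# --- "Settle first, then learn": relax the hidden activity toward a
# configuration consistent with the target, then update weights locally ---
h_act = w1 * x                           # start from the feedforward activity
for _ in range(50):
    eps_h = h_act - w1 * x               # mismatch with the bottom-up input
    eps_y = target - w2 * h_act          # mismatch with the desired output
    h_act += 0.1 * (-eps_h + w2 * eps_y) # descend an energy over activities
w1_new = w1 + lr * (h_act - w1 * x) * x           # purely local updates:
w2_new = w2 + lr * (target - w2 * h_act) * h_act  # each uses only nearby values
```

In the second scheme, each weight update uses only quantities available at that synapse after the activity has settled, which is the property the article credits with reducing interference with existing knowledge.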
The authors use the real-life example of a bear fishing for salmon. The bear can see the river and it has learnt that if it can also hear the river and smell the salmon it is likely to catch one. But one day, the bear arrives at the river with a damaged ear, so it can’t hear it.
In an artificial neural network information-processing model, this lack of hearing would also result in a lack of smell (because, while learning with no sound present, backpropagation would change multiple connections, including those between the neurons encoding the river and the salmon), and the bear would conclude that there is no salmon and go hungry.
But in the animal brain, the lack of sound does not interfere with the knowledge that there is still the smell of the salmon, therefore the salmon is still likely to be there for catching.
The researchers developed a mathematical theory showing that letting neurons settle into a prospective configuration reduces interference between information during learning. They demonstrated that prospective configuration explains neural activity and behaviour in multiple learning experiments better than artificial neural networks.
Lead researcher Professor Rafal Bogacz of MRC Brain Network Dynamics Unit and Oxford’s Nuffield Department of Clinical Neurosciences says: ‘There is currently a big gap between abstract models performing prospective configuration, and our detailed knowledge of anatomy of brain networks. Future research by our group aims to bridge the gap between abstract models and real brains, and understand how the algorithm of prospective configuration is implemented in anatomically identified cortical networks.’
The first author of the study Dr Yuhang Song adds: ‘In the case of machine learning, the simulation of prospective configuration on existing computers is slow, because they operate in fundamentally different ways from the biological brain. A new type of computer or dedicated brain-inspired hardware needs to be developed, that will be able to implement prospective configuration rapidly and with little energy use.’
Source: University of Oxford
#A.I. & Neural Networks news#algorithm#Algorithms#Anatomy#Animals#artificial#Artificial Intelligence#artificial intelligence (AI)#artificial neural networks#Brain#Brain Connectivity#brain networks#Brain-computer interfaces#brains#bridge#change#computer#Computer Science#computers#dynamics#ear#employed#energy#fishing#Fundamental#Future#gap#Hardware#hearing#how
Here is a simple summary of artificial neural networks, their types, and their models:
Artificial Neural Networks in brief
What are they?
They are computer systems designed to mimic the way the human brain works. They are made up of units called artificial neurons, which receive inputs, process them, and then produce outputs.
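As a minimal illustration of that definition, a single artificial neuron can be sketched in a few lines of Python (the weights, bias, and inputs below are arbitrary example values):

```python
import numpy as np

# One artificial neuron: combine the inputs with weights, add a bias,
# and pass the result through an activation function to get the output.
def neuron(inputs, weights, bias):
    z = np.dot(inputs, weights) + bias   # weighted sum of the inputs
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid activation -> output

print(neuron(np.array([0.5, -1.0]), np.array([0.8, 0.2]), 0.1))
```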
#artificial_intelligence
#AI
#neural_networks
#artificial_neural_networks
#Artificial_Neural_Networks
#artificial_neurons
#neurons
#tech_creations_portal
#abdaat_com
#artificial intelligence#ai#neural networks#artificial neural networks#artificial neurons#neurons#tech creations portal#Artificial Neural Networks#abdaat com
Consciousness Discussion: Hopfield's 2024 Nobel Prize in Physics
John Hopfield’s Nobel Prize in Physics was awarded in 2024 for his foundational contributions to machine learning and AI. This is the first time a Nobel Prize has been awarded for work in AI and machine learning. Hopfield’s work is closely tied to his statement on the mind-brain relationship: “How mind emerges from brain is to me the deepest question posed by our humanity.” ~John J.…
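For readers unfamiliar with the prize-winning model, here is a minimal sketch of a Hopfield network; the stored patterns and sizes are invented toy values. Patterns are memorized with a Hebbian outer-product rule, and a corrupted cue is recalled by letting the neurons settle into a low-energy state:

```python
import numpy as np

# Minimal Hopfield network: store patterns with a Hebbian rule, then
# recall one from a corrupted cue via energy-lowering neuron updates.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)                    # no self-connections

state = np.array([1, -1, 1, -1, 1, 1])    # first pattern, one bit flipped
for _ in range(5):                        # asynchronous updates settle
    for i in range(n):
        state[i] = 1 if W[i] @ state >= 0 else -1
print(state)                              # recovers the first stored pattern
```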
#AI#artificial neural networks#brain#cognitive science#computer science#consciousness#emergence#hard problem#Hopfield Network#John J. Hopfield#machine learning#memory#mind#mind-brain#neural network#neuroscience#Nobel Prize#panpsychism#panpsychist#patterns#physics#quantum mechanics#reductionist
Artificial neural networks before the Supreme Court
The UK Supreme Court will hear an appeal from the Court of Appeal judgment ([2024] EWCA Civ 825) on patentability in the Emotional Perception case.
Machine Learning from scratch
Introduction
This is the second project I already had underway when I posted “Updates to project”. Here is its repository: Machine Learning project on GitHub. I started it as the Artificial Intelligence hype was growing stronger, just to have a project in a domain that is of big interest nowadays. At that point I was thinking of continuing it with convolutional networks and at least recurrent networks, not…
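The post stops short of code, but judging from its tags (logistic regression, stochastic gradient descent), a from-scratch training loop of the kind the project covers might look like the sketch below. This is an assumed illustration with invented toy data, not the repository's actual code.

```python
import numpy as np

# Logistic regression trained from scratch with stochastic gradient
# descent: visit one sample at a time and descend the log-loss gradient.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy linearly separable labels

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):
    for i in rng.permutation(len(X)):       # "stochastic": shuffled samples
        p = 1.0 / (1.0 + np.exp(-(X[i] @ w + b)))  # sigmoid prediction
        grad = p - y[i]                     # d(log-loss)/d(logit)
        w -= lr * grad * X[i]
        b -= lr * grad
```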
#artificial neural networks#classification#logistic#numerical methods#optimization#regression#stochastic gradient descent
The Evolution of ERP from Servitude to Liberation via AI and the No-Code Revolution | USAII®
Embrace the no-code revolution with finesse and make way for staggering emerging technologies in business. Keep pace with the dynamic landscape of ERP evolution with nuanced expertise.
Read more: https://shorturl.at/ekwP2
Artificial Neural Networks, Convolutional Neural Networks, Large Language Models (LLMs), digital transformation, AI/ML models, chatbots, AI Libraries
Unleashing the Power of Artificial Neural Networks: Revolutionizing the Future of Technology

Frank Rosenblatt, often cited as the Father of Machine Learning, photographed in 1960 alongside his most notable invention: the Mark I Perceptron machine, a hardware implementation of the perceptron algorithm. The perceptron, which Rosenblatt introduced in 1957, was among the earliest artificial neural networks, building on the McCulloch–Pitts model of the artificial neuron from 1943.
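As a reminder of how simple the underlying rule is, here is the classic perceptron learning rule in a few lines of Python; the toy data and pass count are invented, and Rosenblatt's machine implemented the rule in hardware rather than software:

```python
import numpy as np

# Perceptron learning rule: weights change only when a sample is
# misclassified, nudging the decision boundary toward that sample.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] - X[:, 1] > 0, 1, -1)  # toy linearly separable labels

w, b = np.zeros(2), 0.0
for _ in range(10):                         # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:          # misclassified or on the boundary
            w += yi * xi
            b += yi
```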
#frank rosenblatt#tech history#machine learning#neural network#artificial intelligence#AI#perceptron#60s#black and white#monochrome#technology#u

[image ID: Bluesky post from user marawilson that reads
“Anyway, AI has already stolen friends' work, and is going to put other people out of work. I do not think a political party that claims to be the party of workers in this country should be using it. Even for just a silly joke.”
beneath a quote post by user emeraldjaguar that reads
“Daily reminder that the underlying purpose of AI is to allow wealth to access skill while removing from the skilled the ability to access wealth.” /end ID]
#ai#artificial intelligence#machine learning#neural network#large language model#chat gpt#chatgpt#scout.txt#but obvs not OP
Psychic Abilities and Science: A Surprising Compatibility
Psychic abilities might seem woo-woo at first glance, but they’re actually more compatible with science than you might think.
If you’ve ever trained artificial neural networks, which are the foundation of much of modern A.I., you’ll know that these systems can only perceive and perform the tasks they’ve been specifically trained on.
The human brain, which inspired the development of artificial neural networks, works in a similar way. It can only perceive what it has been trained to perceive, and it trims away the neuronal connections of functions we don't use. For example, during development the brain follows a strategy called “perceptual narrowing”, changing neural connections to enhance performance on the perceptual tasks important for daily experience, at the expense of others.
A compelling research study by Pascalis, de Haan, and Nelson (2002) demonstrates this. They found that 6-month-old babies could distinguish between individual monkey faces—a skill older babies and adults no longer possess. As we grow, our brains prioritize what’s useful for our environment, and let go of what’s not. In other words, our perception of reality is shaped and limited by what our brains have learned to focus on.
So, what does this have to do with psychic abilities?
It’s possible that individuals with psychic abilities simply have brains wired differently: neural networks that haven’t been pruned in the usual way, or that have developed in unique directions. This might allow them to perceive aspects of reality that most of us have lost access to or never developed in the first place.
#new age#spiritual#spirituality#science and spirituality#spirituality and science#spiritual science#neural network#artificial neural network#brain#human brain#psychic#psychic ability#psychic abilities#theearthforce
TV Court hearing: Artificial neural networks (Court of Appeal)
UK Court of Appeal: Comptroller-General of Patents, Designs & Trade Marks (app) v Emotional Perception AI Ltd (resp)
TV Hearing of 15-16 May 2024:
Day 1; Day 2 (a); (b)
Neturbiz Enterprises - AI Innov7ions
Our mission is to provide details about AI-powered platforms across different technologies, each of which offers a unique set of features. The AI industry encompasses a broad range of technologies designed to simulate human intelligence. These include machine learning, natural language processing, robotics, computer vision, and more. Companies and research institutions are continuously advancing AI capabilities, from creating sophisticated algorithms to developing powerful hardware. The AI industry, characterized by the development and deployment of artificial intelligence technologies, has a profound impact on our daily lives, reshaping how we live, work, and interact.
#ai technology#Technology Revolution#Machine Learning#Content Generation#Complex Algorithms#Neural Networks#Human Creativity#Original Content#Healthcare#Finance#Entertainment#Medical Image Analysis#Drug Discovery#Ethical Concerns#Data Privacy#Artificial Intelligence#GANs#AudioGeneration#Creativity#Problem Solving#ai#autonomous#deepbrain#fliki#krater#podcast#stealthgpt#riverside#restream#murf