rhidvvan · 2 years
The Self-Assembling Brain: A Quest for Improved AI
What brought my attention to this is my interest in the history of artificial intelligence: trying to understand how we could put an intelligent brain together, or how an intelligent brain is formed. This really goes back to what you can call the neuron theory.
This theory suggests that neurons are individual connected cells: nervous tissue is composed of individual cells, each a genetic, anatomic, functional and trophic unit. I will base my discussion on information gathered from this podcast as well as other related sources.
What led to the development of the theory:
Back in the early 20th century, pioneers like Camillo Golgi and Santiago Ramón y Cajal, both known for their work on the central nervous system and jointly awarded the Nobel Prize in 1906, held differing views on the state in which neurons exist. The question was whether individual neurons are physiological units that have to find each other to make proper connections, or whether the neural network that makes up the brain comes prefabricated. This, in general, is where the idea of the neuron theory comes from; in a nutshell, it holds that individual neurons decide which other neurons to make contact with.
How do neurons find each other:
Based on recent estimates, the full human brain contains about 86 billion neurons. How they find each other to wire up a proper, intelligent neural network remains an unanswered question: we are still trying to understand how the genetic code and the development of individual neurons, which grow cables (what are called axons and dendrites) that wire up to make synaptic connections, produce what comes out of this process.
Information Problem:
How do individual neurons connect and produce intelligence, and how do they get the information that makes the brain intelligent? The information problem, in short, is the question of how much information you can get out of the genome to wire up a brain and make something that has intelligent properties, and also how much information you can get into a network, once it is wired up, through learning. Notably, these neural networks are already smart before they get to learn anything.
This is historical reasoning about how information comes about: to know what makes a neural network intelligent, we must first determine whether it is connectivity or learning. Information comes either from the rules and genes, or from the network acquiring more information from the environment as it is wired up; in most cases both play a role. Genetically encoded connectivity of the brain can be called nature, and the learning that comes from the environment nurture.
A good example of a genetically encoded neural network is the monarch butterfly's life cycle: they fly and migrate on a journey of up to 3,000 miles to overwintering trees in Mexico and later die, and this cycle repeats itself from generation to generation without any learning process. It has to be that this information exists naturally in their genes.
Most monarch butterflies live for about 5 weeks, except for the generation born at the end of summer. These butterflies live up to 8 months as they fly to their wintering grounds in Mexico and coastal California, where they stay until the following spring.
Another equally good example, with regard to learning, is the waggle dance performed by bees to convey information to other bees about the location of food; the observing bees learn from the dance and decode what it encodes.
About the Researcher:
Peter Robin Hiesinger is professor of neurobiology at Freie Universität Berlin, where he teaches undergraduate and graduate students and leads a research laboratory and a multi-lab research consortium on neural networks. Robin did his undergraduate and graduate studies in genetics, computational biology and philosophy at the University of Freiburg in Germany. He then did his postdoc at Baylor College of Medicine in Houston and was Assistant Professor and Associate Professor with tenure for more than 8 years at UT Southwestern Medical Center in Dallas. Some of his notable lab works concern synapse specification through relative partner availability, the question of why all neuronal Rab GTPases are viable, and circuit robustness.
Artificial Intelligence: A Guide for Thinking Humans
This write-up is centered around the book "Artificial Intelligence: A Guide for Thinking Humans" by the American scientist Professor Melanie Mitchell, the Davis Professor of Complexity at the Santa Fe Institute in New Mexico. Her major work has been in the areas of analogical reasoning, complex systems, genetic algorithms, cellular automata and visual recognition.
She received her PhD in 1990 from the University of Michigan under Douglas Hofstadter and John Holland, for which she developed the Copycat cognitive architecture. She is the author of "Analogy-Making as Perception", essentially a book about Copycat. She has also critiqued Stephen Wolfram's "A New Kind of Science" and showed that genetic algorithms could find better solutions to the majority problem for one-dimensional cellular automata. She is the author of "An Introduction to Genetic Algorithms", a widely known introductory book published by MIT Press in 1996, and of "Complexity: A Guided Tour" (Oxford University Press, 2009), which won the 2010 Phi Beta Kappa Science Book Award, as well as "Artificial Intelligence: A Guide for Thinking Humans". https://en.wikipedia.org/wiki/Melanie_Mitchell
History of Artificial Intelligence:
Going back in time to the 1950s, when the perceptron was developed (1958 being the year it was made public, press release and all), it was one of many attempts to automate intelligence by taking inspiration from the brain. It was developed by a psychologist named Frank Rosenblatt, who tried to simulate, in a very idealised way, how neurons work and how a simple network of neurons might go about recognising some perceptual input. Although the perceptron initially seemed promising, it was soon proved that perceptrons could not be trained to recognise many classes of patterns. This caused the field of neural network research to stagnate for many years, before it was recognised that a feedforward neural network with two or more layers (also called a multilayer perceptron) had greater processing power than a perceptron with one layer (also called a single-layer perceptron).
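To make the idea concrete, here is a minimal sketch of a Rosenblatt-style single-layer perceptron: a weighted sum with a threshold, trained with the classic perceptron learning rule. This is an illustrative simplification, not Rosenblatt's exact 1958 hardware formulation; the function names and the AND example are my own.

```python
# A minimal single-layer perceptron: weighted sum + threshold,
# trained with the perceptron learning rule (w += lr * error * x).

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with targets 0 or 1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# AND is linearly separable, so the perceptron learns it perfectly...
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
print([predict(w, b, x) for x, _ in and_data])  # [0, 0, 0, 1]
# ...but XOR is not linearly separable, which is exactly the class of
# patterns a single-layer perceptron provably cannot represent -- the
# limitation that stalled the field until multilayer networks.
```

The stagnation mentioned above is precisely about that last comment: no setting of the two weights and the bias can draw a single line separating XOR's classes, while adding a hidden layer makes it trivially representable.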
The famous conference among AI pioneers held at Dartmouth College in 1956 gathered eleven attendees, namely Marvin Minsky, Julian Bigelow, D. M. MacKay, Ray Solomonoff, John Holland, John McCarthy, Claude Shannon, Allen Newell, Herbert Simon, Oliver Selfridge and Nathaniel Rochester. Their goal was to clarify and develop the idea of a thinking machine, and to make progress in areas like computer vision, natural language understanding, solving mathematical problems, driving cars, and most other things humans do.
Definition of Artificial intelligence:
AI is a branch of computer science that involves many computational methods: ways of getting machines to do things that we consider to be intelligent. A fixed definition is elusive, however, because the term intelligence itself keeps changing over time as people evolve. That being said, can an AI system behave like a human in all circumstances, in a general-purpose sense rather than a specialised one? Can such a system use common sense?
In specific areas such as speech recognition, AI has been successful, but on closer inspection such systems only perform that specific task; they can't do anything else, and they don't in any sense understand the text they transcribe.
From this point of view, in terms of general AI, I would say we are still far from achieving the goal of machines thinking like humans.
Branches/Evolution of AI:
The perceptron was an early effort at machine learning, but as things evolved people came up with the idea that machines should not just focus on learning; instead, humans (experts in various fields) would try to program in the knowledge and rules that programs would use to operate. This brought about:
Expert systems, which gained popularity in the 1970s and 1980s: programmers would interview experts, try to extract their knowledge and rules, and then encode these rules into computer programs. The approach wasn't as successful as imagined, because most of the knowledge experts rely on proved hard to extract; much of it is not used consciously.
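The flavour of those systems can be sketched as a toy forward-chaining rule engine: elicited expertise is written down as if-then rules over facts, and the engine fires rules until nothing new can be concluded. The medical-sounding rules below are hypothetical illustrations, not taken from any real expert system.

```python
# A toy forward-chaining rule engine in the spirit of 1970s-80s expert
# systems. Each rule is (set of required facts, conclusion to add).
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]

def infer(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied,
    until no rule adds a new fact (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer({"has_fever", "has_rash"}, rules)
print(sorted(derived))
# ['has_fever', 'has_rash', 'recommend_isolation', 'suspect_measles']
```

The engine itself is trivial; the hard part, as the paragraph above notes, was filling the rule list, because experts could rarely articulate the tacit knowledge they actually used.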
Then, from the 1980s to the 1990s, came the approach of statistical learning, which tries to unite machine learning with inference from data. It turned out to be more successful than the expert-system approach and is still in use today.
Deep learning is one of the most effective tools out there; you can think of the perceptron as what these neural networks originally evolved from. However, such networks perform very narrowly defined tasks, and when there are minor changes in the data the system produces false results, leading us back to AI lacking intelligence in terms of generalisation.
Solutions for AI in terms of understanding/intelligence:
Analogy can help AI systems develop an understanding of what they are doing and how they should be doing it. The ability to see abstract similarities is fundamental to being intelligent and to understanding the real world.
There is also the idea of speculation: we can predict what is likely to happen, either consciously or unconsciously, because of similar experiences, and we learn from what has or hasn't happened to us.
AI in terms of different approaches:
A more recent breakthrough in AI is in the field of protein folding and drug design, where an AI system was used to look at a protein's sequence of amino acids and predict how it would fold up in three dimensions. Looking at AI from this point of view, one can see that if such a system had been limited to a purely human-driven approach, rather than trying things differently, achievements like this would have been impossible. So there are different approaches to AI, each correct for different applications of AI.
History of Information
I chose to discuss this topic because of its relevance to various aspects of life. Everybody in the world has and uses information, regardless of what it is about or how true or false it may be; information is therefore in itself a way of life.
I have listened to several of the topics from the podcast, but the one about information captured my interest, particularly the declaration by the media scholar Marshall McLuhan that "we live in the age of information". A contrary view, raised as far back as the 18th century, holds that "every age is an age of information".
Going back to McLuhan: at the time he was talking about electricity, and specifically about television and how it would transform society, using the phrase "recreate the global village". That phrase spread the notion of information widely, and it was taken up by professionals.
Information was seen as a distinguishing factor of our age, but the argument is whether this asks us to separate ourselves from history, making the past irrelevant. This bears on how we understand information: whether we need to look back or not.
Some people are of the view that we don't need a history of information, that information is about looking to the future; yet that is itself a historical claim.
There is also the view that information has always been there: information preceded society and has existed since the Big Bang.
The relationship between information and history is an intriguing one. We throw the word information around without asking ourselves what it is there for. Information may not seem a remarkable concept, yet we constantly perform activities and practices that can be described in terms of information, and tracing them across time helps us see how information has shaped society.
As for how we control information, and how information in turn takes control over us: we can be fairly sure that information is starting to shape what we want, what we might buy, where we might go to buy it, our likes and dislikes.
If we have information about people, step by step we can start manipulating them to our advantage; as history shows, people who hold information use it to influence the decisions of those they are informed about.
Gathering information without using it to influence others is a very difficult challenge, and one needs to be aware of this when making decisions.
An interesting and challenging aspect of information is that everybody uses it; information binds us together even though we have different notions of it.
Our present-day understanding of information mostly comes from technology: we have come to centre the idea of information on computers, even though the original idea behind computers was calculation.
This discussion has given me a broader view of how I see information. It has given me some thoughts on the history of information, on how information controls us, and on the idea that not all information is true: when looking at information we also need to consider the motive behind it. Information in this age is a commodity.
This write-up is based on the discussion between Prof. Waseem Akhtar and Prof. Paul Duguid, with focus on the book "Information: A Historical Companion". https://www.bridgingthegaps.ie/tag/information/
The researcher, Paul Duguid, is a professor in the School of Information at the University of California, Berkeley, and one of the editors of this book.
His current research concerns information and authenticity from a historical perspective: how we come to trust the information we encounter, how we lend credibility to the information we share, and what role brands and other forms of certification play.
His other research includes the concept of information, the social life of information, the history of information, networks and knowledge, human-computer interaction, industry standards, the information society, and management learning.
His educational background is as follows: BA, English & Philosophy, Bristol University, UK, 1976; MA, English Literature, Washington University, St. Louis, 1980. https://www.ischool.berkeley.edu/people/paul-duguid
His career is as follows: Granville Publishing, London, England, senior editor, 1981-87; Institute for Research on Learning, Palo Alto, CA, research scientist, 1987-90; University of California, Berkeley, research specialist in social and cultural studies in education, 1992-2004, and adjunct professor in the School of Information, 2005-present; Xerox Corporation, consultant, 1988-2001; Copenhagen Business School, visiting professor in organizational and industrial sociology, 2002-03; Santa Clara University, visiting fellow at the Center for Science, Technology, and Society, 2005-06; Queen Mary, University of London, professorial research fellow, 2005-present. https://www.encyclopedia.com/arts/educational-magazines/duguid-paul-1954