viswatech · 3 years ago
ARTIFICIAL INTELLIGENCE
A Brief History of Artificial Intelligence
Intelligent robots and artificial beings first appeared in ancient Greek myths. Aristotle's development of the syllogism and its use of deductive reasoning was a key moment in mankind's quest to understand its own intelligence. While the roots are long and deep, the history of artificial intelligence as we think of it today spans less than a century. The following is a quick look at some of the most important events in AI.
1940s
·         (1943) Warren McCullough and Walter Pitts publish "A Logical Calculus of Ideas Immanent in Nervous Activity." The paper proposed the first mathematical model for building a neural network. 
·         (1949) In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposes the theory that neural pathways are created from experiences and that connections between neurons become stronger the more frequently they're used. Hebbian learning continues to be an important model in AI.
1950s
·         (1950) Alan Turing publishes "Computing Machinery and Intelligence," proposing what is now known as the Turing Test, a method for determining if a machine is intelligent.
·         (1951) Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer.
·         (1950) Claude Shannon publishes the paper "Programming a Computer for Playing Chess."
·         (1950) Isaac Asimov publishes I, Robot, collecting his "Three Laws of Robotics" (first stated in his 1942 story "Runaround").
·         (1952) Arthur Samuel develops a self-learning program to play checkers. 
·         (1954) The Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English. 
·         (1956) The phrase artificial intelligence is coined at the "Dartmouth Summer Research Project on Artificial Intelligence." Led by John McCarthy, the conference, which defined the scope and goals of AI, is widely considered to be the birth of artificial intelligence as we know it today. 
·         (1956) Allen Newell and Herbert Simon demonstrate Logic Theorist (LT), the first reasoning program. 
·         (1958) John McCarthy develops the AI programming language Lisp and publishes the paper "Programs with Common Sense." The paper proposed the hypothetical Advice Taker, a complete AI system with the ability to learn from experience as effectively as humans do.  
·         (1959) Allen Newell, Herbert Simon and J.C. Shaw develop the General Problem Solver (GPS), a program designed to imitate human problem-solving. 
·         (1959) Herbert Gelernter develops the Geometry Theorem Prover program.
·         (1959) Arthur Samuel coins the term machine learning while at IBM.
·         (1959) John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.
1960s
·         (1963) John McCarthy starts the AI Lab at Stanford.
·         (1966) The Automatic Language Processing Advisory Committee (ALPAC) report by the U.S. government details the lack of progress in machine translations research, a major Cold War initiative with the promise of automatic and instantaneous translation of Russian. The ALPAC report leads to the cancellation of all government-funded MT projects. 
·         (1969) The first successful expert systems, DENDRAL, a program for inferring the molecular structure of organic compounds, and MYCIN, designed to diagnose blood infections, are created at Stanford.
1970s
·         (1972) The logic programming language PROLOG is created.
·         (1973) The "Lighthill Report," detailing the disappointments in AI research, is released by the British government and leads to severe cuts in funding for artificial intelligence projects. 
·         (1974-1980) Frustration with the progress of AI development leads to major DARPA cutbacks in academic grants. Combined with the earlier ALPAC report and the previous year's "Lighthill Report," artificial intelligence funding dries up and research stalls. This period is known as the "First AI Winter." 
1980s
·         (1980) Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off an investment boom in expert systems that will last for much of the decade, effectively ending the first "AI Winter."
·         (1982) Japan's Ministry of International Trade and Industry launches the ambitious Fifth Generation Computer Systems project. The goal of FGCS is to develop supercomputer-like performance and a platform for AI development.
·         (1983) In response to Japan's FGCS, the U.S. government launches the Strategic Computing Initiative to provide DARPA-funded research in advanced computing and artificial intelligence.
·         (1985) Companies are spending more than a billion dollars a year on expert systems, and an entire industry known as the Lisp machine market springs up to support them. Companies like Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.
·         (1987-1993) As computing technology improved, cheaper alternatives emerged and the Lisp machine market collapsed in 1987, ushering in the "Second AI Winter." During this period, expert systems proved too expensive to maintain and update, eventually falling out of favor.
1990s
·         (1991) U.S. forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.
·         (1992) Japan terminates the FGCS project, citing failure to meet the ambitious goals outlined a decade earlier.
·         (1993) DARPA ends the Strategic Computing Initiative after spending nearly $1 billion and falling far short of expectations.
·         (1997) IBM's Deep Blue beats world chess champion Garry Kasparov.
2000s
·         (2005) STANLEY, a self-driving car, wins the DARPA Grand Challenge.
·         (2005) The U.S. military begins investing in autonomous robots like Boston Dynamics' "Big Dog" and iRobot's "PackBot."
·         (2008) Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app. 
2010-2014
·         (2011) IBM's Watson trounces the competition on Jeopardy!. 
·         (2011) Apple releases Siri, an AI-powered virtual assistant through its iOS operating system. 
·         (2012) Andrew Ng, co-founder of the Google Brain deep learning project, feeds a neural network running deep learning algorithms 10 million images taken from YouTube videos as a training set. The neural network learns to recognize a cat without being told what a cat is, ushering in a breakthrough era for neural networks and deep learning funding.
·         (2014) Google makes the first self-driving car to pass a state driving test. 
·         (2014) Amazon releases Alexa, a virtual assistant for the home.
2015-2021
·         (2016) Google DeepMind's AlphaGo defeats world champion Go player Lee Sedol. The complexity of the ancient Chinese game was seen as a major hurdle to clear in AI.
·         (2016) Hanson Robotics creates Sophia, a humanoid robot capable of facial recognition, verbal communication and facial expression; the following year she becomes the first "robot citizen."
·         (2018) Google releases natural language processing engine BERT, reducing barriers to translation and comprehension for machine learning applications.
·         (2018) Waymo launches its Waymo One service, allowing users throughout the Phoenix metropolitan area to request a pick-up from one of the company's self-driving vehicles.
·         (2020) Baidu releases its LinearFold AI algorithm to scientific and medical teams working to develop a vaccine during the early stages of the SARS-CoV-2 pandemic. The algorithm is able to predict the secondary structure of the virus's RNA sequence in just 27 seconds, 120 times faster than other methods.
How Does Artificial Intelligence Work?
  AI Approaches and Concepts
Less than a decade after breaking the Nazi encryption machine Enigma and helping the Allied Forces win World War II, mathematician Alan Turing changed history a second time with a simple question: "Can machines think?" 
Turing's paper "Computing Machinery and Intelligence" (1950), and its subsequent Turing Test, established the fundamental goal and vision of artificial intelligence.   
At its core, AI is the branch of computer science that aims to answer Turing's question in the affirmative. It is the endeavor to replicate or simulate human intelligence in machines.
The expansive goal of artificial intelligence has given rise to many questions and debates. So much so, that no singular definition of the field is universally accepted.  
The major limitation in defining AI as simply "building machines that are intelligent" is that it doesn't actually explain what artificial intelligence is or what makes a machine intelligent. AI is an interdisciplinary science with multiple approaches, but advancements in machine learning and deep learning are creating a paradigm shift in virtually every sector of the tech industry.
In their groundbreaking textbook Artificial Intelligence: A Modern Approach, authors Stuart Russell and Peter Norvig approach the question by unifying their work around the theme of intelligent agents in machines. With this in mind, AI is "the study of agents that receive percepts from the environment and perform actions." (Russell and Norvig viii)
 Norvig and Russell go on to explore four different approaches that have historically defined the field of AI: 
1.     Thinking humanly
2.     Thinking rationally
3.     Acting humanly 
4.     Acting rationally
The first two ideas concern thought processes and reasoning, while the others deal with behavior. Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting that "all the skills needed for the Turing Test also allow an agent to act rationally" (Russell and Norvig 4).
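The agent view of AI can be made concrete with a small sketch. The thermostat rules and function names below are illustrative assumptions, not anything from Russell and Norvig's text; the point is only the shape of an agent: a mapping from percepts received from the environment to actions performed on it.

```python
def thermostat_agent(percept: float) -> str:
    """A trivial rational agent: the percept is a temperature reading,
    and the action is chosen by a fixed rule aimed at the best outcome."""
    if percept < 18.0:
        return "heat"
    if percept > 24.0:
        return "cool"
    return "idle"

def run(agent, percepts):
    """The agent-environment loop: the environment supplies percepts,
    the agent returns actions."""
    return [agent(p) for p in percepts]

print(run(thermostat_agent, [15.0, 21.0, 30.0]))  # ['heat', 'idle', 'cool']
```

Any of the four approaches above can be read as a different standard for judging whether this mapping is "intelligent."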
Patrick Winston, the Ford professor of artificial intelligence and computer science at MIT, defines AI as  "algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together."
While these definitions may seem abstract to the average person, they help focus the field as an area of computer science and provide a blueprint for infusing machines and programs with machine learning and other subsets of artificial intelligence. 
The Four Types of Artificial Intelligence
 Reactive Machines
A reactive machine follows the most basic of AI principles and, as its name implies, is capable of only using its intelligence to perceive and react to the world in front of it. A reactive machine cannot store a memory and as a result cannot rely on past experiences to inform decision making in real-time.
Perceiving the world directly means that reactive machines are designed to complete only a limited number of specialized duties. Intentionally narrowing a reactive machine’s worldview is not any sort of cost-cutting measure, however, and instead means that this type of AI will be more trustworthy and reliable — it will react the same way to the same stimuli every time. 
A famous example of a reactive machine is Deep Blue, which IBM designed in the 1990s as a chess-playing supercomputer; it defeated international grandmaster Garry Kasparov in a game. Deep Blue could only identify the pieces on a chess board, know how each moves under the rules of chess, acknowledge each piece's present position, and determine the most logical move at that moment. The computer was not pursuing future potential moves by its opponent or trying to put its own pieces in better position. Every turn was viewed as its own reality, separate from any other movement made beforehand.
Another example of a game-playing reactive machine is Google's AlphaGo. AlphaGo likewise holds no memory of past games, relying instead on its own neural network to evaluate developments in the present game, which gives it an edge over Deep Blue in a more complex game. AlphaGo also bested world-class competitors, defeating champion Go player Lee Sedol in 2016.
Though limited in scope and not easily altered, reactive machine artificial intelligence can attain a level of complexity, and offers reliability when created to fulfill repeatable tasks.
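The reactive pattern described in this section can be sketched as a pure function of the current position: score only the positions reachable right now and pick the best, with no lookahead and no state carried between calls, so identical inputs always yield identical outputs. The toy "game" here (numeric positions, additive moves) is invented purely for illustration.

```python
def best_move(position, legal_moves, apply_move, evaluate):
    """A reactive chooser: rank each immediately reachable position and
    take the best. Nothing is remembered between calls, so the same
    position always produces the same move."""
    return max(legal_moves, key=lambda m: evaluate(apply_move(position, m)))

# Toy example: a position is a number, moves add to it, higher is better.
moves = [-2, 1, 3]
choice = best_move(10, moves, lambda p, m: p + m, lambda p: p)
print(choice)  # 3, and the same answer on every repeated call
```

The reliability the section mentions falls out of this structure: with no stored memory, the machine's behavior is fully determined by the present stimulus.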
 Limited Memory
Limited memory artificial intelligence has the ability to store previous data and predictions when gathering information and weighing potential decisions — essentially looking into the past for clues on what may come next. Limited memory artificial intelligence is more complex and presents greater possibilities than reactive machines.
Limited memory AI is created when a team continuously trains a model in how to analyze and utilize new data, or when an AI environment is built so models can be automatically trained and renewed. When utilizing limited memory AI in machine learning, six steps must be followed:
·         Training data must be created.
·         The machine learning model must be created.
·         The model must be able to make predictions.
·         The model must be able to receive human or environmental feedback.
·         That feedback must be stored as data.
·         These steps must be reiterated as a cycle.
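The six-step cycle can be sketched as a loop, under the assumption that feedback simply becomes new training data. The running-average "model" below is a deliberately trivial stand-in, chosen only to keep the cycle visible.

```python
class RunningMeanModel:
    """A stand-in model that predicts the mean of everything it has seen."""
    def __init__(self):
        self.data = []            # step 1/5: the stored training data

    def train(self, examples):    # step 2: (re)train the model
        self.data.extend(examples)

    def predict(self):            # step 3: make a prediction
        return sum(self.data) / len(self.data)

model = RunningMeanModel()
model.train([2.0, 4.0])           # steps 1-2: create data, create model
for _ in range(3):                # step 6: reiterate as a cycle
    prediction = model.predict()  # step 3: predict
    feedback = prediction + 1.0   # step 4: human/environmental feedback
    model.train([feedback])       # step 5: feedback stored as data
print(round(model.predict(), 2))  # 3.78
```

The essential point is the feedback edge: unlike a reactive machine, each pass through the loop leaves the stored data, and therefore future predictions, changed.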
There are three major machine learning models that utilize limited memory artificial intelligence:
·         Reinforcement learning, which learns to make better predictions through repeated trial-and-error.
·         Long Short Term Memory (LSTM), which utilizes past data to help predict the next item in a sequence. LSTMs treat more recent information as most important when making predictions, discounting data from further in the past while still utilizing it to form conclusions.
·         Evolutionary Generative Adversarial Networks (E-GAN), which evolve over time, exploring slightly modified paths based on previous experiences with every new decision. This model is constantly in pursuit of a better path and utilizes simulations and statistics, or chance, to predict outcomes throughout its evolutionary mutation cycle.
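The trial-and-error idea behind the first item, reinforcement learning, can be sketched with tabular Q-learning on an invented toy environment: a three-state chain where stepping "right" off the end earns a reward. The environment, learning rate and discount factor are all illustrative assumptions.

```python
import random

N_STATES, ACTIONS = 3, ["left", "right"]
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}  # value table

def step(state, action):
    """Toy environment: reward 1 for stepping right off the end,
    which resets the agent to the start."""
    if action == "right":
        if state == N_STATES - 1:
            return 0, 1.0
        return state + 1, 0.0
    return max(state - 1, 0), 0.0

random.seed(0)
state = 0
for _ in range(2000):                          # repeated trial and error
    action = random.choice(ACTIONS)            # explore at random
    next_state, reward = step(state, action)
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    # Update the estimate toward reward plus discounted future value.
    q[(state, action)] += 0.1 * (reward + 0.9 * best_next - q[(state, action)])
    state = next_state

# After training, "right" should be valued above "left" in every state.
print(all(q[(s, "right")] > q[(s, "left")] for s in range(N_STATES)))
```

The stored Q-table is exactly the "limited memory" at work: past trials accumulate into predictions that steer future decisions.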
 Theory of Mind
Theory of Mind is just that — theoretical. We have not yet achieved the technological and scientific capabilities necessary to reach this next level of artificial intelligence. 
The concept is based on the psychological premise that other living things have thoughts and emotions that affect one's own behavior. For AI machines, this would mean that AI could comprehend how humans, animals and other machines feel and make decisions through self-reflection and determination, then utilize that information to make decisions of its own. Essentially, machines would have to be able to grasp and process the concept of "mind," the fluctuations of emotions in decision making and a litany of other psychological concepts in real time, creating a two-way relationship between people and artificial intelligence.
Self-awareness
Once Theory of Mind can be established in artificial intelligence, sometime well into the future, the final step will be for AI to become self-aware. This kind of artificial intelligence possesses human-level consciousness and understands its own existence in the world, as well as the presence and emotional state of others. It would be able to understand what others may need based on not just what they communicate to them but how they communicate it. 
Self-awareness in artificial intelligence relies on human researchers both understanding the premise of consciousness and then learning how to replicate it so it can be built into machines.
 How is AI Used? 
While addressing a crowd at the Japan AI Experience in 2017,  DataRobot CEO Jeremy Achin began his speech by offering the following definition of how AI is used today:
"AI is a computer system able to perform tasks that ordinarily require human intelligence... Many of these artificial intelligence systems are powered by machine learning, some of them are powered by deep learning and some of them are powered by very boring things like rules."
Artificial intelligence generally falls under two broad categories: 
·         Narrow AI: Sometimes referred to as "Weak AI," this kind of artificial intelligence operates within a limited context and is a simulation of human intelligence. Narrow AI is often focused on performing a single task extremely well and while these machines may seem intelligent, they are operating under far more constraints and limitations than even the most basic human intelligence.   
·         Artificial General Intelligence (AGI): AGI, sometimes referred to as "Strong AI," is the kind of artificial intelligence we see in the movies, like the robots from Westworld or Data from Star Trek: The Next Generation. AGI is a machine with general intelligence and, much like a human being, it can apply that intelligence to solve any problem.
nidhibansalji · 4 years ago
What is artificial intelligence and how is it used in 2021?
Artificial intelligence has dramatically changed the business landscape. What began as AI-based automation is now capable of imitating human interaction. It is not just the human-like capabilities that make artificial intelligence unique. An advanced AI algorithm offers far better speed and reliability at a much lower cost than its human counterparts.
akshay-09 · 5 years ago
In this artificial intelligence tutorial for beginners, you will learn the major basic concepts of artificial intelligence, such as what AI is, the future of AI and careers in AI.
aibridgeml-blog · 5 years ago
Check out this video for brief information about artificial intelligence and its benefits. Visit http://bit.ly/2lw5pDO for AI services and solutions.
phungthaihy · 5 years ago
AI vs Machine Learning vs Deep Learning | AI vs ML vs DL | Intellipaat
logintocourses · 5 years ago
What is Artificial Intelligence | AI in 5 Minutes | AI Tutorial
gyanwalebaba-blog · 7 years ago
Artificial Intelligence in this Advanced World
Artificial intelligence is on the rise both in the business world and in everyday life. We are fortunate to live in an era of technological advancement. Twenty years ago, most functions were carried out by humans and work was done manually. Everything has changed now, and machines have taken over many tasks once done by people. With regard to automation, artificial intelligence has a special part to play. AI has changed our lives and has, in fact, become an important part of our daily routines. But what exactly is artificial intelligence?
What is Artificial Intelligence – An Overview
The concept of artificial intelligence has changed over time, but the core remains the same: artificial intelligence is the simulation of human intelligence in machines. It is a well-known fact that humans are the most intelligent creatures on Earth, and the goal of AI is to function as individually and independently as humans do.

AI can dramatically improve technology in our homes and workplaces. For instance, the current smart speaker Alexa communicates with humans through voice recognition and responds to queries. A future Alexa, powered by more advanced AI, might alert you about your forgotten keys or wallet before you leave the house, thinking independently and acting intelligently. The introduction of AI brings the idea of an error-free world, and it is a great help to humans.

Perhaps the most exciting field is robotics. Everyone is aware that conventional robots work only as commanded and are not intelligent; with the progress in artificial intelligence, roboticists are trying to change that. Kismet, a humanoid robot, can recognize human body language and voice inflection and respond appropriately. Research is being conducted in mobility mechanisms, estimation theory and language interfaces to build a relationship between human and machine. This line of work led to CIMON, the first robot designed around artificial intelligence to fly to space. While much of this technology is fairly fundamental at present, we can expect significant changes in the future, as constant news arrives of yet another AI machine overcoming unprecedented hurdles and outperforming humans.
Some of the ways it will impact our lives are:
·         Automated transportation
·         Cyborg technology
·         Saving us from natural disasters
·         Human implants for the betterment of our lives
·         Acting as a good friend and caring companion for adults

Applications of Artificial Intelligence -
You might be aware that some of the applications you already use, like Netflix, Spotify and Siri, work on the concept of artificial intelligence. Their human-like capabilities make them unique. Also, advanced algorithms make them work error-free, with high accuracy and speed. AI uses programming languages to bring maximum benefits to people's lives. Among its achievements so far are autopilots for self-driving cars and other development projects. Some of the top applications of artificial intelligence are:

Transportation – When people think about self-driving cars, the immediate picture that comes to mind is an automatic vehicle with no driver involved. The idea of a driverless vehicle rolling around the streets is really incredible. Although early versions of automated cars had only automated cruise control, AI is now trying to make everything inside the car fully automated, with complete safety for the passengers.

AI in Healthcare – AI and robotics are redrawing the healthcare landscape. One of the areas where AI has the most impact in healthcare is AI-assisted robotic surgery. These robots will have the ability to suggest new surgical techniques based on past surgical experience. A medical survey reveals that AI-assisted robotic surgery resulted in fewer surgical complications compared to surgery performed without AI aid.
AI and CyberSecurity – Security systems of the past used simple sensors and alarms to detect threats; AI-powered security systems, however, provide a higher level of security and greater reliability. The newest generation of self-monitored security systems combines motion detectors, sensors and security cameras with AI technology that can detect potential break-ins and other emergencies.
Machine Intelligence Vs Artificial Intelligence -
Artificial intelligence is sometimes called machine intelligence when intelligent characteristics are displayed by machines rather than by humans or animals. One typical distinction drawn between them is that AI aims at human-level intelligence in performing tasks, whereas machine intelligence works independently based on programmed data. Machine learning is the most successful approach to AI and is based on neural networks. In fact, companies like Google and Facebook are using machine learning to optimize advertising and speed up search. When machine learning interacts with humans in a convincing way, artificial intelligence comes into play. AI cannot exist without machine learning, yet its ability to respond to human questions makes it unique.

Can We Create Artificial Intelligence?
Creating true artificial intelligence with the approaches followed now remains out of reach. Although AI has the potential to replace humans in many jobs and free them from the drudgery of manual labour, it could also shake many societal foundations. Moreover, the human brain is composed of billions of neurons that are responsible for human intelligence, so creating a machine that could perform the functions of billions of neurons is extremely difficult; building AI is no easy feat. AI has seen consistent success since 1950 and is one of the most in-demand fields. Organizations prefer graduates with a master's in AI because of their training in both computing and its social aspects. Studying at the best schools, like Carnegie Mellon University or the Massachusetts Institute of Technology, can put you at the forefront of the revolution. You can also self-study with the aid of the best books on artificial intelligence, or take courses like CS188 to learn the ideas and techniques underlying the design of intelligent computer systems.