#inference
sounmashnews · 2 years
SALINAS, Calif. — With a jury already deliberating whether to find Paul Flores guilty of the 1996 murder of Cal Poly San Luis Obispo student Kristin Smart, a second jury on Wednesday began weighing the fate of his father, who is charged as an accessory to the crime.

Ruben Flores’ attorney, Harold Mesick, told jurors the evidence shows 19-year-old Smart was “not happy” at Cal Poly and could be alive.

Flores, 81, is being tried alongside his 45-year-old son. The case was moved by a San Luis Obispo County judge more than 100 miles north to ensure a fair trial. Two juries have heard evidence concurrently over 11 weeks. One is deliberating Paul Flores’ fate. The other heard closing arguments Wednesday in the case against Ruben Flores.

“It is the intent to read the verdicts one after the other,” Monterey County Judge Jennifer O’Keefe said of the separate charges.

Deputy Dist. Atty. Chris Peuvrelle told jurors that Ruben Flores helped his son conceal Smart’s remains under his deck in Arroyo Grande, Calif., for decades.

Peuvrelle said that when Paul Flores killed the college student in May 1996 inside his dorm, he made a call. “He knew the one person who would help with a dead girl on his bed was his father,” the prosecutor said. “It was his version of a 911 call.”

Peuvrelle said Flores raped or tried to rape and ultimately killed Smart before hiding her remains with his father’s help. Although Smart’s body has never been found, she was declared legally dead in 2002. Her body, Peuvrelle said, was the key evidence of a crime. “Ruben Flores has been helping him for the last 26 years,” the prosecutor said.

But Mesick countered that Smart is still missing.

“The state has done a great job of demonizing Paul Flores and my client,” the defense attorney said in his closing remarks. But “my client is absolutely innocent. He has not dug a grave in his life. I think this case screams reasonable doubt.”

Mesick called the prosecution’s theory that Ruben Flores helped his then-19-year-old son move a body from his dormitory room and bury it beneath his deck — where the remains were allegedly removed in 2020 — “ludicrous.”

“What distinguishes this case from most cases is the lack of physical evidence,” Mesick said, boldly asking jurors to return a quick verdict to send a message to prosecutors. “There are no bones, no teeth, no body parts. Kristin Smart may be just missing,” he said. “She was not happy at Cal Poly. It is reasonable to infer she is alive somewhere.”

Smart was last seen walking with Paul Flores near residence halls on campus on May 25, 1996, after attending a party. But Peuvrelle told jurors that Flores, a fellow Cal Poly student, had “hunted” her for months, frequently showing up where she was, including her dormitory.

The night of the party, he appeared out of the darkness to walk her home after she had passed out on a lawn, the prosecutor said. Mesick countered that Flores was there when Smart fell down and “he picked her up.” “He was doing a good deed. He was not hunting her,” he said.

Since the trial’s start in July, Peuvrelle has sought to piece together a narrative of how, during a four-day period when Flores was not seen on campus, he allegedly removed Smart’s body with his father’s help and buried it beneath the deck of his father’s Arroyo Grande home. Ruben Flores, the prosecutor alleged, kept people away from the deck for years. Then, in 2020, as police were zeroing in on the home, a neighbor testified that she saw a trailer back up to the property.

Peuvrelle said that when investigators eventually searched Flores’ home after his arrest in April 2021, they found a “trophy room” with several items tied to the Smart investigation, including a note that said, “Dig the yard.”

He showed a stark patch of ground beneath the deck and said, “This is Kristin’s grave.”

A soil scientist and archaeologist testified that ground radar showed an anomaly in the soil and indications of bodily fluids consistent with a body having been buried and removed, the prosecutor repeatedly reminded jurors. The hole was 6 feet by 4 feet by 4 feet. Archaeologist Cindy Arrington, Peuvrelle noted, said that the hole was dug by hand and that fluid had leaked into the soil slowly, creating a bathtub-like ring.

Showing slides from a PowerPoint presentation, Peuvrelle said that a chemical test performed by an independent lab came back “positive” for the presence of human blood in the soil, and that fibers recovered from the soil matched the colors of Smart’s clothing.

Mesick, however, said expert defense witnesses indicated that the blood test used is not valid for soil. He said there would have been gallons of liquid had a body been there. “The amount of blood is so minuscule ... it could be anyone’s blood. It could be Ruben Flores’ blood,” he said. “I am going to tell you it is not Kristin Smart’s blood.”

To find Flores guilty of accessory to murder, the jury must first find that his son committed first- or second-degree murder, Peuvrelle reminded jurors. Peuvrelle said testimony from two women — identified during the trial as Sarah Doe and Rhonda Doe, who said Paul Flores raped them in the decades after Smart vanished — supports the prosecution’s theory that Flores sexually assaulted Smart, then killed her and hid her body.

“The only truthful verdict is Ruben Flores is guilty of accessory,” Peuvrelle told the jury.
bharatlivenewsmedia · 2 years
Shiv Sena slams Akbaruddin Owaisi for visiting Aurangzeb’s tomb; AIMIM says don’t draw ‘different inference’

Former Shiv Sena MP Chandrakant Khaire and the party’s Aurangabad district unit chief and MLC Ambadas Danve have taken strong objection to Owaisi’s visit to the tomb.
lovemagics · 2 years
Inter Caste Love Marriage Problem Solution

Our service name implies that this is an online astrology service, which is used for love marriage. In our service, we will give you our best online site for our service.

If you are facing any kind of problem related to your family members and relatives over a love marriage, then you can use our online astrology service. If you have no money to give us, then you don’t need to worry about it, because we give our service online. After using our service, it gives a satisfactory and positive result within 1 month.

Contact PAPA RAJESH for spells, dua prayers, magic rings, etc. WhatsApp 1: +27 836 650 046 WhatsApp 2: +27 82 357 2943. Nedre Torggate 6, 3015 Drammen, Norway
vidmidnews · 5 years
Senate Judiciary Committee Chairman Lindsey Graham (R-SC) said that he was done caring about special counsel Robert Mueller’s investigation into Russian interference in the 2016 presidential campaign and President Donald Trump’s repeated attempts
jodyedgarus · 6 years
How Shoddy Statistics Found A Home In Sports Research
Graphics by Ella Koeze
At first blush, the studies look reasonable enough. Low-intensity stretching seems to reduce muscle soreness. Beta-alanine supplements may boost performance in water polo players. Isokinetic strength training could improve swing kinematics in golfers. Foam rollers can reduce muscle soreness after exercise.
The problem: All of these studies shared a statistical analysis method unique to sports science. And that method is severely flawed.
The method is called magnitude-based inference, or MBI. Its creator, Will Hopkins, is a New Zealand exercise physiologist with decades of experience — experience that he has harnessed to push his methodology into the sports science mainstream. The methodology allows researchers to find effects more easily compared with traditional statistics, but the way in which it is conducted undermines the credibility of these results. That MBI has persisted as long as it has points to some of science’s vulnerabilities — and to how science can correct itself.
MBI was created to address an important problem. Science is hard, and sports science is particularly so. If you want to study, say, whether a sports drink or training method can improve athletic performance, you have to recruit a bunch of volunteers and convince them to come into the lab for a battery of time- and energy-intensive tests. These studies require engaged and, in many cases, highly fit athletes who are willing to disrupt their lives and normal training schedules to take part. As a result, it’s not unusual for a treatment to be tested on fewer than 10 people. Those small samples make it extremely difficult to distinguish the signal from the noise and even harder to detect the kind of small benefits that in sport could mean the difference between a gold medal and no medal at all.
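To get a feel for those numbers, here is a quick power calculation of my own (an illustration, not a figure from the article), using a conventional two-sample t-test: with 10 athletes per group, a small standardized effect of 0.2 would be detected less than 10 percent of the time, and reaching 80 percent power would take roughly 400 athletes per group.

# A quick power calculation (my illustration, not the article's): how hard is it
# to detect a small effect (Cohen's d = 0.2) with the tiny samples common in
# sports science?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a two-sided, two-sample t-test with 10 athletes per group, alpha = 0.05
power_small_n = analysis.power(effect_size=0.2, nobs1=10, alpha=0.05)

# Sample size per group needed to reach 80% power for the same effect
n_needed = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)

print(f"Power with n=10 per group: {power_small_n:.0%}")   # well under 10%
print(f"n per group for 80% power: {n_needed:.0f}")        # roughly 400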
Hopkins’s workaround for all of this, MBI, has no sound theoretical basis. It is an amalgam of two statistical approaches — frequentist and Bayesian — and relies on opaque formulas embedded in Excel spreadsheets into which researchers can input their data. The spreadsheets then calculate whether an observed effect is likely to be beneficial, trivial or harmful and use statistical calculations such as confidence intervals and effect sizes to produce probabilistic statements about a set of results.
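To make that concrete, here is a minimal sketch of an MBI-style calculation, illustrative only and not the actual spreadsheet formulas: it assumes a normal approximation and a "smallest worthwhile change" of 0.2 chosen by the analyst, and it reads the "chances" of benefit, triviality and harm straight off the sampling distribution centered on the observed effect.

# A minimal sketch of an MBI-style calculation (illustrative only, not the
# actual spreadsheet formulas). Assumptions: a normal approximation and a
# "smallest worthwhile change" of 0.2 chosen by the analyst.
from scipy import stats

def mbi_chances(effect, std_err, smallest_worthwhile=0.2):
    """Chances that the true effect is beneficial, trivial or harmful,
    read off the sampling distribution centered on the observed effect."""
    p_beneficial = 1 - stats.norm.cdf(smallest_worthwhile, loc=effect, scale=std_err)
    p_harmful = stats.norm.cdf(-smallest_worthwhile, loc=effect, scale=std_err)
    p_trivial = 1 - p_beneficial - p_harmful
    return p_beneficial, p_trivial, p_harmful

# Example: a modest effect estimated from a small, noisy study
p_ben, p_triv, p_harm = mbi_chances(effect=0.3, std_err=0.25)
print(f"beneficial: {p_ben:.0%}, trivial: {p_triv:.0%}, harmful: {p_harm:.0%}")

On MBI's qualitative scale, chances between roughly 25 and 75 percent are reported as "possibly" and above 75 percent as "likely," so a study like the example above, too noisy to reach conventional significance, still comes out "possibly beneficial."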
In doing so, those spreadsheets often find effects where traditional statistical methods don’t. Hopkins views this as a benefit because it means that more studies turn up positive findings worth publishing. But others see it as a threat to sports science’s integrity because it increases the chances that those findings aren’t real.
A 2016 paper by Hopkins and collaborator Alan Batterham makes the case that MBI is superior to the standard statistical methods used in the field. But I’ve run it by about a half-dozen statisticians, and each has dismissed the pair’s conclusions and the MBI method as invalid. “It’s basically a math trick that bears no relationship to the real world,” said Andrew Vickers, a statistician at Memorial Sloan Kettering Cancer Center. “It gives the appearance of mathematical rigor,” he said, by inappropriately combining two forms of statistical analysis using a mathematical oversimplification.
When I sent the paper to Kristin Sainani, a statistician at Stanford University, she got so riled up that she wrote a paper in Medicine & Science in Sports & Exercise (MSSE) outlining the problems with MBI. Sainani ran simulations showing that what MBI really does is lower the standard of evidence and increase the false positive rate. She details how this works in a 50-minute video, and a chart in the original article shows how these flaws play out in practice.
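The flavor of those simulations is easy to reproduce. The sketch below is a simplified version of my own, not Sainani's code: it assumes the true effect is zero, draws small samples, and compares a conventional two-sided t-test against a simplified MBI-style rule (declare an effect "possibly beneficial" or "possibly harmful" when that chance exceeds 25 percent and the chance of the opposite outcome is below 5 percent).

# A simplified re-creation of the kind of simulation Sainani describes
# (my assumptions, not her code): the true effect is zero, samples are small,
# and we compare a conventional t-test with a simplified MBI-style rule.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials, swc = 10, 20_000, 0.2   # sample size, simulations, smallest worthwhile change
t_hits = mbi_hits = 0

for _ in range(trials):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)     # the null hypothesis is true
    mean, se = sample.mean(), sample.std(ddof=1) / np.sqrt(n)

    # Conventional analysis: two-sided one-sample t-test at alpha = 0.05
    _, p = stats.ttest_1samp(sample, 0.0)
    t_hits += p < 0.05

    # Simplified MBI-style rule: call the effect "possibly beneficial/harmful" when
    # that chance exceeds 25% and the chance of the opposite outcome is below 5%
    p_ben = 1 - stats.norm.cdf(swc, loc=mean, scale=se)
    p_harm = stats.norm.cdf(-swc, loc=mean, scale=se)
    mbi_hits += (p_ben > 0.25 and p_harm < 0.05) or (p_harm > 0.25 and p_ben < 0.05)

print(f"t-test false positive rate:    {t_hits / trials:.1%}")    # about 5%
print(f"MBI-style false positive rate: {mbi_hits / trials:.1%}")  # several times higher

The exact numbers depend on the thresholds and the sample size, but the pattern is the point: the MBI-style rule flags several times more null effects than the conventional test does.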
To highlight Sainani’s findings, MSSE commissioned an accompanying editorial, written by biostatistician Doug Everett, that said MBI is flawed and should be abandoned. Hopkins and his colleagues have yet to provide a sound theoretical basis for MBI, Everett told me. “I almost get the sense that this is a cult. The method has a loyal following in the sports and exercise science community, but that’s the only place that’s adopted it. The fact that it’s not accepted by the wider statistics community means something.”
How did this problematic method take hold among the sports science research community? In a perfect world, science would proceed as a dispassionate enterprise, marching toward truth and more concerned with what is right than with who is offering the theories. But scientists are human, and their passions, egos, loyalties and biases inevitably shape the way they do their work. The history of MBI demonstrates how forceful personalities with alluring ideas can muscle their way onto the stage.
The first explanation of MBI in the scientific literature came in a 2006 commentary that Hopkins and Batterham published in the International Journal of Sports Physiology and Performance. Two years later, it was rebutted in the same journal, when two statisticians said MBI “lacks a proper theoretical foundation” within the common, frequentist approach to statistics.
But Batterham and Hopkins were back in the late 2000s, when editors at Medicine & Science in Sports & Exercise (the flagship journal of the American College of Sports Medicine) invited them and two others to create a set of statistical guidelines for the journal. The guidelines recommended MBI (among other things), but the nine peer reviewers failed to reach a unanimous decision to accept the guidelines. Andrew Young, then editor in chief of MSSE, told me that their concerns weren’t only about MBI — some reviewers “felt the recommendations were too rigid and would be interpreted as rules for authors” — but “all reviewers expressed some concerns that MBI was controversial and not yet accepted by mainstream statistical folks.”
Young published the group’s guidelines as an invited commentary with an editor’s note disclosing that although most of the reviewers recommended publication of the article, “there remain several specific aspects of the discussion on which authors and reviewers strongly disagreed.” (In fact, three reviewers objected to publishing them at all.)
Hopkins and Batterham continued to press their case from there. After Australian statisticians Alan Welsh and Emma Knight published an analysis of MBI in MSSE in 2014 concluding that the method was invalid and should not be used, Hopkins and Batterham responded with a post at Sportsci.org, “Magnitude-Based Inference Under Attack.” They then wrote a paper contending that “MBI is a trustworthy, nuanced alternative” to the standard method of statistical analysis, null-hypothesis significance testing. That paper was rejected by MSSE. (“I put it down to two things,” Hopkins told me of MBI critics. “Just plain ignorance and stupidity.”) Undeterred, Hopkins submitted it to Sports Science and said he “groomed” potential peer reviewers in advance by contacting them and encouraging them to “give it an honest appraisal.” The journal published it in 2016.
Which brings us to the last year of drama, which has featured a preprint on SportRxiv criticizing MBI, Sainani’s paper and more responses from Batterham and Hopkins, who dispute Sainani’s calculations and conclusions in a response at Sportsci.org titled “The Vindication of Magnitude-Based Inference.”
Has all this back and forth given you whiplash? The papers themselves probably won’t help. They’re mostly technical and difficult to follow without a deep understanding of statistics. And like researchers in many other fields, most sports scientists don’t receive extensive training in stats and may not have the background to fully assess the arguments getting tossed around here. Which means the debate largely turns on tribalism. Whom are you going to believe? A bunch of statisticians from outside the field, or a well-established giant from within it?
For a while, Hopkins seemed to have the upper hand. That 2009 MSSE commentary touting MBI that was published despite reviewers’ objections has been cited more than 2,500 times, and many papers have used it as evidence for the MBI approach. Hopkins gives MBI seminars, and Victoria University offers an Applied Sports Statistics unit developed by Hopkins that has been endorsed by the British Association of Sport and Exercise Sciences and Exercise & Sports Science Australia.
“Will is a very enthusiastic man. He’s semi-retired and a lot older than most of the people he’s dealing with,” Knight said. She wrote her critique of MBI after becoming frustrated with researchers at the Australian Institute of Sport (where she worked at the time) coming to her with MBI spreadsheets. “They all very much believed in it, but nobody could explain it.”
These researchers believed in the spreadsheets because they believed in Hopkins — a respected physiologist who speaks with great confidence. He sells his method by highlighting the weaknesses of p-values and then promising that MBI can direct them to the things that really matter. “If you have very small sample sizes, it’s almost impossible to find statistical significance, but that doesn’t mean the effect isn’t there,” said Eric Drinkwater, a sports scientist at Deakin University in Australia who studied for his Ph.D. under Hopkins. “Will taught me about a better way,” he said. “It’s not about finding statistical significance — it’s about the magnitude of the change and is the effect a meaningful result.” (Drinkwater also said he is “prepared to accept that this is a controversial issue” — and perhaps will go with traditional measures such as confidence limits and effect sizes rather than using MBI.)
It’s easy to see MBI’s appeal beyond Hopkins, too. It promises to do the impossible: detect small effects in small sample sizes. Hopkins points to legitimate discussions about the limits of null-hypothesis significance testing as evidence that MBI is better. But this selling point is a sleight of hand. The fundamental problem it’s trying to tackle — gleaning meaningful information from studies with noisy and limited data sets — can’t be solved with new statistics. Although MBI does appear to extract more information from tiny studies, it does this by lowering the standard of evidence.
That’s not a healthy way to do science, Everett said. “Don’t you want it to be right? To call this ‘gaming the system’ is harsh, but that’s almost what it seems like.”
Sainani wonders, what’s the point? “Does just meeting a criteria such as ‘there’s some chance this thing works’ represent a standard we ever want to be using in science? Why do a study at all if this is the bar?”
Even without statistical issues, sports science faces a reliability problem. A 2017 paper published in the International Journal of Sports Physiology and Performance pointed to inadequate validation that surrogate outcomes really reflect what they’re meant to measure, a dearth of longitudinal and replication studies, the limited reporting of null or trivial results, and insufficient scientific transparency as other problems threatening the field’s reliability and validity.
All the back-and-forth arguments about error rate calculations distract from even more important issues, said Andrew Gelman, a statistician at Columbia University who said he agrees with Sainani that the paper claiming MBI’s validity “does not make sense.” “Scientists should be spending more time collecting good data and reporting their raw results for all to see and less time trying to come up with methods for extracting a spurious certainty out of noisy data.” To do that, sports scientists could work collectively to pool their resources, as psychology researchers have done, or find some other way to increase their sample sizes.
Until they do that, they will be engaged in an impossible task. There’s only so much information you can glean from a tiny sample.
from News About Sports https://fivethirtyeight.com/features/how-shoddy-statistics-found-a-home-in-sports-research/
techscopic · 7 years
Voices in AI – Episode 26: A Conversation with Peter Lee
Today’s leading minds talk AI with host Byron Reese
In this episode, Byron and Peter talk about defining intelligence, Venn diagrams, transfer learning, image recognition, and Xiaoice.
Byron Reese:  This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Peter Lee. He is a computer scientist and corporate Vice President at Microsoft Research. He leads Microsoft’s New Experiences and Technologies organization, or NExT, with the mission to create research powered technology and products and advance human knowledge through research. Prior to Microsoft, Dr. Lee held positions in both government and academia. At DARPA, he founded a division focused on R&D programs in computing and related areas. Welcome to the show, Peter. 
Peter Lee:  Thank you. It’s great to be here.
I always like to start with a seemingly simple question which turns out not to be quite so simple. What is artificial intelligence?
Wow. That is not a simple question at all. I guess the simple, one line answer is artificial intelligence is the science or the study of intelligent machines. And, I realize that definition is pretty circular, and I am guessing that you understand that that’s the fundamental difficulty, because it leaves open the question: what is intelligence? I think people have a lot of different ways to think about what is intelligence, but, in our world, intelligence is, “how do we compute how to set and achieve goals in the world.” And this is fundamentally what we’re all after, right now in AI.
That’s really fascinating because you’re right, there is no consensus definition on intelligence, or on life, or on death for that matter. So, I would ask that question: why do you think we have such a hard time defining what intelligence is?
I think we only have one model of intelligence, which is our own, and so when you think about trying to define intelligence it really comes down to a question of defining who we are. There’s fundamental discomfort with that. That fundamental circularity is difficult. If we were able to fly off in some starship to a far-off place, and find a different form of intelligence—or different species that we would recognize as intelligent—maybe we would have a chance to dispassionately study that, and come to some conclusions. But it’s hard when you’re looking at something so introspective.
When you get into computer science research, at least here at Microsoft Research, you do have to find ways to focus on specific problems; so, we ended up focusing our research in AI—and our tech development in AI, roughly speaking—in four broad categories, and I think these categories are a little bit easier to grapple with. One is perception—that’s endowing machines with the ability to see and hear, much like we do. The second category is learning—how to get machines to get better with experience? The third is reasoning—how do you make inferences, logical inferences, commonsense inferences about the world? And then the fourth is language—how do we get machines to be intelligent in interacting with each other and with us through language? Those four buckets—perception, learning, reasoning and language—they don’t define what is intelligence, but they at least give us some kind of clear set of goals and directions to go after.
Well, I’m not going to spend too much time down in those weeds, but I think it’s really interesting. In what sense do you think it’s artificial? Because it’s either artificial in that it’s just mechanical—or that’s just a shorthand we use for that—or it’s artificial in that it’s not really intelligence. You’re using words like “see,” “hear,” and “reason.” Are you using those words euphemistically—can a computer really see or hear anything, or can it reason—or are you using them literally?
The question you’re asking really gets to the nub of things, because we really don’t know. If you were to draw the Venn diagram; you’d have a big circle and call that intelligence, and now you want to draw a circle for artificial intelligence—we don’t know if that circle is the same as the intelligence circle, whether it’s separate but overlapping, whether it’s a subset of intelligence… These are really basic questions that we debate, and people have different intuitions about, but we don’t really know. And then we get to what’s actually happening—what gets us excited and what is actually making it out into the real world, doing real things—and for the most part that has been a tiny subset of these big ideas; just focusing on machine learning, on learning from large amounts of data, models that are actually able to do some useful task, like recognize images.
Right. And I definitely want to go deep into that in just a minute, but I’m curious… So, there’s a wide range of views about AI. Should we fear it? Should we love it? Will it take us into a new golden age? Will it do this? Will it cap out? Is an AGI possible? All of these questions. 
And, I mean, if you ask, “How will we get to Mars?” Well, we don’t know exactly, but we kind of know. But if you ask, “What’s AI going to be like in fifty years?” it’s all over the map. And do you think that is because there isn’t agreement on the kinds of questions I’m asking—like people have different ideas on those questions—or are the questions I’m asking not really even germane to the day-to-day “get up and start building something”? 
I think there’s a lot of debate about this because the question is so important. Every technology is double-edged. Every technology has the ability to be used for both good purposes and for bad purposes, has good consequences and unintended consequences. And what’s interesting about computing technologies, generally, but especially with a powerful concept like artificial intelligence, is that in contrast to other powerful technologies—let’s say in the biological sciences, or in nuclear engineering, or in transportation and so on—AI has the potential to be highly democratized, to be codified into tools and technologies that literally every person on the planet can have access to. So, the question becomes really important: what kind of outcomes, what kinds of possibilities happen for this world when literally every person on the planet can have the power of intelligent machines at their fingertips? And because of that, all of the questions you’re asking become extremely large, and extremely important for us. People care about those futures, but ultimately, right now, our state of scientific knowledge is we don’t really know.
I sometimes talk in analogy about way, way back in the medieval times when Gutenberg invented mass-produced movable type, and the first printing press. And in a period of just fifty years, they went from thirty thousand books in all of Europe, to almost thirteen million books in all of Europe. It was sort of the first technological Moore’s Law. The spread of knowledge that that represented, did amazing things for humanity. It really democratized access to books, and therefore to a form of knowledge, but it was also incredibly disruptive in its time and has been since.
In a way, the potential we see with AI is very similar, and maybe even a bigger inflection point for humanity. So, while I can’t pretend to have any hard answers to the basic questions that you’re asking about the limits of AI and the nature of intelligence, it’s for sure important; and I think it’s a good thing that people are asking these questions and they’re thinking hard about it.
Well, I’m just going to ask you one more and then I want to get more down in the nitty-gritty. 
If the only intelligent thing we know of in the universe, the only general intelligence, is our brain, do you think it’s a settled question that that functionality can be reproduced mechanically? 
I think there is no evidence to the contrary. Every way that we look at what we do in our brains, we see mechanical systems. So, in principle, if we have enough understanding of how our own mechanical system of the brain works, then we should be able to, at a minimum, reproduce that. Now, of course, the way that technology develops, we tend to build things in different ways, and so I think it’s very likely that the kind of intelligent machines that we end up building will be different than our own intelligence. But there’s no evidence, at least so far, that would be contrary to the thesis that we can reproduce intelligence mechanically.
So, to say to take the opposite position for a moment. Somebody could say there’s absolutely no evidence to suggest that we can, for the following reasons. One, we don’t know how the brain works. We don’t know how thoughts are encoded. We don’t know how thoughts are retrieved. Aside from that, we don’t know how the mind works. We don’t know how it is that we have capabilities that seem to be beyond what a hunk of grey matter could do—we’re creative, we have a sense of humor and all these other things. We’re conscious, and we don’t even have a scientific language for understanding how consciousness could come about. We don’t even know how to ask that question or look for that answer, scientifically. So, somebody else might look at it and say, “There’s no reason whatsoever to believe we can reproduce it mechanically.” 
I’m going to use a quote here from, of all people, a non-technologist Samuel Goldwyn, the old movie magnate. And I always reach to this when I get put in a corner like you’re doing to me right now, which is, “It’s absolutely impossible, but it has possibilities.”
All right.
Our current understanding is that brains are fundamentally closed systems, and so we’re learning more and more, and in fact what we learn is loosely inspiring some of the things we’re doing in AI systems, and making progress. How far that goes? It’s really, as you say, it’s unclear because there are so many mysteries, but it sure looks like there are a lot of possibilities.
Now to get kind of down to the nitty-gritty, let’s talk about difficulties and where we’re being successful and where we’re not. My first question is, why do you think AI is so hard? Because humans acquire their intelligence seemingly simply, right? You put a little kid in playschool and you show them some red, and you show them the number three, and then, all of a sudden, they understand what three red things are. I mean, we, kind of, become intelligent so naturally, and yet my frequent flyer program that I call in can’t tell, when I’m telling it my number if I said 8 or H. Why do you think it’s so hard?
What you said is true, although it took you many years to reach that point. And even a child that’s able to do the kinds of things that you just expressed has had years of life. The kinds of expectations that we have, at least today—especially in the commercial sphere for our intelligent machines—sometimes there’s a little bit less patience. But having said that, I think what you’re saying is right.
I mentioned before this Venn diagram; so, there’s this big circle which is intelligence, and let’s just assume that there is some large subset of that which is artificial intelligence. Then you zoom way, way in, and a tiny little bubble inside that AI bubble is machine learning—this is just simply machines that get better with experience. And then a tiny bubble inside that tiny bubble is machine learning from data—where the models that are extracted, that codify what has been learned, are all extracted from analyzing large amounts of data. That’s really where we’re at today—in this tiny bubble, inside this tiny bubble, inside this big bubble we call artificial intelligence.
What is remarkable is that, despite how narrow our understanding is—for the most part all of the exciting progress is just inside this little, tiny, narrow idea of machine learning from data, and there’s even a smaller bubble inside that that’s called a supervised manner—even from that we’re seeing tremendous power, a tremendous ability to create new computing systems that do some pretty impressive and valuable things. It is pretty crazy just how valuable that’s become to companies, like Microsoft. At the same time, it is such a narrow little slice of what we understand of intelligence.
The simple examples that you mentioned, for example, like one-shot learning, where you can show a small child a cartoon picture of a fire truck, and even if that child has never seen a fire truck before in her life, you can take her out on the street, and the first real fire truck that goes down the road the child will instantly recognize as a fire truck. That sort of one-shot idea, you’re right, our current systems aren’t good at.
While we are so excited about how much progress we’re making on learning from data, there are all the other things that are wrapped up in intelligence that are still pretty mysterious to us, and pretty limited. Sometimes, when that matters, our limits get in the way, and it creates this idea that AI is actually still really hard.
You’re talking about transfer learning. Would you say that the reason she can do that is because at another time she saw a drawing of a banana, and then a banana? And another time she saw a drawing of a cat, and then a cat. And so, it wasn’t really a one-shot deal. 
How do you think transfer learning works in humans? Because that seems to be what we’re super good at. We can take something that we learned in one place and transfer that knowledge to another context. You know, “Find, in this picture, the Statue of Liberty covered in peanut butter,” and I can pick that out having never seen a Statue of Liberty in peanut butter, or anything like that. 
Do you think that’s a simple trick we don’t understand how to do yet? Is that what you want it to be, like an “a-ha” moment, where you discover the basic idea. Or do you think it’s a hundred tiny little hacks, and transfer learning in our minds is just, like, some spaghetti code written by some drunken programmer who was on a deadline, right? What do you think that is? Is it a simple thing, or is it a really convoluted, complicated thing? 
Transfer learning turns out to be incredibly interesting, scientifically, and also commercially for Microsoft, turns out to be something that we rely on in our business. What is kind of interesting is, when is transfer learning more generally applicable, versus being very brittle?
For example, in our speech processing systems, the actual commercial speech processing systems that Microsoft provides, we use transfer learning, routinely. When we train our speech systems to understand English speech, and then we train those same systems to understand Portuguese, or Mandarin, or Italian, we get a transfer learning effect, where the training for that second, and third, and fourth language requires less data and less computing power. And at the same time, each subsequent language that we add onto it improves the earlier languages. So, training that English-based system to understand Portuguese actually improves the performance of our speech systems in English, so there are transfer learning effects there.
In our image recognition tasks, there is something called the ImageNet competition that we participate in most years, and the last time that we competed was two years ago in 2015. There are five image processing categories. We trained our system to do well on Category 1—on the basic image classification—then we used transfer learning to not only win the first category, but to win all four other ImageNet competitions. And so, without any further kind of specialized training, there was a transfer learning effect.
Transfer learning actually does seem to happen. In our deep neural net, deep learning research activities, transfer learning effects—when we see them—are just really intoxicating. It makes you think about what you and I do as human beings.
At the same time, it seems to be this brittle thing. We don’t necessarily understand when and how this transfer learning effect is effective. The early evidence from studying these things is that there are different forms of learning, and that somehow the one-shot ideas that even small children are very good at, seem to be out of the purview of the deep neural net systems that we’re working on right now. Even this intuitive idea that you’ve expressed of transfer learning, the fact is we see it in some cases and it works so well and is even commercially-valuable to us, but then we also see simple transfer learning tasks where these systems just seem to fail. So, even those things are kind of mysterious to us right now.
It seems—and I don’t have any evidence to support this, but it seems, at a gut level to me—that maybe what you’re describing isn’t pure transfer learning, but rather what you’re saying is, “We built a system that’s really good at translating languages, and it works on a lot of different languages.” 
It seems to me that the essence of transfer learning is when you take it to a different discipline, for example, “Because I learned a second language, I am now a better artist. Because I learned a second language, I’m now a better cook.” That, somehow, we take things that are in a discipline, and they add to this richness and depth and dimensionality of our knowledge in a way that they really impact our relationships.
I was chatting with somebody the other day who said that learning a second language was the most valuable thing he’d ever done, and that his personality in that second language is different than his English personality. I hear what you’re saying, and I think those are hits that point us in the right direction. But I wonder if, at its core, it’s really multidimensional, what humans do, and that’s why we can seemingly do the one-shot things, because we’re taking things that are absolutely unrelated to cartoon drawings of something relating to real life. Do you have even any kind of a gut reaction to that?
One thing, at least in our current understanding of the research fields, is that there is a difference between learning and reasoning. The example I like to go to is, we’ve done quite a bit of work on language understanding, and specifically in something called machine reading—where you want to be able to read text and then answer questions about the text. And a classic place where you look to test your machine reading capabilities is parts of the verbal part of the SAT exam. The nice thing about the SAT exam is you can try to answer the questions and you can measure the progress just through the score that you get on the test. That’s steadily improving, and not just here at Microsoft Research, but at quite a few great university research areas and centers.
Now, subject those same systems to, say, the third-grade California Achievement Test, and the intelligence systems just fall apart. If you look at what third graders are expected to be able to do, there is a level of commonsense reasoning that seems to be beyond what we try to do in our machine reading system. So, for example, one kind of question you’ll get on that third-grade achievement test is, maybe, four cartoon drawings: a ball sitting on the grass, some raindrops, an umbrella, and a puppy dog—and you have to know which pairs of things go together. Third-graders are expected to be able to make the right logical inferences from having the right life experiences, the right commonsense reasoning inferences to put those two pairs together, but we don’t actually have the AI systems that, reliably, are able to do that. That commonsense reasoning is something that seems to be—at least today, with the state of today’s scientific and technological knowledge—outside of the realm of machine learning. It’s not something that we think machine learning will ultimately be effective at.
That distinction is important to us, even commercially. I’m looking at an e-mail today that someone here at Microsoft sent me to get ready to talk to you today. The e-mail says, it’s right in front of me here, “Here is the briefing doc for tomorrow morning’s podcast. If you want to review it tonight, I’ll print it for you tomorrow.” Right now, the system has underlined, “want to review tonight,” and the reason it’s underlined that is it’s somehow made the logical commonsense inference that I might want a reminder on my calendar to review the briefing documents. But it’s remarkable that it’s managed to do that, because there are references to tomorrow morning as well as tonight. So, making those sorts of commonsense inferences, doing that reasoning, is still just incredibly hard, and really still requires a lot of craftsmanship by a lot of smart researchers to make real.
It’s interesting because you say, you had just one line in there that solving the third-grade problem isn’t a machine learning task, so how would we solve that? Or put another way, I often ask these Turing Test systems, “What’s bigger, a nickel or the sun?” and none of them have ever been able to answer it. Because “sun” is ambiguous, maybe, and “nickel” is ambiguous. 
In any case, if we don’t use machine learning for those, how do we get to the third grade? Or do we not even worry about the third grade? Because most of the problems we have in life aren’t third-grade problems, they’re 12th-grade problems that we really want the machines to be able to do. We want them to be able to translate documents, not match pictures of puppies. 
Well, for sure, if you just look at what companies like Microsoft, and the whole tech industry, are doing right now, we’re all seeing, I think, at least a decade, of incredible value to people in the world just with machine learning. There are just tremendous possibilities there, and so I think we are going to be very focused on machine learning and it’s going to matter a lot. It’s going to make people’s lives better, and it’s going to really provide a lot of commercial opportunities for companies like Microsoft. But that doesn’t mean that commonsense reasoning isn’t crucial, isn’t really important. Almost any kind of task that you might want help with—even simple things like making travel arrangements, shopping, or bigger issues like getting medical advice, advice about your own education—these things almost always involve some elements of what you would call commonsense reasoning, making inferences that somehow are not common, that are very particular and specific to you, and maybe haven’t been seen before in exactly that way.
Now, having said that, in the scientific community, in our research and amongst our researchers, there’s a lot of debate about how much of that kind of reasoning capability could be captured through machine learning, and how much of it could be captured simply by observing what people do for long enough and then just learning from it. But, for me at least, I see what is likely is that there’s a different kind of science that we’ll need to really develop much further if we want to capture that kind of commonsense reasoning.
Just to give you a sense of the debate, one thing that we’ve been doing—it’s been an experiment ongoing in China—is we have a new kind of chatbot technology in China that takes the form of a person named Xiaoice. Xiaoice is a persona that lives on social media in China, and actually has a large number of followers, tens of millions of followers.
Typically, when we think about chatbots and intelligent agents here in the US market—things like Cortana, or Siri, or Google Assistant, or Alexa—we put a lot of emphasis on semantic understanding; we really want the chatbot to understand what you’re saying at the semantic level. For Xiaoice, we ran a different experiment, and instead of trying to put in that level of semantic understanding, we instead looked at what people say on social media, and we used natural language processing to pick out statement response pairs, and templatize them, and put them in a large database. And so now, if you say something to Xiaoice in China, Xiaoice looks at what other people say in response to an utterance like that. Maybe it’ll come up with a hundred likely responses based on what other people have done, and then we use machine learning to rank order those likely responses, trying to optimize the enjoyment and engagement in the conversation, optimize the likelihood that the human being who is engaged in the conversation will stick with a conversation. Over time, Xiaoice has become extremely effective at doing that. In fact, for the top, say, twenty million people who interact with Xiaoice on a daily basis, the conversations are taking more than twenty-three turns.
What’s remarkable about that—and fuels the debate about what’s important in AI and what’s important in intelligence—is that at least the core of Xiaoice really doesn’t have any understanding at all about what you’re talking about. In a way, it’s just very intelligently mimicking what other people do in successful conversations. It raises the question, when we’re talking about machines and machines that at least appear to be intelligent, what’s really important? Is it really a purely mechanical, syntactic system, like we’re experimenting with Xiaoice, or is it something where we want to codify and encode our semantic understanding of the world and the way it works, the way we’re doing, say, with Cortana.
These are fundamental debates in AI. What’s sort of cool, at least in my day-to-day work here at Microsoft, is we are in a position where we’re able, and allowed, to do fundamental research in these things, but also build and deploy very large experiments just to see what happens and to try to learn from that. It’s pretty cool. At the same time, I can’t say that leaves me with clear answers yet. Not yet. It just leaves me with great experiences and we’re sharing what we’re learning with the world but it’s much, much harder to then say, definitively, what these things mean.
You know, it’s true. In 1950 Alan Turing said, “Can a machine think?” And that’s still a question that many can’t agree on because they don’t necessarily agree on the terms. But you’re right, that chatbot could pass the Turing Test, in theory. At twenty-three turns, if you didn’t tell somebody it was a chatbot, maybe it would pass it. 
But you’re right that that’s somehow unsatisfying that this is somehow this big milestone. Because if you saw it as a user in slow motion—that you ask a question, and then it did a query, and then it pulled back a hundred things and it rank ordered them, and looked for how many of those had successful follow-ups, and thumbs up, and smiley faces, and then it gave you one… It’s that whole thing about, once you know how the magic trick works, it isn’t nearly as interesting. 
It’s true. And with respect to achieving goals, or completing tasks in the world with the help of the Xiaoice chatbot, well, in some cases it’s pretty amazing how helpful Xiaoice is to people. If someone says, “I’m in the market for a new smartphone, I’m looking for a larger phablet, but I still want it to fit in my purse,” Xiaoice is amazingly effective at giving you a great answer to that question, because it’s something that a lot of people talk about when they’re shopping for a new phone.
At the same time, Xiaoice might not be so good at helping you decide which hotels to stay in, or helping you arrange your next vacation. It might provide some guidance, but maybe not exactly the right guidance that’s been well thought out. One more thing to say about this is, today—at least at the scale and practicality that we’re talking about—for the most part, we’re learning from data, and that data is essentially the digital exhaust from human thought and activity. There’s also another sense in which Xiaoice, while it passes the Turing Test, it’s also, in some ways, limited by human intelligence, because almost everything it’s able to do is observed and learned from what other people have done. We can’t discount the possibility of future systems which are less data dependent, and are able to just understand the structure of the world, and the problems, and learn from that.
Right. I guess Xiaoice wouldn’t know the difference, “What’s bigger, a nickel or the sun?”
That’s right, yes.
Unless the transcript of this very conversation were somehow part of the training set, but you notice, I’ve never answered it. I’ve never given the answer away, so, it still wouldn’t know. 
We should try the experiment at some point.
Why do you think we personify these AIs? You know about Weizenbaum and ELIZA and all of that, I assume. He got deeply disturbed when people were relating to ELIZA, knowing it was a chatbot. He got deeply concerned that people poured out their heart to it, and he said that when the machine says, “I understand,” it’s just a lie. That there’s no “I,” and there’s nothing that “understands” anything. Do you think that somehow confuses relationships with people and that there are unintended consequences to the personification of these technologies that we don’t necessarily know about yet?
I’m always internally scolding myself for falling into this tendency to anthropomorphize our machine learning and AI systems, but I’m not alone. Even the most hardened, grounded researcher and scientist does this. I think this is something that is really at the heart of what it means to be human. The fundamental fascination that we have and drive to propagate our species is surfaced as a fascination with building autonomous intelligent beings. It’s not just AI, but it goes back to the Frankenstein kinds of stories that have just come up in different guises, and different forms throughout, really, all of human history.
I think we just have a tremendous drive to build machines, or other objects and beings, that somehow capture and codify, and therefore promulgate, what it means to be human. And nothing defines that more for us than some sort of codification of human intelligence, and especially human intelligence that is able to be autonomous, make its own decisions, make its own choices moving forward. It’s just something that is so primal in all of us. Even in AI research, where we really try to train ourselves and be disciplined about not making too many unfounded connections to biological systems, we fall into the language of biological intelligence all the time. Even the four categories I mentioned at the outset of our conversation—perception, learning, reasoning, language—these are pretty biologically inspired words. I just think it’s a very deep part of human nature.
That could well be the case. I have a book coming out on AI in April of 2018 that talks about these questions, and there’s a whole chapter about how long we’ve been doing this. And you’re right, it goes back to the Greeks, and the eagle that allegedly plucked out Prometheus’ liver every day, in some accounts, was a robot. There’s just tons of them. The difference of course, now, is that, up until a few years ago, it was all fiction, and so these were just stories. And we don’t necessarily want to build everything that we can imagine in fiction. I still wrestle with it, that, somehow, we are going to convolute humans and machines in a way which might be to the detriment of humans, and not to the ennobling of the machine, but time will tell. 
Every technology, as we discussed earlier, is double-edged. Just to strike an optimistic note here—to your last comment, which is, I think, very important—I do think that this is an area where people are really thinking hard about the kinds of issues you just raised. I think that’s in contrast to what was happening in computer science and the tech industry even just a decade ago, where there’s more or less an ethos of, “Technology is good and more technology is better.” I think now there’s much more enlightenment about this. I think we can’t impede the progress of science and technology development, but what is so good and so important is that, at least as a society, we’re really trying to be thoughtful about both the potential for good, as well as the potential for bad that comes out of all of this. I think that gives us a much better chance that we’ll get more of the good.
I would agree. I think the only other parallel to this, where there’s been so much philosophical discussion about the implications of the technology, is the harnessing of the atom. If you read the contemporary literature written at the time, people were saying, “It could be energy too cheap to meter, or it could be weapons of colossal destruction, or it could be both.” There was a precedent there for a long and thoughtful discussion about the implications of the technology.
It’s funny you mentioned that because that reminds me of another favorite quote of mine which is from Albert Einstein, and I’m sure you’re familiar with it. “The difference between stupidity and genius is that genius has its limits.”
That’s good. 
And of course, he said that at the same time that a lot of this was developing. It was a pithy way to tell the scientific community, and the world, that we need to be thoughtful and careful. And I think we’re doing that today. I think that’s emerging very much so in the field of AI.
There’s a lot of practical concern about the effect of automation on employment, and these technologies on the planet. Do you have an opinion on how that’s all going to unfold? 
Well, for sure, I think it’s very likely that there are going to be massive disruptions in how the world works. I mentioned the printing press, the Gutenberg press, movable type; there was incredible disruption there. When you have nine doublings in the spread of books and printing presses in the period of fifty years, that’s a real medieval Moore’s Law. And if you think about the disruptive effect of that, by the early 1500s, the whole notion of what it meant to educate your children suddenly involved making sure that they could read and write. That’s a skill that takes a lot of expense and years of formal training, and it had this sort of disruptive impact. So, while the overall impact on the world and society was hugely positive—really the printing press laid the foundation for the Age of Enlightenment and the Renaissance—it had an absolutely disruptive effect on what it meant and what it took for people to succeed in the world.
AI, I’m pretty sure, is going to have the same kind of disruptive effect, because it has the same sort of democratizing force that the spread of books has had. And so, for us, we’ve been trying very hard to keep the focus on, “What can we do to put AI in the hands of people, that really empowers them, and augments what they’re able to do? What are the codifications of AI technologies that enable people to be more successful in whatever they’re pursuing in life?” And that focus, that intent by our research labs and by our company, I think, is incredibly important, because it takes a lot of the inventive and innovative genius that we have access to, and tries to point it in the right direction.
Talk to me about some of the interesting work you’re doing right now. Start with the healthcare stuff, what can you tell us about that?
Healthcare is just incredibly interesting. I think there are maybe three areas that just really get me excited. One is just fundamental life sciences, where we’re seeing some amazing opportunities and insights being unlocked through the use of machine learning and large-scale data analytics—the data that’s being produced increasingly cheaply through, say, gene sequencing, and through our ability to measure signals in the brain. What’s interesting about these things is that, over and over again, in other areas, if you put great innovative research minds and machine learning experts together with data and computing infrastructure, you get this burst of unplanned and unexpected innovations. Right now, in healthcare, we’re just getting to the point where we’re able to arrange the world in such a way that we’re able to get really interesting health data into the hands of these innovators, and genomics is one area that’s super interesting there.
Then, there is the basic question of, “What happens in the day-to-day lives of doctors and nurses?” Today, doctors are spending an average—there are several recent studies about this—of one hundred and eight minutes a day just entering health data into electronic health record systems. This is an incredible burden on those doctors, though it’s very important because it’s managed to digitize people’s health histories. But we’re now seeing an amazing ability for intelligent machines to just watch and listen to the conversation that goes on between the doctor and the patient, and to dramatically reduce the burden of all of that record keeping on doctors. So, doctors can stop being clerks and record keepers, and instead actually start to engage more personally with their patients.
And then the third area which I’m very excited about, but maybe is a little more geeky, is determining how we can create a system, how can we create a cloud, where more data is open to more innovators, where great researchers at universities, great innovators at startups who really want to make a difference in health, can provide a platform and a cloud where we can supply them with access to lots of valuable data, so they can innovate, they can create models that do amazing things.
Those three things just all really get me excited because the combination of these things I think can really make the lives of doctors, and nurses, and other clinicians better; can really lead to new diagnostics and therapeutic technologies, and unleash the potential of great minds and innovators. Stepping back for a minute, it really just amounts to creating systems that allow innovators, data, and computing infrastructure to all come together in one place, and then just having the faith that when you do that, great things will happen. Healthcare is just a huge opportunity area for doing this, that I’ve just become really passionate about.
I guess we will reach a point where you can have essentially the very best doctor in the world in your smartphone, and the very best psychologist, and the very best physical therapist, and the very best everything, right? All available at essentially no cost. I guess the internet always provided, at some abstract level, all of that information if you had an infinite amount of time and patience to find it. And the promise of AI, the kinds of things you’re doing, is that it kind of bridges that gap: that difference, what did you say, between learning and reasoning. So, paint me a picture of what you think, just in the healthcare arena, the world of tomorrow will look like. What’s the thing that gets you excited?
I don’t actually see healthcare ever getting away from being an essentially human-to-human activity. That’s something very important. In fact, I predict that healthcare will still be largely a local activity where it’s something that you will fundamentally access from another person in your locality. There are lots of reasons for this, but there’s something so personal about healthcare that it ends up being based in relationships. I see AI in the future relieving senseless and mundane burden from the heroes in healthcare—the doctors, and nurses, and administrators, and so on—that provide that personal service.
So, for example, we’ve been experimenting with a number of healthcare organizations with our chatbot technology. That chatbot technology is able to answer—on demand, through a conversation with a patient—routine and mundane questions about some health issue that comes up. It can do a, kind of, mundane textbook triage, and then, once all that is done, make an intelligent connection to a local healthcare provider, summarize very efficiently for the healthcare provider what’s going on, and then really allow the full creative potential and attention of the healthcare provider to be put to good use.
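A highly simplified sketch of that flow, in Python, might look like the following; the question bank, the red-flag list, and the hand-off step are hypothetical placeholders for illustration, not the chatbot actually being piloted:

```python
# Toy health-chatbot triage flow: answer routine questions, escalate anything urgent,
# and hand the clinician a short summary. Everything here is a simplified placeholder.
ROUTINE_ANSWERS = {
    "flu shot": "Most adults can get a flu shot yearly; your local clinic can schedule one.",
    "ibuprofen dose": "Typical adult dosing is on the label; check with a pharmacist if unsure.",
}
RED_FLAGS = ["chest pain", "shortness of breath", "severe bleeding"]

def hand_off_to_provider(conversation):
    summary = " | ".join(conversation[-3:])   # stand-in for a real summarization step
    return f"Connecting you with a local provider. Summary sent ahead: {summary}"

def triage(message, conversation):
    conversation.append(message)
    text = message.lower()
    # 1) Anything urgent goes straight to a human, with a summary of the conversation so far.
    if any(flag in text for flag in RED_FLAGS):
        return hand_off_to_provider(conversation)
    # 2) Routine, textbook questions get an immediate answer, on demand.
    for topic, answer in ROUTINE_ANSWERS.items():
        if topic in text:
            return answer
    # 3) Everything else is also routed to a local provider rather than guessed at.
    return hand_off_to_provider(conversation)

history = []
print(triage("What is the usual ibuprofen dose?", history))
print(triage("I also have chest pain when I climb stairs", history))
```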
Another thing that we’ll be showing off to the world at a major radiology conference next week is the use of computer vision and machine learning to learn the habits and tricks of the trade of radiologists who are doing radiation therapy planning. Right now, radiation therapy planning involves, kind of, pixel-by-pixel clicking on radiological images that is extremely important; it has to be done precisely, but it also has some artistry. Every good radiologist has his or her different kinds of approaches to this. So, one nice thing about machine learning-based computer vision today is that you can actually observe and learn what radiologists do, their practices, and then dramatically accelerate and relieve a lot of the mundane efforts, so that instead of two hours of work that is largely mundane with only maybe fifteen minutes of that being very creative, we can automate the noncreative aspects of this, and allow the radiologists to devote that full fifteen minutes, or even half an hour, to really thinking through the creative aspects of radiology. So, it’s more of an empowerment model rather than replacing those healthcare workers. It still relies on human intuition; it still relies on human creativity, but hopefully allows more of that intuition, and more of that creativity, to be harnessed by taking away some of the mundane and time-consuming aspects of things.
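In outline, that empowerment loop (propose contours automatically, let the radiologist correct only the parts that need judgment, and keep the corrections as new training data) could be sketched roughly as follows; the thresholding "model" and the editing step are toy stand-ins, not the system being shown at the conference:

```python
# Sketch of "propose, review, learn" for radiation therapy planning (placeholders throughout).
import numpy as np

def propose_contour(scan):
    """Stand-in for a trained segmentation model that imitates radiologists' contouring."""
    return (scan > scan.mean()).astype(np.uint8)      # trivially thresholded mask as a dummy prediction

def radiologist_review(scan, proposed):
    """Stand-in for the creative, expert part: the clinician edits only where judgment is needed."""
    edited = proposed.copy()
    edited[:5, :] = 0                                  # pretend the clinician trimmed one region
    return edited

training_pairs = []                                    # corrections flow back as new supervision
for _ in range(3):                                     # three dummy cases
    scan = np.random.rand(128, 128)
    proposed = propose_contour(scan)
    final = radiologist_review(scan, proposed)
    training_pairs.append((scan, final))               # (image, expert-approved mask) for future fine-tuning

fraction_edited = np.mean([np.mean(propose_contour(s) != m) for s, m in training_pairs])
print(f"average fraction of pixels the clinician changed: {fraction_edited:.3f}")
```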
These are approaches that I view as very human-focused, very humane ways to, not just make healthcare workers more productive, but to make them happier and more satisfied in what they do every day. Unlocking that with AI is just something that I feel is incredibly important. And it’s not just us here at Microsoft that are thinking this way, I’m seeing some really enlightened work going on, especially with some of our academic collaborators in this way. I find it truly inspiring to see what might be possible. Basically, I’m pushing back on the idea that we’ll be able to replace doctors, replace nurses. I don’t think that’s the world that we want, and I don’t even know that that’s the right idea. I don’t think that that necessarily leads to better healthcare.
To be clear, I’m talking about the great, immense parts of the world where there aren’t enough doctors for people, where there is this vast shortage of medical professionals, to somehow fill that gap, surely the technology can do that.
Yes. I think access is great. Even with some of the health chatbot pilot deployments that we’ve been experimenting with right now, you can just see that potential. If people are living in parts of the world where they have access issues, it’s an amazing and empowering thing to be able to just send a message to a chatbot that’s always available and ready to listen, and answer questions. Those sorts of things, for sure, can make a big difference. At the same time, the real payoff is when technologies like that then enable healthcare workers—really great doctors, really great clinicians—to clear enough off their plates that their creative potential becomes available to more people; and so, you win on both ends. You win on instant access through automation, but you also have the potential to win by expanding and enhancing the throughput and the number of patients that the clinics and clinicians can deal with. It’s a win-win situation in that respect.
Well said and I agree. It sounds like overall you are bullish on the future, you’re optimistic about the future and you think this technology overall is a force for great good, or am I just projecting that on to you? 
I’d say we think a lot about this. I would say, in my own career, I’ve had to confront both the good and bad outcomes, both the positive and unintended consequences of technology. I remember when I was back at DARPA—I arrived at DARPA in 2009—and in the summer of 2009, there was an election in Iran where the people in Iran felt that the results were not valid. This sparked what has been called the Iranian Twitter revolution. And what was interesting about the Iranian Twitter revolution is that people were using social media, Friendster and Twitter, in order to protest the results of this election and to organize protests.
This came to my attention at DARPA, through the State Department, because it became apparent that US-developed technologies to detect cyber intrusions and to help protect corporate networks were being used by the Iranian regime to hunt down and prosecute people who were using social media to organize these protests. The US took very quick steps to stop the sale of these technologies. But the thing that’s important is that these technologies, I’m pretty sure, were developed with only the best of intentions in mind—to help make computer networks safer. So, the idea that these technologies could be used to suppress free speech and freedom of assembly was, I’m sure never contemplated.
This really, kind of, highlights the double-edged nature of technology. So, for sure, we try to bring that thoughtfulness into every single research project we have across Microsoft Research, and that motivates our participation in things like the Partnership on AI that involves a large number of industry and academic players, because we always want to have the technology industry and the research world be more and more thoughtful and enlightened on these ideas. So, yes, we’re optimistic. I’m optimistic certainly about the future, but that optimism, I think, is founded on a good dose of reality that if we don’t actually take proactive steps to be enlightened, on both the good and bad possibilities, good and bad outcomes, then the good things don’t just happen on their own automatically. So, it’s something that we work at, I guess, is the bottom line for what I’m trying to say. It’s earned optimism.
I like that. “Earned optimism,” I like that. It looks like we are out of time. I want to thank you for an hour of fascinating conversation about all of these topics. 
It was really fascinating, and you’ve asked some of the hardest questions of the day. It was a challenge, and tons of fun to noodle on them with you.
Like, “What is bigger, the sun or a nickel?” Turns out that’s a very hard question.
I’m going to ask Xiaoice that question and I’ll let you know what she says.
All right. Thank you again.
Thank you.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS
Voices in AI – Episode 26: A Conversation with Peter Lee
Today’s leading minds talk AI with host Byron Reese
In this episode, Byron and Peter talk about defining intelligence, Venn diagrams, transfer learning, image recognition, and Xiaoice.
Byron Reese:  This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Peter Lee. He is a computer scientist and corporate Vice President at Microsoft Research. He leads Microsoft’s New Experiences and Technologies organization, or NExT, with the mission to create research powered technology and products and advance human knowledge through research. Prior to Microsoft, Dr. Lee held positions in both government and academia. At DARPA, he founded a division focused on R&D programs in computing and related areas. Welcome to the show, Peter. 
Peter Lee:  Thank you. It’s great to be here.
I always like to start with a seemingly simple question which turns out not to be quite so simple. What is artificial intelligence?
Wow. That is not a simple question at all. I guess the simple, one line answer is artificial intelligence is the science or the study of intelligent machines. And, I realize that definition is pretty circular, and I am guessing that you understand that that’s the fundamental difficulty, because it leaves open the question: what is intelligence? I think people have a lot of different ways to think about what is intelligence, but, in our world, intelligence is, “how do we compute how to set and achieve goals in the world.” And this is fundamentally what we’re all after, right now in AI.
That’s really fascinating because you’re right, there is no consensus definition on intelligence, or on life, or on death for that matter. So, I would ask that question: why do you think we have such a hard time defining what intelligence is?
I think we only have one model of intelligence, which is our own, and so when you think about trying to define intelligence it really comes down to a question of defining who we are. There’s fundamental discomfort with that. That fundamental circularity is difficult. If we were able to fly off in some starship to a far-off place, and find a different form of intelligence—or different species that we would recognize as intelligent—maybe we would have a chance to dispassionately study that, and come to some conclusions. But it’s hard when you’re looking at something so introspective.
When you get into computer science research, at least here at Microsoft Research, you do have to find ways to focus on specific problems; so, we ended up focusing our research in AI—and our tech development in AI, roughly speaking—in four broad categories, and I think these categories are a little bit easier to grapple with. One is perception—that’s endowing machines with the ability to see and hear, much like we do. The second category is learning—how to get machines to get better with experience? The third is reasoning—how do you make inferences, logical inferences, commonsense inferences about the world? And then the fourth is language—how do we get machines to be intelligent in interacting with each other and with us through language? Those four buckets—perception, learning, reasoning and language—they don’t define what is intelligence, but they at least give us some kind of clear set of goals and directions to go after.
Well, I’m not going to spend too much time down in those weeds, but I think it’s really interesting. In what sense do you think it’s artificial? Because it’s either artificial in that it’s just mechanical—or that’s just a shorthand we use for that—or it’s artificial in that it’s not really intelligence. You’re using words like “see,” “hear,” and “reason.” Are you using those words euphemistically—can a computer really see or hear anything, or can it reason—or are you using them literally?
The question you’re asking really gets to the nub of things, because we really don’t know. If you were to draw the Venn diagram; you’d have a big circle and call that intelligence, and now you want to draw a circle for artificial intelligence—we don’t know if that circle is the same as the intelligence circle, whether it’s separate but overlapping, whether it’s a subset of intelligence… These are really basic questions that we debate, and people have different intuitions about, but we don’t really know. And then we get to what’s actually happening—what gets us excited and what is actually making it out into the real world, doing real things—and for the most part that has been a tiny subset of these big ideas; just focusing on machine learning, on learning from large amounts of data, models that are actually able to do some useful task, like recognize images.
Right. And I definitely want to go deep into that in just a minute, but I’m curious… So, there’s a wide range of views about AI. Should we fear it? Should we love it? Will it take us into a new golden age? Will it do this? Will it cap out? Is an AGI possible? All of these questions. 
And, I mean, if you ask, “How will we get to Mars?” Well, we don’t know exactly, but we kind of know. But if you ask, “What’s AI going to be like in fifty years?” it’s all over the map. And do you think that is because there isn’t agreement on the kinds of questions I’m asking—like people have different ideas on those questions—or are the questions I’m asking not really even germane to the day-to-day “get up and start building something”? 
I think there’s a lot of debate about this because the question is so important. Every technology is double-edged. Every technology has the ability to be used for both good purposes and for bad purposes, has good consequences and unintended consequences. And what’s interesting about computing technologies, generally, but especially with a powerful concept like artificial intelligence, is that in contrast to other powerful technologies—let’s say in the biological sciences, or in nuclear engineering, or in transportation and so on—AI has the potential to be highly democratized, to be codified into tools and technologies that literally every person on the planet can have access to. So, the question becomes really important: what kind of outcomes, what kinds of possibilities happen for this world when literally every person on the planet can have the power of intelligent machines at their fingertips? And because of that, all of the questions you’re asking become extremely large, and extremely important for us. People care about those futures, but ultimately, right now, our state of scientific knowledge is we don’t really know.
I sometimes talk in analogy about way, way back in the medieval times when Gutenberg invented mass-produced movable type, and the first printing press. And in a period of just fifty years, they went from thirty thousand books in all of Europe, to almost thirteen million books in all of Europe. It was sort of the first technological Moore’s Law. The spread of knowledge that that represented, did amazing things for humanity. It really democratized access to books, and therefore to a form of knowledge, but it was also incredibly disruptive in its time and has been since.
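[For scale: growing from thirty thousand books to roughly thirteen million is a factor of about 433, and since 2^8.8 ≈ 433, that works out to just under nine doublings over those fifty years, or roughly one doubling every five to six years.]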
In a way, the potential we see with AI is very similar, and maybe even a bigger inflection point for humanity. So, while I can’t pretend to have any hard answers to the basic questions that you’re asking about the limits of AI and the nature of intelligence, it’s for sure important; and I think it’s a good thing that people are asking these questions and they’re thinking hard about it.
Well, I’m just going to ask you one more and then I want to get more down in the nitty-gritty. 
If the only intelligent thing we know of in the universe, the only general intelligence, is our brain, do you think it’s a settled question that that functionality can be reproduced mechanically? 
I think there is no evidence to the contrary. Every way that we look at what we do in our brains, we see mechanical systems. So, in principle, if we have enough understanding of how our own mechanical system of the brain works, then we should be able to, at a minimum, reproduce that. Now, of course, the way that technology develops, we tend to build things in different ways, and so I think it’s very likely that the kind of intelligent machines that we end up building will be different than our own intelligence. But there’s no evidence, at least so far, that would be contrary to the thesis that we can reproduce intelligence mechanically.
So, to say to take the opposite position for a moment. Somebody could say there’s absolutely no evidence to suggest that we can, for the following reasons. One, we don’t know how the brain works. We don’t know how thoughts are encoded. We don’t know how thoughts are retrieved. Aside from that, we don’t know how the mind works. We don’t know how it is that we have capabilities that seem to be beyond what a hunk of grey matter could do—we’re creative, we have a sense of humor and all these other things. We’re conscious, and we don’t even have a scientific language for understanding how consciousness could come about. We don’t even know how to ask that question or look for that answer, scientifically. So, somebody else might look at it and say, “There’s no reason whatsoever to believe we can reproduce it mechanically.” 
I’m going to use a quote here from, of all people, a non-technologist Samuel Goldwyn, the old movie magnate. And I always reach to this when I get put in a corner like you’re doing to me right now, which is, “It’s absolutely impossible, but it has possibilities.”
All right.
Our current understanding is that brains are fundamentally closed systems, and so we’re learning more and more, and in fact what we learn is loosely inspiring some of the things we’re doing in AI systems, and making progress. How far that goes? It’s really, as you say, it’s unclear because there are so many mysteries, but it sure looks like there are a lot of possibilities.
Now to get kind of down to the nitty-gritty, let’s talk about difficulties and where we’re being successful and where we’re not. My first question is, why do you think AI is so hard? Because humans acquire their intelligence seemingly simply, right? You put a little kid in playschool and you show them some red, and you show them the number three, and then, all of a sudden, they understand what three red things are. I mean, we, kind of, become intelligent so naturally, and yet my frequent flyer program that I call in can’t tell, when I’m telling it my number if I said 8 or H. Why do you think it’s so hard?
What you said is true, although it took you many years to reach that point. And even a child that’s able to do the kinds of things that you just expressed has had years of life. The kinds of expectations that we have, at least today—especially in the commercial sphere for our intelligent machines—sometimes there’s a little bit less patience. But having said that, I think what you’re saying is right.
I mentioned before this Venn diagram; so, there’s this big circle which is intelligence, and let’s just assume that there is some large subset of that which is artificial intelligence. Then you zoom way, way in, and a tiny little bubble inside that AI bubble is machine learning—this is just simply machines that get better with experience. And then a tiny bubble inside that tiny bubble is machine learning from data—where the models that are extracted, that codify what has been learned, are all extracted from analyzing large amounts of data. That’s really where we’re at today—in this tiny bubble, inside this tiny bubble, inside this big bubble we call artificial intelligence.
What is remarkable is that, despite how narrow our understanding is—for the most part all of the exciting progress is just inside this little, tiny, narrow idea of machine learning from data, and there’s even a smaller bubble inside that that’s called a supervised manner—even from that we’re seeing tremendous power, a tremendous ability to create new computing systems that do some pretty impressive and valuable things. It is pretty crazy just how valuable that’s become to companies, like Microsoft. At the same time, it is such a narrow little slice of what we understand of intelligence.
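To make that innermost bubble concrete: supervised machine learning from data can be shown in a few lines, where labeled examples go in and a predictive model comes out. The synthetic dataset and the particular model below are purely illustrative.

```python
# Supervised learning from data, in miniature: labeled examples in, a predictive model out.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic labeled data stands in for real examples (images, utterances, records, ...).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)   # "learning" = fitting to labeled data
print("held-out accuracy:", model.score(X_test, y_test))          # better with experience, on this one task
```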
The simple examples that you mentioned, for example, like one-shot learning, where you can show a small child a cartoon picture of a fire truck, and even if that child has never seen a fire truck before in her life, you can take her out on the street, and the first real fire truck that goes down the road the child will instantly recognize as a fire truck. That sort of one-shot idea, you’re right, our current systems aren’t good at.
While we are so excited about how much progress we’re making on learning from data, there are all the other things that are wrapped up in intelligence that are still pretty mysterious to us, and pretty limited. Sometimes, when that matters, our limits get in the way, and it creates this idea that AI is actually still really hard.
You’re talking about transfer learning. Would you say that the reason she can do that is because at another time she saw a drawing of a banana, and then a banana? And another time she saw a drawing of a cat, and then a cat. And so, it wasn’t really a one-shot deal. 
How do you think transfer learning works in humans? Because that seems to be what we’re super good at. We can take something that we learned in one place and transfer that knowledge to another context. You know, “Find, in this picture, the Statue of Liberty covered in peanut butter,” and I can pick that out having never seen a Statue of Liberty in peanut butter, or anything like that. 
Do you think that’s a simple trick we don’t understand how to do yet? Is that what you want it to be, like an “a-ha” moment, where you discover the basic idea. Or do you think it’s a hundred tiny little hacks, and transfer learning in our minds is just, like, some spaghetti code written by some drunken programmer who was on a deadline, right? What do you think that is? Is it a simple thing, or is it a really convoluted, complicated thing? 
Transfer learning turns out to be incredibly interesting, scientifically, and also commercially for Microsoft, turns out to be something that we rely on in our business. What is kind of interesting is, when is transfer learning more generally applicable, versus being very brittle?
For example, in our speech processing systems, the actual commercial speech processing systems that Microsoft provides, we use transfer learning, routinely. When we train our speech systems to understand English speech, and then we train those same systems to understand Portuguese, or Mandarin, or Italian, we get a transfer learning effect, where the training for that second, and third, and fourth language requires less data and less computing power. And at the same time, each subsequent language that we add onto it improves the earlier languages. So, training that English-based system to understand Portuguese actually improves the performance of our speech systems in English, so there are transfer learning effects there.
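A minimal sketch of that kind of cross-lingual transfer, assuming a generic PyTorch acoustic model rather than Microsoft's production speech stack: start from the network trained on English, keep the shared encoder, swap in an output layer for the new language's symbol set, and fine-tune on a much smaller dataset. All names, sizes, and the dummy data are illustrative assumptions.

```python
# Illustrative cross-lingual transfer: warm-start from an English-trained acoustic model,
# then fine-tune on a (much smaller) Portuguese dataset. All names and shapes are hypothetical.
import torch
import torch.nn as nn

class AcousticModel(nn.Module):
    def __init__(self, n_symbols):
        super().__init__()
        self.encoder = nn.GRU(input_size=80, hidden_size=256, num_layers=3, batch_first=True)
        self.head = nn.Linear(256, n_symbols)          # language-specific output layer

    def forward(self, features):
        hidden, _ = self.encoder(features)
        return self.head(hidden)

# 1) Start from the model trained on lots of English data.
model = AcousticModel(n_symbols=40)
# In practice you would load the English-trained weights here, e.g.:
# model.load_state_dict(torch.load("english_acoustic_model.pt"))   # hypothetical checkpoint

# 2) Reuse the shared encoder; replace only the output layer for the new language's symbols.
model.head = nn.Linear(256, 45)

# 3) Fine-tune with a small learning rate; dummy tensors stand in for real features and labels.
features = torch.randn(8, 100, 80)                     # 8 utterances, 100 frames, 80 filterbank features
targets = torch.randint(0, 45, (8, 100))               # frame-level symbol labels for the new language
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

logits = model(features)
loss = loss_fn(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
loss.backward()
optimizer.step()
```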
In our image recognition tasks, there is something called the ImageNet competition that we participate in most years, and the last time that we competed was two years ago in 2015. There are five image processing categories. We trained our system to do well on Category 1—on the basic image classification—then we used transfer learning to not only win the first category, but to win all four other ImageNet competitions. And so, without any further kind of specialized training, there was a transfer learning effect.
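The image-side version of the same idea is the standard pretrained-backbone recipe: reuse a network trained on ImageNet classification as a frozen feature extractor and train only a new head for the next task. This generic sketch is illustrative and is not the competition system itself; the five-class head is an arbitrary choice.

```python
# Reuse an ImageNet-pretrained backbone for a new task (illustrative sketch).
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(pretrained=True)   # newer torchvision prefers models.resnet50(weights="IMAGENET1K_V1")

# Freeze the pretrained feature extractor so only the new head is trained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final classification layer with one sized for the new task.
num_new_classes = 5                            # arbitrary, for illustration
backbone.fc = nn.Linear(backbone.fc.in_features, num_new_classes)

optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch, standing in for the new task's data.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, num_new_classes, (4,))
loss = loss_fn(backbone(images), labels)
loss.backward()
optimizer.step()
```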
Transfer learning actually does seem to happen. In our deep neural net, deep learning research activities, transfer learning effects—when we see them—are just really intoxicating. It makes you think about what you and I do as human beings.
At the same time, it seems to be this brittle thing. We don’t necessarily understand when and how this transfer learning effect is effective. The early evidence from studying these things is that there are different forms of learning, and that somehow the one-shot ideas that even small children are very good at, seem to be out of the purview of the deep neural net systems that we’re working on right now. Even this intuitive idea that you’ve expressed of transfer learning, the fact is we see it in some cases and it works so well and is even commercially-valuable to us, but then we also see simple transfer learning tasks where these systems just seem to fail. So, even those things are kind of mysterious to us right now.
It seems—and I don’t have any evidence to support this, but it seems, at a gut level to me—that maybe what you’re describing isn’t pure transfer learning, but rather what you’re saying is, “We built a system that’s really good at translating languages, and it works on a lot of different languages.” 
It seems to me that the essence of transfer learning is when you take it to a different discipline, for example, “Because I learned a second language, I am now a better artist. Because I learned a second language, I’m now a better cook.” That, somehow, we take things that are in a discipline, and they add to this richness and depth and indimensionality of our knowledge in a way that they really impact our relationships. 
I was chatting with somebody the other day who said that learning a second language was the most valuable thing he’d ever done, and that his personality in that second language is different than his English personality. I hear what you’re saying, and I think those are hits that point us in the right direction. But I wonder if, at its core, it’s really multidimensional, what humans do, and that’s why we can seemingly do the one-shot things, because we’re taking things that are absolutely unrelated to cartoon drawings of something relating to real life. Do you have even any kind of a gut reaction to that?
One thing, at least in our current understanding of the research fields, is that there is a difference between learning and reasoning. The example I like to go to is, we’ve done quite a bit of work on language understanding, and specifically in something called machine reading—where you want to be able to read text and then answer questions about the text. And a classic place where you look to test your machine reading capabilities is parts of the verbal part of the SAT exam. The nice thing about the SAT exam is you can try to answer the questions and you can measure the progress just through the score that you get on the test. That’s steadily improving, and not just here at Microsoft Research, but at quite a few great university research areas and centers.
Now, subject those same systems to, say, the third-grade California Achievement Test, and the intelligence systems just fall apart. If you look at what third graders are expected to be able to do, there is a level of commonsense reasoning that seems to be beyond what we try to do in our machine reading system. So, for example, one kind of question you’ll get on that third-grade achievement test is, maybe, four cartoon drawings: a ball sitting on the grass, some raindrops, an umbrella, and a puppy dog—and you have to know which pairs of things go together. Third-graders are expected to be able to make the right logical inferences from having the right life experiences, the right commonsense reasoning inferences to put those two pairs together, but we don’t actually have the AI systems that, reliably, are able to do that. That commonsense reasoning is something that seems to be—at least today, with the state of today’s scientific and technological knowledge—outside of the realm of machine learning. It’s not something that we think machine learning will ultimately be effective at.
That distinction is important to us, even commercially. I’m looking at an e-mail today that someone here at Microsoft sent me to get ready to talk to you today. The e-mail says, it’s right in front of me here, “Here is the briefing doc for tomorrow morning’s podcast. If you want to review it tonight, I’ll print it for you tomorrow.” Right now, the system has underlined, “want to review tonight,” and the reason it’s underlined that is it’s somehow made the logical commonsense inference that I might want a reminder on my calendar to review the briefing documents. But it’s remarkable that it’s managed to do that, because there are references to tomorrow morning as well as tonight. So, making those sorts of commonsense inferences, doing that reasoning, is still just incredibly hard, and really still requires a lot of craftsmanship by a lot of smart researchers to make real.
It’s interesting because you say, you had just one line in there that solving the third-grade problem isn’t a machine learning task, so how would we solve that? Or put another way, I often ask these Turing Test systems, “What’s bigger, a nickel or the sun?” and none of them have ever been able to answer it. Because “sun” is ambiguous, maybe, and “nickel” is ambiguous. 
In any case, if we don’t use machine learning for those, how do we get to the third grade? Or do we not even worry about the third grade? Because most of the problems we have in life aren’t third-grade problems, they’re 12th-grade problems that we really want the machines to be able to do. We want them to be able to translate documents, not match pictures of puppies. 
Well, for sure, if you just look at what companies like Microsoft, and the whole tech industry, are doing right now, we’re all seeing, I think, at least a decade, of incredible value to people in the world just with machine learning. There are just tremendous possibilities there, and so I think we are going to be very focused on machine learning and it’s going to matter a lot. It’s going to make people’s lives better, and it’s going to really provide a lot of commercial opportunities for companies like Microsoft. But that doesn’t mean that commonsense reasoning isn’t crucial, isn’t really important. Almost any kind of task that you might want help with—even simple things like making travel arrangements, shopping, or bigger issues like getting medical advice, advice about your own education—these things almost always involve some elements of what you would call commonsense reasoning, making inferences that somehow are not common, that are very particular and specific to you, and maybe haven’t been seen before in exactly that way.
Now, having said that, in the scientific community, in our research and amongst our researchers, there’s a lot of debate about how much of that kind of reasoning capability could be captured through machine learning, and how much of it could be captured simply by observing what people do for long enough and then just learning from it. But, for me at least, I see what is likely is that there’s a different kind of science that we’ll need to really develop much further if we want to capture that kind of commonsense reasoning.
Just to give you a sense of the debate, one thing that we’ve been doing—it’s been an experiment ongoing in China—is we have a new kind of chatbot technology in China that takes the form of a person named Xiaoice. Xiaoice is a persona that lives on social media in China, and actually has a large number of followers, tens of millions of followers.
Typically, when we think about chatbots and intelligent agents here in the US market—things like Cortana, or Siri, or Google Assistant, or Alexa—we put a lot of emphasis on semantic understanding; we really want the chatbot to understand what you’re saying at the semantic level. For Xiaoice, we ran a different experiment, and instead of trying to put in that level of semantic understanding, we instead looked at what people say on social media, and we used natural language processing to pick out statement-response pairs, templatize them, and put them in a large database. And so now, if you say something to Xiaoice in China, Xiaoice looks at what other people say in response to an utterance like that. Maybe it’ll come up with a hundred likely responses based on what other people have done, and then we use machine learning to rank-order those likely responses, trying to optimize the enjoyment and engagement in the conversation, optimize the likelihood that the human being who is engaged in the conversation will stick with the conversation. Over time, Xiaoice has become extremely effective at doing that. In fact, for the top, say, twenty million people who interact with Xiaoice on a daily basis, the conversations are taking more than twenty-three turns.
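As a rough sketch of that retrieve-then-rank idea (pull candidate responses from a corpus of mined statement-response pairs, then let a learned ranker order them by expected engagement), here is a minimal illustration in Python. The tiny corpus and the hand-written "engagement score" are stand-ins for the mined social-media data and the trained ranker; none of this is Xiaoice's actual pipeline.

```python
# Minimal retrieve-and-rank chatbot sketch (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Toy statement -> response pairs, standing in for pairs mined from conversation logs.
pairs = [
    ("I need a new phone with a big screen", "Have you looked at large phablets? Most still fit in a small bag."),
    ("I need a new phone that takes good photos", "People who care about photos often pick a flagship camera phone."),
    ("I'm bored tonight", "Want a movie suggestion?"),
    ("I'm tired of my old laptop", "What do you mostly use it for?"),
]
statements = [s for s, _ in pairs]

# Retrieval step: index the statement side with TF-IDF and find similar past statements.
vectorizer = TfidfVectorizer().fit(statements)
index = NearestNeighbors(n_neighbors=3, metric="cosine").fit(vectorizer.transform(statements))

def engagement_score(user_utterance, candidate_response):
    """Stand-in for a learned ranker that predicts how likely the user is to keep chatting.
    A real system would train this on logged conversations; here we just reward word overlap."""
    overlap = len(set(user_utterance.lower().split()) & set(candidate_response.lower().split()))
    return overlap + 0.01 * len(candidate_response)

def reply(user_utterance):
    # Step 1: retrieve candidate responses from similar past statements.
    _, idx = index.kneighbors(vectorizer.transform([user_utterance]))
    candidates = [pairs[i][1] for i in idx[0]]
    # Step 2: rank the candidates by predicted engagement and return the best one.
    return max(candidates, key=lambda c: engagement_score(user_utterance, c))

print(reply("I'm in the market for a new smartphone, something with a big screen"))
```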
What’s remarkable about that—and fuels the debate about what’s important in AI and what’s important in intelligence—is that at least the core of Xiaoice really doesn’t have any understanding at all about what you’re talking about. In a way, it’s just very intelligently mimicking what other people do in successful conversations. It raises the question, when we’re talking about machines and machines that at least appear to be intelligent, what’s really important? Is it really a purely mechanical, syntactic system, like what we’re experimenting with in Xiaoice, or is it something where we want to codify and encode our semantic understanding of the world and the way it works, the way we’re doing, say, with Cortana?
These are fundamental debates in AI. What’s sort of cool, at least in my day-to-day work here at Microsoft, is we are in a position where we’re able, and allowed, to do fundamental research in these things, but also build and deploy very large experiments just to see what happens and to try to learn from that. It’s pretty cool. At the same time, I can’t say that leaves me with clear answers yet. Not yet. It just leaves me with great experiences and we’re sharing what we’re learning with the world but it’s much, much harder to then say, definitively, what these things mean.
You know, it’s true. In 1950 Alan Turing said, “Can a machine think?” And that’s still a question that many can’t agree on because they don’t necessarily agree on the terms. But you’re right, that chatbot could pass the Turing Test, in theory. At twenty-three turns, if you didn’t tell somebody it was a chatbot, maybe it would pass it. 
But you’re right that that’s somehow unsatisfying that this is somehow this big milestone. Because if you saw it as a user in slow motion—that you ask a question, and then it did a query, and then it pulled back a hundred things and it rank ordered them, and looked for how many of those had successful follow-ups, and thumbs up, and smiley faces, and then it gave you one… It’s that whole thing about, once you know how the magic trick works, it isn’t nearly as interesting. 
It’s true. And with respect to achieving goals, or completing tasks in the world with the help of the Xiaoice chatbot, well, in some cases it’s pretty amazing how helpful Xiaoice is to people. If someone says, “I’m in the market for a new smartphone, I’m looking for a larger phablet, but I still want it to fit in my purse,” Xiaoice is amazingly effective at giving you a great answer to that question, because it’s something that a lot of people talk about when they’re shopping for a new phone.
At the same time, Xiaoice might not be so good at helping you decide which hotels to stay in, or helping you arrange your next vacation. It might provide some guidance, but maybe not exactly the right guidance that’s been well thought out. One more thing to say about this is, today—at least at the scale and practicality that we’re talking about—for the most part, we’re learning from data, and that data is essentially the digital exhaust from human thought and activity. There’s also another sense in which Xiaoice, while it passes the Turing Test, is also, in some ways, limited by human intelligence, because almost everything it’s able to do is observed and learned from what other people have done. We can’t discount the possibility of future systems which are less data-dependent, and are able to just understand the structure of the world, and the problems, and learn from that.
Right. I guess Xiaoice wouldn’t know the answer to “What’s bigger, a nickel or the sun?”
That’s right, yes.
Unless the transcript of this very conversation were somehow part of the training set, but you notice, I’ve never answered it. I’ve never given the answer away, so, it still wouldn’t know. 
We should try the experiment at some point.
Why do you think we personify these AIs? You know about Weizenbaum and ELIZA and all of that, I assume. He got deeply disturbed when people were relating to a lie, knowing it was a chatbot. He got deeply concerned that people poured out their heart to it, and he said that when the machine says, “I understand,” it’s just a lie. That there’s no “I,” and there’s nothing that “understands” anything. Do you think that somehow confuses relationships with people and that there are unintended consequences to the personification of these technologies that we don’t necessarily know about yet? 
I’m always internally scolding myself for falling into this tendency to anthropomorphize our machine learning and AI systems, but I’m not alone. Even the most hardened, grounded researcher and scientist does this. I think this is something that is really at the heart of what it means to be human. The fundamental fascination that we have and drive to propagate our species is surfaced as a fascination with building autonomous intelligent beings. It’s not just AI, but it goes back to the Frankenstein kinds of stories that have just come up in different guises, and different forms throughout, really, all of human history.
I think we just have a tremendous drive to build machines, or other objects and beings, that somehow capture and codify, and therefore promulgate, what it means to be human. And nothing defines that more for us than some sort of codification of human intelligence, and especially human intelligence that is able to be autonomous, make its own decisions, make its own choices moving forward. It’s just something that is so primal in all of us. Even in AI research, where we really try to train ourselves and be disciplined about not making too many unfounded connections to biological systems, we fall into the language of biological intelligence all the time. Even the four categories I mentioned at the outset of our conversation—perception, learning, reasoning, language—these are pretty biologically inspired words. I just think it’s a very deep part of human nature.
That could well be the case. I have a book coming out on AI in April of 2018 that talks about these questions, and there’s a whole chapter about how long we’ve been doing this. And you’re right, it goes back to the Greeks, and the eagle that allegedly plucked out Prometheus’ liver every day, in some accounts, was a robot. There’s just tons of them. The difference of course, now, is that, up until a few years ago, it was all fiction, and so these were just stories. And we don’t necessarily want to build everything that we can imagine in fiction. I still wrestle with it, that, somehow, we are going to convolute humans and machines in a way which might be to the detriment of humans, and not to the ennobling of the machine, but time will tell. 
Every technology, as we discussed earlier, is double-edged. Just to strike an optimistic note here—to your last comment, which is, I think, very important—I do think that this is an area where people are really thinking hard about the kinds of issues you just raised. I think that’s in contrast to what was happening in computer science and the tech industry even just a decade ago, where there’s more or less an ethos of, “Technology is good and more technology is better.” I think now there’s much more enlightenment about this. I think we can’t impede the progress of science and technology development, but what is so good and so important is that, at least as a society, we’re really trying to be thoughtful about both the potential for good, as well as the potential for bad that comes out of all of this. I think that gives us a much better chance that we’ll get more of the good.
I would agree. I think the only other corollary to this, where there’s been so much philosophical discussion about the implications of the technology, is the harnessing of the atom. If you read the contemporary literature written at the time, people were like, “It could be energy too cheap to meter, or it could be weapons of colossal destruction, or it could be both.” There was a precedent there for a long and thoughtful discussion about the implications of the technology. 
It’s funny you mentioned that because that reminds me of another favorite quote of mine which is from Albert Einstein, and I’m sure you’re familiar with it. “The difference between stupidity and genius is that genius has its limits.”
That’s good. 
And of course, he said that at the same time that a lot of this was developing. It was a pithy way to tell the scientific community, and the world, that we need to be thoughtful and careful. And I think we’re doing that today. I think that’s emerging very much so in the field of AI.
There’s a lot of practical concern about the effect of automation on employment, and these technologies on the planet. Do you have an opinion on how that’s all going to unfold? 
Well, for sure, I think it’s very likely that there’s going to be massive disruptions in how the world works. I mentioned the printing press, the Gutenberg press, movable type; there was incredible disruption there. When you have nine doublings in the spread of books and printing presses in the period of fifty years, that’s a real medieval Moore’s Law. And if you think about the disruptive effect of that, by the early 1500s, the whole notion of what it meant to educate your children suddenly involved making sure that they could read and write. That’s a skill that takes a lot of expense, and years of formal training and it has this sort of destructive impact. So, while the overall impact on the world and society was hugely positive—really the printing press laid the foundation for the Age of Enlightenment and the Renaissance—it had an absolutely disruptive effect on what it meant and what it took for people to succeed in the world.
AI, I’m pretty sure, is going to have the same kind of disruptive effect, because it has the same sort of democratizing force that the spread of books has had. And so, for us, we’ve been trying very hard to keep the focus on, “What can we do to put AI in the hands of people, that really empowers them, and augments what they’re able to do? What are the codifications of AI technologies that enable people to be more successful in whatever they’re pursuing in life?” And that focus, that intent by our research labs and by our company, I think, is incredibly important, because it takes a lot of the inventive and innovative genius that we have access to, and tries to point it in the right direction.
Talk to me about some of the interesting work you’re doing right now. Start with the healthcare stuff, what can you tell us about that?
Healthcare is just incredibly interesting. I think there are maybe three areas that just really get me excited. One is just fundamental life sciences, where we’re seeing some amazing opportunities and insights being unlocked through the use of machine learning, large-scale machine, and data analytics—the data that’s being produced increasingly cheaply through, say, gene sequencing, and through our ability to measure signals in the brain. What’s interesting about these things is that, over and over again, in other areas, if you put great innovative research minds and machine learning experts together with data and computing infrastructure, you get this burst of unplanned and unexpected innovations. Right now, in healthcare, we’re just getting to the point where we’re able to arrange the world in such a way that we’re able to get really interesting health data into the hands of these innovators, and genomics is one area that’s super interesting there.
Then, there is the basic question of, “What happens in the day-to-day lives of doctors and nurses?” Today, doctors are spending an average—there are several recent studies about this—of one hundred and eight minutes a day just entering health data into electronic health record systems. This is an incredible burden on those doctors, though it’s very important because it’s managed to digitize people’s health histories. But we’re now seeing an amazing ability for intelligent machines to just watch and listen to the conversation that goes on between the doctor and the patient, and to dramatically reduce the burden of all of that record keeping on doctors. So, doctors can stop being clerks and record keepers, and instead actually start to engage more personally with their patients.
And then the third area, which I’m very excited about but is maybe a little more geeky, is determining how we can create a system, how we can create a cloud, where more data is open to more innovators; where, for the great researchers at universities and the great innovators at startups who really want to make a difference in health, we can provide a platform and a cloud that supplies them with access to lots of valuable data, so they can innovate, they can create models that do amazing things.
Those three things just all really get me excited because the combination of these things I think can really make the lives of doctors, and nurses, and other clinicians better; can really lead to new diagnostics and therapeutic technologies, and unleash the potential of great minds and innovators. Stepping back for a minute, it really just amounts to creating systems that allow innovators, data, and computing infrastructure to all come together in one place, and then just having the faith that when you do that, great things will happen. Healthcare is just a huge opportunity area for doing this, that I’ve just become really passionate about.
I guess we will reach a point where you can have essentially the very best doctor in the world in your smartphone, and the very best psychologist, and the very best physical therapist, and the very best everything, right? All available at essentially no cost. I guess the internet always provided, at some abstract level, all of that information if you had an infinite amount of time and patience to find it. And the promise of AI, the kinds of things you’re doing, is that it bridges that gap, that difference, what did you say, between learning and reasoning. So, paint me a picture of what you think, just in the healthcare arena, the world of tomorrow will look like. What’s the thing that gets you excited?
I don’t actually see healthcare ever getting away from being an essentially human-to-human activity. That’s something very important. In fact, I predict that healthcare will still be largely a local activity where it’s something that you will fundamentally access from another person in your locality. There are lots of reasons for this, but there’s something so personal about healthcare that it ends up being based in relationships. I see AI in the future relieving senseless and mundane burden from the heroes in healthcare—the doctors, and nurses, and administrators, and so on—that provide that personal service.
So, for example, we’ve been experimenting with a number of healthcare organizations with our chatbot technology. That chatbot technology is able to answer—on demand, through a conversation with a patient—routine and mundane questions about some health issue that comes up. It can do a, kind of, mundane textbook triage, and then, once all that is done, make an intelligent connection to a local healthcare provider, summarize very efficiently for the healthcare provider what’s going on, and then really allow the full creative potential and attention of the healthcare provider to be put to good use.
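For readers who want to see the shape of that hand-off, here is a minimal sketch in Python. It is not Microsoft’s chatbot; the questions, thresholds, and disposition rules below are invented purely for illustration and have no clinical validity.

# Toy triage chatbot: answer routine questions, apply simple textbook-style rules,
# then hand a compact summary to a local clinician instead of a raw transcript.
# Questions, thresholds, and rules here are illustrative assumptions, not clinical guidance.

TRIAGE_QUESTIONS = [
    ("fever_days", "How many days have you had a fever?"),
    ("temp_f", "What is your temperature in Fahrenheit?"),
    ("short_of_breath", "Are you short of breath? (yes/no)"),
]

def triage(answers):
    """Very simple rules; a real system would be clinically validated."""
    if answers["short_of_breath"] == "yes" or float(answers["temp_f"]) >= 103.0:
        return "urgent"
    if int(answers["fever_days"]) >= 3:
        return "see a clinician within 24 hours"
    return "self-care advice"

def summary_for_provider(answers, disposition):
    """Condense the conversation so the clinician starts with context, not a blank page."""
    lines = [f"{key}: {value}" for key, value in answers.items()]
    lines.append(f"suggested disposition: {disposition}")
    return "\n".join(lines)

if __name__ == "__main__":
    # In a real deployment these would come from the chat conversation itself.
    answers = {"fever_days": "4", "temp_f": "101.2", "short_of_breath": "no"}
    print(summary_for_provider(answers, triage(answers)))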
Another thing that we’ll be showing off to the world at a major radiology conference next week is the use of computer vision and machine learning to learn the habits and tricks of the trade of radiologists who are doing radiation therapy planning. Right now, radiation therapy planning involves, kind of, a pixel by pixel clicking on radiological images that is extremely important; it has to be done precisely, but it also has some artistry. Every good radiologist has his or her different kinds of approaches to this. So, one nice thing about machine learning-based computer vision today is that you can actually observe and learn what radiologists do, their practices, and then dramatically accelerate and relieve a lot of the mundane efforts, so that instead of two hours of work that is largely mundane with only maybe fifteen minutes of that being very creative, we can automate the noncreative aspects of this, and allow the radiologists to devote that full fifteen minutes, or even half an hour, to really thinking through the creative aspects of radiology. So, it’s more of an empowerment model rather than replacing those healthcare workers. It still relies on human intuition; it still relies on human creativity, but hopefully allows more of that intuition, and more of that creativity, to be harnessed by taking away some of the mundane, and time-consuming aspects of things.
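The contour-proposal idea can be sketched in a few lines. This toy version learns per-pixel labels from synthetic "scans" with a plain logistic regression and then proposes a mask for a new case; real treatment-planning systems use far richer models and imaging data, so everything below is an illustrative assumption.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def synthetic_scan_and_mask(size=32):
    """Toy 'scan': a bright disc on noise; the mask marks the disc (the target a radiologist would contour)."""
    yy, xx = np.mgrid[0:size, 0:size]
    cy, cx = rng.integers(10, 22, size=2)
    mask = ((yy - cy) ** 2 + (xx - cx) ** 2) < 36
    scan = mask * 1.0 + rng.normal(0, 0.3, (size, size))
    return scan, mask.astype(int)

def pixel_features(scan):
    """Per-pixel features: raw intensity plus a crude 3x3 local mean for context."""
    padded = np.pad(scan, 1, mode="edge")
    local_mean = sum(padded[i:i + scan.shape[0], j:j + scan.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0
    return np.stack([scan.ravel(), local_mean.ravel()], axis=1)

# "Past cases" a radiologist already contoured become the training data.
X_train, y_train = [], []
for _ in range(20):
    scan, mask = synthetic_scan_and_mask()
    X_train.append(pixel_features(scan))
    y_train.append(mask.ravel())
model = LogisticRegression(max_iter=1000).fit(np.vstack(X_train), np.concatenate(y_train))

# On a new case, propose a contour; the radiologist reviews and edits it instead of clicking pixel by pixel.
new_scan, reference_mask = synthetic_scan_and_mask()
proposal = model.predict(pixel_features(new_scan)).reshape(new_scan.shape)
agreement = (proposal == reference_mask).mean()
print(f"proposed mask agrees with reference on {agreement:.0%} of pixels")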
These are approaches that I view as very human-focused, very humane ways to, not just make healthcare workers more productive, but to make them happier and more satisfied in what they do every day. Unlocking that with AI is just something that I feel is incredibly important. And it’s not just us here at Microsoft that are thinking this way, I’m seeing some really enlightened work going on, especially with some of our academic collaborators in this way. I find it truly inspiring to see what might be possible. Basically, I’m pushing back on the idea that we’ll be able to replace doctors, replace nurses. I don’t think that’s the world that we want, and I don’t even know that that’s the right idea. I don’t think that that necessarily leads to better healthcare.
To be clear, I’m talking about the great, immense parts of the world where there aren’t enough doctors for people, where there is this vast shortage of medical professionals. To somehow fill that gap, surely the technology can do that?
Yes. I think access is great. Even with some of the health chatbot pilot deployments that we’ve been experimenting with right now, you can just see that potential. If people are living in parts of the world where they have access issues, it’s an amazing and empowering thing to be able to just send a message to a chatbot that’s always available and ready to listen, and answer questions. Those sorts of things, for sure, can make a big difference. At the same time, the real payoff is when technologies like that then enable healthcare workers—really great doctors, really great clinicians—to clear enough off their plate that their creative potential becomes available to more people; and so, you win on both ends. You win on instant access through automation, but you also have the potential to win by expanding and enhancing the throughput and the number of patients that the clinics and clinicians can deal with. It’s a win-win situation in that respect.
Well said and I agree. It sounds like overall you are bullish on the future, you’re optimistic about the future and you think this technology overall is a force for great good, or am I just projecting that on to you? 
I’d say we think a lot about this. I would say, in my own career, I’ve had to confront both the good and bad outcomes, both the positive and unintended consequences of technology. I remember when I was back at DARPA—I arrived at DARPA in 2009—and in the summer of 2009, there was an election in Iran where the people in Iran felt that the results were not valid. This sparked what has been called the Iranian Twitter revolution. And what was interesting about the Iranian Twitter revolution is that people were using social media, Friendster and Twitter, in order to protest the results of this election and to organize protests.
This came to my attention at DARPA, through the State Department, because it became apparent that US-developed technologies to detect cyber intrusions and to help protect corporate networks were being used by the Iranian regime to hunt down and prosecute people who were using social media to organize these protests. The US took very quick steps to stop the sale of these technologies. But the thing that’s important is that these technologies, I’m pretty sure, were developed with only the best of intentions in mind—to help make computer networks safer. So, the idea that these technologies could be used to suppress free speech and freedom of assembly was, I’m sure never contemplated.
This really, kind of, highlights the double-edged nature of technology. So, for sure, we try to bring that thoughtfulness into every single research project we have across Microsoft Research, and that motivates our participation in things like the partnership on AI that involves a large number of industry and academic players, because we always want to have the technology, industry, and the research world be more and more thoughtful and enlightened on these ideas. So, yes, we’re optimistic. I’m optimistic certainly about the future, but that optimism, I think, is founded on a good dose of reality that if we don’t actually take proactive steps to be enlightened, on both the good and bad possibilities, good and bad outcomes, then the good things don’t just happen on their own automatically. So, it’s something that we work at, I guess, is the bottom line for what I’m trying to say. It’s earned optimism.
I like that. “Earned optimism,” I like that. It looks like we are out of time. I want to thank you for an hour of fascinating conversation about all of these topics. 
It was really fascinating, and you’ve asked some of the hardest questions of the day. It was a challenge, and tons of fun to noodle on them with you.
Like, “What is bigger, the sun or a nickel?” Turns out that’s a very hard question.
I’m going to ask Xiaoice that question and I’ll let you know what she says.
All right. Thank you again.
Thank you.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS
Voices in AI – Episode 24: A Conversation with Deep Varma
Today’s leading minds talk AI with host Byron Reese
In this episode, Byron and Deep talk about the nervous system, AGI, the Turing Test, Watson, Alexa, security, and privacy.
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Deep Varma, he is the VP of Data Engineering and Science over at Trulia. He holds a Bachelor of Science in Computer Science. He has a Master’s degree in Management Information Systems, and he even has an MBA from Berkeley to top all of that off. Welcome to the show, Deep.
Deep Varma: Thank you. Thanks, Byron, for having me here.
I’d like to start with my Rorschach test question, which is, what is artificial intelligence?
Awesome. Yeah, so as I define artificial intelligence, this is an intelligence created by machines based on human wisdom, to augment a human’s lifestyle and help them make the smarter choices. So that’s how I define artificial intelligence, in very simple, layman’s terms.
But you just kind of used the word, “smart” and “intelligent” in the definition. What actually is intelligence?
Yeah, I think the intelligence part, what we need to understand is, when you think about human beings, most of the time, they are making decisions, they are making choices. And AI, artificially, is helping us to make smarter choices and decisions.
A very clear-cut example, which sometimes what we don’t see, is, I still remember in the old days I used to have this conventional thermostat at my home, which turns on and off manually. Then, suddenly, here comes artificial intelligence, which gave us Nest. Now as soon as I put the Nest there, it’s an intelligence. It is sensing that someone is there in the home, or not, so there’s motion sensing. Then it is seeing what kind of temperature do I like during summer time, during winter time. And so, artificially, the software, which is the brain that we have put on this device, is doing this intelligence, and saying, “great, this is what I’m going to do.” So, in one way it augmented my lifestyle—rather than me making those decisions, it is helping me make the smart choices. So, that’s what I meant by this intelligence piece here.
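A toy version of that thermostat logic might look like the following. The temperatures are made up, the "learning" is just averaging past manual choices, and an actual Nest does considerably more; this is only a sketch of the sensing-plus-learned-preference idea.

from statistics import mean

# Temperatures (F) the resident actually chose in past seasons; a real device logs these itself.
history = {"winter": [69, 70, 68, 70], "summer": [74, 75, 76, 75]}

def learned_setpoint(season):
    """'Learning' here is simply averaging the manual choices made in that season."""
    return mean(history[season])

def decide(season, occupied, current_temp_f):
    """Hold the learned comfort temperature when someone is home; save energy otherwise."""
    if not occupied:
        return "eco mode (setback)"
    target = learned_setpoint(season)
    if current_temp_f < target - 1:
        return f"heat toward {target:.0f}F"
    if current_temp_f > target + 1:
        return f"cool toward {target:.0f}F"
    return "hold"

print(decide("winter", occupied=True, current_temp_f=66))   # heat toward 69F
print(decide("summer", occupied=False, current_temp_f=80))  # eco mode (setback)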
Well, let me take a different tack, in what sense is it artificial? Is that Nest thermostat, is it actually intelligent, or is it just mimicking intelligence, or are those the same thing?
What we are doing is, we are putting some sensors there on those devices—think about the central nervous system, what human beings have, it is a small piece of a software which is embedded within that device, which is making decisions for you—so it is trying to mimic, it is trying to make some predictions based on some of the data it is collecting. So, in one way, if you step back, that’s what human beings are doing on a day-to-day basis. There is a piece of it where you can go with a hybrid approach. It is mimicking as well as trying to learn, also.
Do you think we learn a lot about artificial intelligence by studying how humans learn things? Is that the first step when you want to do computer vision or translation, do you start by saying, “Ok, how do I do it?” Or, do you start by saying, “Forget how a human does it, what would be the way a machine would do it?”
Yes, I think it is very tough to compare the two entities, because when it comes to the way human brains, or the central nervous system, process data, and the speed at which they process it, machines are still not there at the same pace. So, I think the difference here is, when I grew up my parents started telling me, “Hey, this is the Taj Mahal. The sky is blue,” and I started taking this data, and I started inferring and then I started passing this information to others.
It’s the same way with machines, the only difference here is that we are feeding information to machines. We are saying, “Computer vision: here is a photograph of a cat, here is a photograph of a cat, too,” and we keep on feeding this information—the same way we are feeding information to our brains—so the machines get trained. Then, over a period of time, when we show another image of a cat, we don’t need to say, “This is a cat, Machine.” The machine will say, “Oh, I found out that this is a cat.”
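That feeding-and-training loop can be boiled down to a few lines. In this sketch the "photographs" are just two-number feature vectors and the learner is a nearest-centroid rule; real computer vision trains deep networks on pixels, so the numbers here are purely illustrative.

import numpy as np

rng = np.random.default_rng(1)

# Pretend feature vectors extracted from photos, labeled by a person:
# [ear pointiness, whisker length], fifty examples of each class.
cats = rng.normal([0.9, 0.8], 0.05, size=(50, 2))   # "here is a photograph of a cat", repeated
dogs = rng.normal([0.3, 0.2], 0.05, size=(50, 2))

# Training = summarizing what was fed in (here, one centroid per label).
centroids = {"cat": cats.mean(axis=0), "dog": dogs.mean(axis=0)}

def predict(features):
    """After training, nobody has to say 'this is a cat'; the machine decides from what it was fed."""
    return min(centroids, key=lambda label: np.linalg.norm(features - centroids[label]))

print(predict(np.array([0.85, 0.75])))  # expected: cat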
So, I think this is the difference between a machine and a human being, where, in the case of machine, we are feeding the information to them, in one form or another, using devices; but in the case of human beings, you have conscious learning, you have the physical aspects around you that affect how you’re learning. So that’s, I think, where we are with artificial intelligence, which is still in the infancy stage.
Humans are really good at transfer learning, right, like I can show you a picture of a miniature version of the Statue of Liberty, and then I can show you a bunch of photos and you can tell when it’s upside down, or half in water, or obscured by light and all that. We do that really well. 
How close are we to being able to feed computers a bunch of photos of cats, and the computer nails the cat thing, but then we only feed it three or four images of mice, and it takes all that stuff it knows about different cats, and it is able to figure out all about different mice?
So, is your question, do we think these machines are going to be at the same level as human beings at doing this?
No, I guess the question is, if we have to teach, “Here’s a cat, here’s a thimble, here’s ten thousand thimbles, here’s a pin cushion, here’s ten thousand more pin cushions…” If we have to do one thing at a time, we’re never going to get there. What we’ve got to do is, like, learn how to abstract up a level, and say, “Here’s a manatee,” and it should be able to spot a manatee in any situation.
Yeah, and I think this is where we start moving into the general intelligence area. This is where it is becoming a little interesting and challenging, because human beings fall under more of the general intelligence, and machines are still falling under the artificial intelligence framework.
And the example you were giving, I have two boys, and when my boys were young, I’d tell them, “Hey, this is milk,” and I’d show them milk two times and they knew, “Awesome, this is milk.” And here come the machines, and you keep feeding them the big data with the hope that they will learn and they will say, “This is basically a picture of a mouse or this is a picture of a cat.”
This is where, I think, this artificial general intelligence which is shaping up—that we are going to abstract a level up, and start conditioning—but I feel we haven’t cracked the code for one level down yet. So, I think it’s going to take us time to get to the next level, I believe, at this time.
Believe me, I understand that. It’s funny, when you chat with people who spend their days working on these problems, they’re worried about, “How am I going to solve this problem I have tomorrow?” They’re not as concerned about that. That being said, everybody kind of likes to think about an AGI. 
AI is, what, six decades old and we’ve been making progress, do you believe that that is something that is going to evolve into an AGI? Like, we’re on that path already, and we’re just one percent of the way there? Or is an AGI something completely different? It’s not just a better narrow AI, it’s not just a bunch of narrow AIs bolted together, it’s a completely different thing. What do you say?
Yes, so what I will say, it is like in the software development of computer systems—we call this as an object, and then we do inheritance of a couple of objects, and the encapsulation of the objects. When you think about what is happening in artificial intelligence, there are companies, like Trulia, who are investing in building the computer vision for real estate. There are companies investing in building the computer vision for cars, and all those things. We are in this state where all these dysfunctional, disassociated investments in our system are happening, and there are pieces that are going to come out of that which will go towards AGI.
Where I tend to disagree, I believe AI is complementing us and AGI is replicating us. And this is where I tend to believe that the day the AGI comes—that means it’s a singularity where they are reaching the wisdom or the processing power of human beings—that, to me, seems like doomsday, right? Because those machines are going to be smarter than us, and they will control us.
And the reason I believe that, and there is a scientific reason for my belief, is because we know that in the central nervous system the core unit is the neuron, and we know neurons carry two signals—chemical and electrical. Machines can carry the electrical signals, but the chemical signals are the ones which generate these sensory signals—you touch something, you feel it. And this is where I tend to believe that AGI is not going to happen; I’m close to confident. Thinking machines are going to come—IBM Watson, as an example—so that’s how I’m differentiating it at this time.
So, to be clear, you said you don’t believe we’ll ever make an AGI?
I will be the one on the extreme end, but I will say yes.
That’s fascinating. Why is that? The normal argument is a reductionist argument. It says, you are some number of trillions of cells that come together, and there’s an emergent “you” that comes out of that. And, hypothetically, if we made a synthetic copy of every one of those cells, and connected them, and did all that, there would be another Deep Varma. So where do you think the flaw in that logic is?
I think the flaw in that logic is that the general intelligence that humans have is also driven by the emotional side, and the emotional side—basically, I call it a chemical soup—is, I feel, the part of the DNA which is not going to be possible to replicate in these machines. These machines will learn by themselves—we recently saw what happened with Facebook, where Facebook machines were talking to each other and they start inventing their own language, over a period of time—but I believe the chemical mix of humans is what is next to impossible to produce it.
I mean—and I don’t want to take a stand, because we have seen proven, over the decades, that what people used to believe in the seventies has been proven to be right—I think the day we are able to find the chemical soup, it means we have found the Nirvana; and we have found out how human beings have been born and how they have been built over a period of time, and it took us, we all know, millions and millions of years to come to this stage. So that’s the part which is putting me on the other extreme end, to say, “Is there really going to be another Deep Varma?” And if yes, then where is this emotional aspect, where are those things that are going to fit into the bigger picture which drives human beings onto the next level?
Well, I mean there’s a hundred questions rushing for the door right now. I’ll start with the first one. What do you think is the limit of what we’ll be able to do without the chemical part? So, for instance, let me ask a straightforward question—will we be able to build a machine that passes the Turing test?
Can we build that machine? I think, potentially, yes, we can.
So, you can carry on a conversation with it, and not be able to figure out that it’s a machine? So, in that case, it’s artificial intelligence in the sense that it really is artificial. It’s just running a program, saying some words, it’s running a program, saying some words, but there’s nobody home.
Yes, we have IBM Watson, which can go a level up as compared to Alexa. I think we will build machines which, behind the scenes, are trying to understand your intent and trying to have those conversations—like Alexa and Siri. And I believe they are going to eventually start becoming more like your virtual assistants, helping you make decisions, and complementing you to make your lifestyle better. I think that’s definitely the direction we’re going to keep seeing investments going on.
I read a paper of yours where you made a passing reference to Westworld.
Right.
Putting aside the last several episodes, and what happened in them—I won’t give any spoilers—take just the first episode, do you think that we will be able to build machines that can interact with people like that?
I think, yes, we will.
But they won’t be truly creative and intelligent like we are?
That’s true.
Alright, fascinating. 
So, there seem to be these two very different camps about artificial intelligence. You have Elon Musk who says it’s an existential threat, you have Bill Gates who’s worried about it, you have Stephen Hawking who’s worried about it, and then there’s this other group of people that think that’s distracting. 
I saw that Elon Musk spoke at the governor’s convention and said something and then Pedro Domingos, who wrote The Master Algorithm, retweeted that article, and his whole tweet was, “One word: sigh.” So, there’s this whole other group of people that think that’s just really distracting, really not going to happen, and they’re really put off by that kind of talk. 
Why do you think there’s such a gap between those two groups of people?
The gap is that there is one camp who is very curious, and they believe that millions of years of how human beings evolved can immediately be taken by AGI, and the other camp is more concerned with controlling that, asking are those machines going to become smarter than us, are they going to control us, are we going to become their slaves?
And I think those two camps are the extremes. There is a fear of losing control, because humans—if you look into the food chain, human beings are the only ones in the food chain, as of now, who control everything—fear that if those machines get to our level of wisdom, or smarter than us, we are going to lose control. And that’s where I think those two camps are basically coming to the extreme ends and taking their stands.
Let’s switch gears a little bit. Aside from the robot uprising, there’s a lot of fear wrapped up in the kind of AI we already know how to build, and it’s related to automation. Just to set up the question for the listener, there’s generally three camps. One camp says we’re going to have all this narrow AI, and it’s going to put a bunch of people out of work, people with less skills, and they’re not going to be able to get new work and we’re going to have, kind of, the Great Depression going on forever. Then there’s a second group that says, no, no, it’s worse than that, computers can do anything a person can do, we’re all going to be replaced. And then there’s a third camp that says, that’s ridiculous, every time something comes along, like steam or electricity, people just take that technology, and use it to increase their own productivity, and that’s how progress happens. So, which of those three camps, or fourth one, perhaps, do you believe?
I fall into, mostly, the last camp, which is, we are going to increase the productivity of human beings; it means we will be able to deliver more and faster. A few months back, I was in Berkeley and we were having discussions around this same topic, about automation and how jobs are going to go away. The Obama administration even published a paper around this topic. One example which always comes to my mind is, last year I did a remodel of my house. And when I did the remodeling there were electrical wires, there were these water pipelines going inside my house and we had to replace them with copper pipelines, and I was thinking, can machines replace those jobs? I keep coming back to the answer that those skill-level jobs are going to be tougher and tougher to replace, but there are going to be productivity gains. Machines can help to cut those pipeline pieces much faster and in a much more accurate way. They can measure how much wire you’ll need to replace those things. So, I think those things are going to help us to make the smarter choices. I continue to believe it is going to be mostly the third camp, where machines will keep complementing us, helping to improve our lifestyles and to improve our productivity to make the smarter choices.
So, you would say that there are, in most jobs, there are elements that automation cannot replace, but it can augment, like a plumber, or so forth. What would you say to somebody who’s worried that they’re going to be unemployable in the future? What would you advise them to do?
Yeah, and the example I gave is a physical job, but think about the example of a business consultant, right? Companies hire business consultants to come, collect all the data, then prepare PowerPoints on what you should do, and what you should not do. I think those are the areas where artificial intelligence is going to come in, and if you have tons of data, then you don’t need a hundred consultants. For those people, I say go and start learning about what can be done to scale them to the next level. So, in the example I’ve just given, the business consultants, if they are doing an audit of a company’s financial books, should look into the tools that can help, so that an audit that used to take thirty days now takes ten days. Improve how fast and how accurately you can make those predictions and assumptions using machines, so that those businesses can move on. So, I would tell them to start looking into, and partnering in, those areas early on, so that you are not caught by surprise when one day some industry comes and disrupts you, and you say, “Ouch, I never thought about it, and my job is no longer there.”
It sounds like you’re saying, figure out how to use more technology? That’s your best defense against it, is you just start using it to increase your own productivity.
Yeah.
Yeah, it’s interesting, because machine translation is getting comparable to a human, and yet generally people are bullish that we’re going to need more translators, because this is going to cause people to want to do more deals, and then they’re going to need to have contracts negotiated, and know about customs in other countries and all of that, so that actually being a translator you get more business out of this, not less, so do you think things like that are kind of the road map forward?
Yeah, that’s true.
So, what are some challenges with the technology? In Europe, there’s a movement—I think it’s already adopted in some places, but the EU is considering it—this idea that if an AI makes a decision about you, like do you get the loan, that you have the right to know why it made it. In other words, no black boxes. You have to have transparency and say it was made for this reason. Do you think a) that’s possible, and b) do you think it’s a good policy?
Yes, I definitely believe it’s possible, and it’s a good policy, because this is what consumers want to know, right? In our real estate industry, if I’m trying to refinance my home, the appraiser is going to come, he will look into it, he will sit with me, then he will tell me, “Deep, your house is worth $1.5 million.” He will provide me the data that he used to come to that decision—he used the neighborhood information, he used the recent sold data.
And that, at the end of the day, gives confidence back to the consumer, and it also shows that this is not because this appraiser who came to my home didn’t like me for XYZ reason and ended up giving me something wrong; so, I completely agree that we need to be transparent. We need to share why a decision has been made, and at the same time we should allow people to come and understand it better, and make those decisions better. So, I think those guidelines need to be put into place, because humans tend to be much more biased in their decision-making process, and the machines take the bias out, and bring more unbiased decision making.
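One simple way to get the kind of transparency described here is a model whose estimate is an explicit sum of per-feature contributions, so the "why" can be handed back along with the number. The features and coefficients below are invented for illustration and are not Trulia's model.

# Explainable estimate: price = base + sum(feature * coefficient),
# so every dollar of the estimate is attributable to a named factor.
BASE_PRICE = 200_000          # illustrative intercept
COEFFICIENTS = {              # illustrative dollars-per-unit weights
    "square_feet": 450,
    "bedrooms": 25_000,
    "neighborhood_median_sold": 0.35,
}

def explain_estimate(home):
    contributions = {name: home[name] * coef for name, coef in COEFFICIENTS.items()}
    estimate = BASE_PRICE + sum(contributions.values())
    return estimate, contributions

home = {"square_feet": 1800, "bedrooms": 3, "neighborhood_median_sold": 1_200_000}
estimate, contributions = explain_estimate(home)
print(f"estimate: ${estimate:,.0f}")
for name, dollars in contributions.items():
    print(f"  {name}: ${dollars:,.0f}")

With a linear model like this the explanation is exact; for more complex models you would need separate attribution techniques, which is where the "black box" debate discussed next gets harder.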
Right, I guess the other side of that coin, though, is that you take a world of information about who defaulted on their loan, and then you take you every bit of information about, who paid their loan off, and you just pour it all in into some gigantic database, and then you mine it and you try to figure out, “How could I have spotted these people who didn’t pay their loan?” And then you come up with some conclusion that may or may not make any sense to a human, right? Isn’t that the case that it’s weighing hundreds of factors with various weights and, how do you tease out, “Oh it was this”? Life isn’t quite that simple, is it?
No, it is not, and demystifying this whole black box has never been simple. Trust us, we face those challenges in the real estate industry on a day-to-day basis—we have Trulia’s estimates—and it’s not easy. At the end, we just can’t rely totally on those algorithms to make the decisions for us.
I will give one simple example of how this can go wrong. When we were training our computer vision system, you know, what we were doing was saying, “This is a window, this is a window.” Then the day came when we said, “Wow, our computer vision can look at any image, and know this is a window.” And one fine day we got an image where there is a mirror, and there is a reflection of a window in the mirror, and our computer said, “Oh, Deep, this is a window.” So, this is where big data and small data come into play, where small data can make all these predictions go completely wrong.
This is where—when you’re talking about all this data we are taking in to see who’s on default and who’s not on default—I think we need to abstract, and we need to at least make sure that with this aggregated data, this computational data, we know what the reference points are for them, what the references are that we’re checking, and make sure that we have the right checks and balances so that machines are not ultimately making all the calls for us.
You’re a positive guy. You’re like, “We’re not going to build an AGI, it’s not going to take over the world, people are going to be able to use narrow AI to grow their productivity, we’re not going to have unemployment.” So, what are some of the pitfalls, challenges, or potential problems with the technology?
I agree with you, it’s being positive. Realistically, looking into the data—and I’m not saying that I have the best data in front of me—I think what is the most important is we need to look into history, and we need to see how we evolved, and then the Internet came and what happened.
The challenge for us is going to be that there are businesses and groups who believe that artificial intelligence is something they don’t have to worry about. Over a period of time artificial intelligence is going to become more and more a part of business, and those who are not able to catch up with this are going to see the unemployment rate increase. They’re going to see company losses increase, because they’re not making some of their decisions in the right way.
You’re going to see companies, like Lehman Brothers, who are making all these data decisions for their clients by not using machines but relying on humans, and these big companies fail because of them. So, I think, that’s an area where we are going to see problems, and bankruptcies, and unemployment increases, because of they think that artificial intelligence is not for them or their business, that it’s never going to impact them—this is where I think we are going to get the most trouble.
The second area of trouble is going to be security and privacy, because all this data is now floating around us. We use the Internet. I use my credit card. Every month we hear about a new hack—Target being hacked, Citibank being hacked—all this data physically stored in the system is getting hacked. And now we’ll have all this data wirelessly transmitting, machines talking to each other’s devices, IoT devices talking to each other—how are we going to make sure that there is not a security threat? How are we going to make sure that no one is storing my data, and trying to make assumptions, and enter into my bank account? Those are the two areas where I feel we are going to see, in coming years, more and more challenges.
So, you said privacy and security are the two areas?
Denial of accepting AI is the one, and security and privacy is the second one—those are the two areas.
So, in the first one, are there any industries that don’t need to worry about it, or are you saying, “No, if you make bubble-gum you had better start using AI?”
I will say every industry. I think every industry needs to worry about it. Some industries may adapt the technologies faster, some may go slower, but I’m pretty confident that the shift is going to happen so fast that, those businesses will be blindsided—be it small businesses or mom and pop shops or big corporations, it’s going to touch everything.
Well with regard to security, if the threat is artificial intelligence, I guess it stands to reason that the remedy is AI as well, is that true?
The remedy is there, yes. We are seeing so many companies coming and saying, “Hey, we can help you see the DNS attacks. When you have hackers trying to attack your site, use our technology to predict that this IP address or this user agent is wrong.” And we see that to tackle the remedy, we are building an artificial intelligence.
But, this is where I think the battle between big data and small data is colliding, and companies are still struggling. Like, phishing, which is a big problem. There are so many companies who are trying to solve the phishing problem of the emails, but we have seen technologies not able to solve it. So, I think AI is a remedy, but if we stay just focused on the big data, that’s, I think, completely wrong, because my fear is, a small data set can completely destroy the predictions built by a big data set, and this is where those security threats can bring more of an issue to us.
Explain that last bit again, the small data set can destroy…?
So, I gave the example of computer vision, right? There was research we did in Berkeley where we trained machines to look at pictures of cats, and then suddenly we saw the computer start predicting, “Oh, this is this kind of a cat, this is cat one, cat two, this is a cat with white fur.” Then we took just one image where we put the overlay of a dog on the body of a cat, and the machines ended up predicting, “That’s a dog,” not seeing that it’s the body of a cat. So, all the big data that we used to train our computer vision just collapsed with one photo of a dog. And this is where I feel that if we are emphasizing so much on using the big data set, big data set, big data set, are there smaller data sets which we also need to worry about, to make sure that we are bridging the gap enough and that our security is not compromised?
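The point about one odd input undoing a model trained on lots of clean data shows up even in a toy setting: the classifier below is trained on thousands of well-behaved examples, yet it still answers with near-certainty on an input unlike anything it has seen, rather than flagging its own uncertainty. The data is synthetic and the setup is deliberately simplified.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# "Big data": thousands of ordinary cat and dog feature vectors.
cats = rng.normal([1.0, 1.0], 0.1, size=(5000, 2))
dogs = rng.normal([-1.0, -1.0], 0.1, size=(5000, 2))
X = np.vstack([cats, dogs])
y = np.array([0] * 5000 + [1] * 5000)  # 0 = cat, 1 = dog
model = LogisticRegression().fit(X, y)

# One weird composite input, far outside anything seen in training
# (the "dog overlaid on a cat" situation).
weird = np.array([[8.0, -6.0]])
probs = model.predict_proba(weird)[0]
print(f"cat: {probs[0]:.3f}, dog: {probs[1]:.3f}")  # near-certain output despite the input being unlike the training data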
Do you think that the system as a whole is brittle? Like, could there be an attack of such magnitude that it impacts the whole digital ecosystem, or are you worried more about, this company gets hacked and then that one gets hacked and they’re nuisances, but at least we can survive them?
No, I’m more worried about the holistic view. We saw recently, how those attacks on the UK hospital systems happened. We saw some attacks—which we are not talking about—on our power stations. I’m more concerned about those. Is there going to be a day when we have built massive infrastructures that are reliant on computers—our generation of power and the supply of power and telecommunications—and suddenly there is a whole outage which can take the world to a standstill, because there is a small hole which we never thought about. That, to me, is the bigger threat than the stand alone individual things which are happening now.
That’s a hard problem to solve, there’s a small hole on the internet that we’ve not thought about that can bring the whole thing down, that would be a tricky thing to find, wouldn’t it?
It is a tricky thing, and I think that’s what I’m trying to say, that most of the time we fail because of those smaller things. If I go back, Byron, and bring the artificial general intelligence back into a picture, as human beings it’s those small, small decisions we make—like, I make a fast decision when an animal is approaching very close to me, so close that my senses and my emotions are telling me I’m going to die—and this is where I think sometimes we tend to ignore those small data sets.
I was in a big debate around those self-driven cars which are shaping up around us, and people were asking me when we will see those self-driven cars on a San Francisco street. And I said, “I see people doing crazy jaywalking every day,” and accidents are happening with human drivers, no doubt, but the scale can increase so fast if those machines fail. If they have one simple sensor which is not working at that moment in time and not able to get one signal, it can kill human beings much faster as compared to what human beings are killing, so that’s the rationale which I’m trying to put here.
So, one of my questions that I was going to ask you, is, do you think AI is a mania? Like it’s everywhere but it seems like, you’re a person who says every industry needs to adopt it, so if anything, you would say that we need more focus on it, not less, is that true?
That’s true.
There was a man in the ‘60s named Weizenbaum who made a program called ELIZA, which was a simple program where you would say something like, “I’m having a bad day,” and then it would say, “Why are you having a bad day?” And then you would say, “I’m having a bad day because I had a fight with my spouse,” and then it would ask, “Why did you have a fight?” And so, it’s really simple, but Weizenbaum got really concerned because he saw people pouring out their hearts to it, even though they knew it was a program. It really disturbed him that people developed an emotional attachment to ELIZA, and he said that when a computer says, “I understand,” it’s a lie, that there’s no “I,” there’s nothing that understands anything.
Do you worry that if we build machines that can imitate human emotions, maybe the care for people or whatever, that we will end up having an emotional attachment to them, or that that is in some way unhealthy?
You know, Byron, it’s a very great question, and I think you also picked a great example. So, I have Alexa at my home, right, and I have two boys, and when we are in the kitchen—because Alexa is in our kitchen—my older son comes home and says, “Alexa, what’s the temperature look like today?” Alexa says, “Temperature is this,” and then he says, “Okay, shut up,” to Alexa. My wife is standing there saying, “Hey, don’t be rude, just say, ‘Alexa, stop.’” You see that connection? The connection is you’ve already started treating this machine as a device that deserves respect, right?
I think, yes, there is that emotional connection there, and that’s getting you used to seeing it as part of your life in an emotional connection. So, I think, yes, you’re right, that’s a danger.
But, more than Alexa and all those devices, I’m more concerned about the social media sites, which can have much more impact on our society than those devices. Because those devices are still physical in shape, and we know that if the Internet is down, then they’re not talking and all those things. I’m more concerned about these virtual things where people are getting more emotionally attached, “Oh, let me go and check what my friends been doing today, what movie they watched,” and how they’re trying to fill that emotional gap, but not meeting individuals, just seeing the photos to make them happy. But, yes, just to answer your question, I’m concerned about that emotional connection with the devices.
You know, it’s interesting, I know somebody who lives on a farm and he has young children, and, of course, he’s raising animals to slaughter, and he says the rule is you just never name them, because if you name them then that’s it, they become a pet. And, of course, Amazon chose to name Alexa, and give it a human voice; and that had to be a deliberate decision. And you just wonder, kind of, what all went into it. Interestingly, Google did not name theirs, it’s just the Google Assistant. 
How do you think that’s going to shake out? Are we just provincial, and the next generation isn’t going to think anything of it? What do you think will happen?
So, is your question what’s going to happen with all those devices and with all those AI’s and all those things?
Yes, yes.
As of now, those devices are all just operating in their own silo. There are too many silos happening. Like in my home, I have Alexa, I have a Nest, those plug-ins. I love, you know, where Alexa is talking to Nest, “Hey Nest, turn it off, turn it on.” I think what we are going to see over the next five years is that those devices are communicating with each other more, and sending signals, like, “Hey, I just saw that Deep left home, and the garage door is open, close the garage door.”
IoT is popping up pretty fast, and I think people are thinking about it, but they’re not so much worried about that connectivity yet. But I feel that where we are heading is more connectivity between those devices, which will, again, complement us and help us make the smart choices, and our reliance on those assistants is going to increase.
Another example here, I get up in the morning and the first thing I do is come to the kitchen and say Alexa, “Put on the music, Alexa, put on the music, Alexa, and what’s the weather going to look like?” With the reply, “Oh, Deep, San Francisco is going to be 75,” then Deep knows Deep is going to wear a t-shirt today. Here comes my coffee machine, my coffee machine has already learned that I want eight ounces of coffee, so it just makes it.
I think all those connections, “Oh, Deep just woke up, it is six in the morning, Deep is going to go to office because it’s a working day, Deep just came to kitchen, play this music, tell Deep that the temperature is this, make coffee for Deep,” this is where we are heading in next few years. All these movies that we used to watch where people were sitting there, and watching everything happen in the real time, that’s what I think the next five years is going to look like for us.
So, talk to me about Trulia, how do you deploy AI at your company? Both customer facing and internally?
That’s such an awesome question, because I’m so excited and passionate because this brings me home. So, I think in artificial intelligence, as you said, there are two aspects to it, one is for a consumer and one is internal, and I think for us AI helps us to better understand what our consumers are looking for in a home. How can we help move them faster in their search—that’s the consumer facing tagline. And an example is, “Byron is looking at two bedroom, two bath houses in a quiet neighborhood, in good school district,” and basically using artificial intelligence, we can surface things in much faster ways so that you don’t have to spend five hours surfing. That’s more consumer facing.
Now when it comes to the internal facing, internal facing is what I call “data-driven decision making.” We launch a product, right? How do we see the usage of our product? How do we predict whether this usage is going to scale? Are consumers going to like this? Should we invest more in this product feature? That’s the internal facing we are using artificial intelligence.
I don’t know if you have read some of my blogs, but I call it data-driven companies—there are two aspects of the data driven, one is the data-driven decision making, this is more of an analyst, and that’s the internal reference to your point, and the external is to the consumer-facing data-driven product company, which focuses on how do we understand the unique criteria and unique intent of you as a buyer—and that’s how we use artificial intelligence in the spectrum of Trulia.
When you say, “Let’s try to solve this problem with data,” is it speculative, like do you swing for the fences and miss a lot? Or, do you look for easy incremental wins? Or, are you doing anything that would look like pure science, like, “Let’s just experiment and see what happens with this”? Is the science so nascent that you, kind of, just have to get in there and start poking around and see what you can do?
I think it’s both. The science helps you understand those patterns much faster and better and in a much more accurate way, that’s how science helps you. And then, basically, there’s trial and error, or what we call an, “A/B testing” framework, which helps you to validate whether what science is telling you is working or not. I’m happy to share an example with you here if you want.
Yeah, absolutely.
So, the example here is, we have invested in our computer vision which is, we train our machines and our machines basically say, “Hey, this is a photo of a bathroom, this is a photo of a kitchen,” and we even have trained that they can say, “This is a kitchen with a wide granite counter-top.” Now we have built this massive database. When a consumer comes to the Trulia site, what they do is share their intent, they say, “I want two bedrooms in Noe Valley,” and the first thing that they do when those listings show up is click on the images, because they want to see what that house looks like.
What we saw was that there were times when those images were blurred, there were times when those images did not match up with the intent of a consumer. So, what we did with our computer vision, we invested in something called “the most attractive image,” which basically takes the three attributes—it looks into the quality of an image, it looks into the appropriateness of an image, and it looks into the relevancy of an image—and based on these three things we use our convolutional neural network models to rank the images and we say, “Great, this is the best image.” So now when a consumer comes and looks at that listing we show the most attractive photo first. And that way, the consumer gets more engaged with that listing. And what we have seen—using the science, which is machine learning, deep learning, CNN models, and doing the A/B testing—is that this project increased our enquiries for the listing by double digits, so that’s one of the examples which I just want to share with you.
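A toy sketch of the two pieces described here, blending quality, appropriateness, and relevancy into a ranking and then checking the A/B result for significance, might look like this. The weights, scores, and traffic counts are invented; Trulia's production models are convolutional networks trained on real listing photos, not hand-set weights.

from math import erf, sqrt

# Step 1: rank a listing's photos by a weighted blend of the three attributes mentioned.
# Scores and weights here are invented for illustration.
WEIGHTS = {"quality": 0.4, "appropriateness": 0.3, "relevancy": 0.3}
photos = {
    "blurry_kitchen.jpg": {"quality": 0.2, "appropriateness": 0.9, "relevancy": 0.8},
    "sharp_exterior.jpg": {"quality": 0.9, "appropriateness": 0.9, "relevancy": 0.6},
    "agent_headshot.jpg": {"quality": 0.8, "appropriateness": 0.1, "relevancy": 0.1},
}

def score(attrs):
    return sum(WEIGHTS[k] * attrs[k] for k in WEIGHTS)

ranked = sorted(photos, key=lambda name: score(photos[name]), reverse=True)
print("show first:", ranked[0])

# Step 2: A/B test, did leading with the best photo lift the inquiry rate?
# Two-proportion z-test on invented counts.
def z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)
    z = (p_b - p_a) / sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value

lift, p_value = z_test(conv_a=900, n_a=20000, conv_b=1080, n_b=20000)
print(f"absolute lift: {lift:.3%}, p-value: {p_value:.4f}")

In this made-up example the variant moves inquiries from 4.5% to 5.4% of sessions, a relative lift of about 20%, which is the sort of double-digit movement the answer describes; the z-test is just one simple way to check such a difference is not noise.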
That’s fantastic. What is your next challenge? If you could wave a magic wand, what would be the thing you would love to be able to do that, maybe, you don’t have the tools or data to do yet?
I think, what we haven’t talked about here, and I will use just a minute to tell you, is that what we have done is we’ve built this amazing personalization platform, which is capturing Byron’s unique preferences and search criteria. We have built machine learning systems like computer vision, recommender systems, and the user engagement prediction model, and I think our next challenge will be to keep optimizing on the consumer intent, right? Because the biggest thing that we want to understand is, “What exactly is Byron looking into?” So, if Byron visits a particular neighborhood because he’s travelling to Phoenix, Arizona, does that mean he wants to buy a home there, or is it just a trip, since Byron actually lives here in San Francisco? How do we understand that?
So, we need to keep optimizing that personalization platform—I won’t call it a challenge because we have already built this, but it is the optimization—and make sure that our consumers get what they’re searching for, keep surfacing the relevant data to them in a timely manner. I think we are not there yet, but we have made major inroads into our big data and machine learning technologies. One specific example, is Deep, basically, is looking into Noe Valley or San Francisco, and email and push notifications are the two channels, for us, where we know that Deep is going to consume the content. Now, the day we learn that Deep is not interested in Noe Valley, we stop sending those things to Deep that day, because we don’t want our consumers to be overwhelmed in their journey. So, I think that this is where we are going to keep optimizing on our consumer’s intent, and we’ll keep giving them the right content.
Alright, well that is fantastic, you write on these topics so, if people want to keep up with you Deep how can they follow you?
So, when you said “people” it’s other businesses and all those things, right? That’s what you mean?
Well I was just referring to your blog like I was reading some of your posts.
Yeah, so we have our tech blog, http://ift.tt/2AM5zMS, and it’s not only me; I have an amazing team of engineers—those who are way smarter than me to be very candid—my data scientist team, and all those things. So, we write our blogs there, so I definitely ask people to follow us on those blogs. When I go and speak at conferences, we publish that on our tech blog, and I publish things on my LinkedIn profile. So, yeah, those are the channels which people can follow. Trulia, we also host data science meetups here in Trulia, San Francisco on the seventh floor of our building, that’s another way people can come, and join, and learn from us.
Alright, well I want to thank you for a fascinating hour of conversation, Deep.
Thank you, Byron.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS