Not a day goes by where I don't think ab the silliness of meowware, because imagine fooling around in Puyo Puyo and then the puyo start not having eyes or having too many, and now all the characters are being cliché creepypasta tropes [it’s messing w you by fucking up your game]
Mass Effect: Best Star Trek References and Easter Eggs
Look, it’s not exactly a secret that Mass Effect has a little Star Trek in its DNA. It’s a franchise all about assembling a crew comprised of humans and aliens as you explore the furthest reaches of space and try your best to romance a few of those humans and aliens. It’s safe to say someone on the Mass Effect development teams watched an episode or two of Star Trek.
So while Mass Effect is, in some ways, a giant tribute to Star Trek and several other notable sci-fi works, there are a few ways that the Mass Effect games reference Star Trek that you may not have spotted unless you’re a hardcore Star Trek fan who also explored the furthest reaches of Mass Effect‘s galaxy.
From suspicious lines of dialog to familiar voices, these are some of the best Star Trek references and Easter eggs you’ll find in the Mass Effect trilogy.
The Borg and The Geth
The Geth are a race of networked AIs that utilize a “hivemind” system and have to deal with the occasional dissenter, so there are clear similarities between Mass Effect’s Geth and Star Trek’s Borg that can’t be ignored.
Having said that, some fans have pointed out that the designs and philosophies of the Geth could also be a nod to Battlestar Galactica‘s Cylons. It should also be noted that Mass Effect‘s Reapers are often treated as a mysterious galactic threat similar to how the Borg were described in early TNG episodes.
The Thorian and Star Trek 2: The Wrath of Khan’s Ceti Eels
In Mass Effect, you’ll encounter a sentient plant known as the Thorian. If this almost slug-like creature with the ability to use painful spores to control people’s minds seems oddly familiar, that’s because it’s almost certainly a reference to the Ceti Eels that Khan used to control people in one of Star Trek 2’s most memorable scenes.
In fact, there’s a memorable moment in Mass Effect when Fai Dan shoots himself after ignoring a Thorian order to kill Shepard. It’s an almost exact recreation of a Wrath of Khan scene in which Captain Terrell turns a phaser on himself rather than obey Khan and the influence of the Ceti Eels.
Cerberus and Section 31
In Star Trek: Deep Space Nine, we learn there’s a special section of Starfleet known simply as Section 31. They’re kind of a “wetworks” organization that has operated with and without Starfleet’s support over the years. Through it all, they claim to promote “security” through whatever means necessary.
The Cerberus group in Mass Effect serves a similar purpose, with the biggest difference being that Cerberus has long been a kind of “splinter” group that operates independently to protect human interests (allegedly) on a galactic scale, whereas Section 31 did seemingly operate with Starfleet’s support (at least for a time).
The Normandy’s Poker Table
While it’s a bit of a shame you don’t really get to do much with the poker table on the Normandy, the fact there’s a poker table so prominently featured on a spaceship has to be a callback to the poker table frequently used by the Enterprise crew in Star Trek: The Next Generation.
Actually, TNG‘s poker table was such an important part of the ship (at least to key members of the crew) that it was even the centerpiece of the final scene in TNG‘s last episode, “All Good Things…”
Kenneth Donnelly is (Accidentally?) a Scotty Tribute
As a spaceship engineer with a heavy Scottish accent, it’s easy to assume that Mass Effect‘s Kenneth Donnelly was designed to be an obvious homage to Star Trek‘s Montgomery “Scotty” Scott.
However, Mass Effect level designer Dusty Everman has previously stated that the similarities between those two weren’t planned from the start and really only came to life as the result of voice actor John Ullyatt’s performance choices and a bit of coincidence. Actually, Everman (or someone convincingly posing as him once upon a time) stated that Donnelly’s accent was based on his wife’s love of Ewan McGregor and that the original plan was for female Shepard players to be able to romance him.
“Yes! Exhilarating, Isn’t It?”
One of Mass Effect‘s better Star Trek references happens when Shepard warns a Krogan that the area around them is collapsing and the Krogan replies “Exhilarating, isn’t it?”
The same line is spoken by Christopher Lloyd in Star Trek 3: The Search for Spock under spiritually similar circumstances. Lloyd even portrays a Klingon in the film, and the Krogan have been called a Klingon-like race.
Various Star Trek Actors Voice Characters in the Mass Effect Franchise
If you’ve ever wondered just how much Star Trek influenced Mass Effect, look no further than Mass Effect‘s voice actor cast list.
Marina Sirtis, Armin Shimerman, Keith Szarabajka, Dwight Schultz…the Mass Effect cast is packed with actors arguably best known for their roles in various Star Trek series and films. Michael Dorn (who famously portrayed Worf in Star Trek: TNG) even voices a Krogan in Mass Effect 2.
“This is… it’s green?”
While visiting the Dark Star lounge, Mass Effect‘s Commander Shepard receives an alien drink and remarks “This is… it’s green?” The line is a clear callback to a Star Trek: TOS episode called “By Any Other Name” in which Scotty picks up a strange bottle and makes the same comment.
In fact, Data says a similar line in the TNG episode “Relics” while pouring a mysterious green drink for…Scotty.
Mordin Solus and Data Have Similar Taste in Music
Mordin Solus’ love of music isn’t just one of the most lovable attributes of one of the best Mass Effect companions; it’s an apparent nod to Data, the similarly hyper-intelligent, similarly detached Star Trek: TNG character who also loves to sing.
Actually, Solus and Data seem to share an appreciation for Gilbert and Sullivan, as the two sing the duo’s greatest hits in their respective series.
“Goodbye Little Wing” and Deanna Troi
Matriarch Benezia isn’t just one of the more memorable side characters in the original Mass Effect; she’s another one of those characters in the Mass Effect franchise you may not have realized was voiced by a Star Trek alum. Yes, Benezia is played by none other than Deanna Troi actress Marina Sirtis.
Best of all, there’s a moment in the first Mass Effect when Benezia says “Goodbye little wing, I have always been proud of you” shortly before dying. It’s an odd phrase that might make a little more sense when you realize that Troi’s mother was always calling her “little one” in TNG.
“When Your World Seems Hollow, We Help You Touch the Sky”
This one has to be in the running for the honor of “most obscure” Star Trek reference in any Mass Effect game.
In Mass Effect‘s Bring Down The Sky DLC, there is a radio shack located between two fusion torches. Go inside it, and you’ll find a log filled with unused radio promo spots. The script for one of those spots reads “If you are feeling hollow, we can help you touch the sky.” What is that supposed to mean?
Well, it seems to be a nod to a Star Trek: TOS Season 3 episode called “For The World Is Hollow And I Have Touched The Sky.” In that episode, an old man living atop a mountain tells the Enterprise crew “the world is hollow and I have touched the sky.”
The Systems Alliance Logo Looks Very Familiar…
Mass Effect’s Systems Alliance is an Earth coalition responsible for representing the interests of humans in Citadel space. There are obviously many organizations in several notable sci-fi works with similar responsibilities, but there’s little doubt that the Systems Alliance is intended to evoke Star Trek’s Starfleet.
In fact, the Systems Alliance logo bears a strong resemblance to the Starfleet logo from later Star Trek series and films. It’s not exactly a 1:1 copy, but it’s impossible not to spot the similarities once you start looking for them.
“Karora is Essentially a Great Rock in Space”
You’ll find another surprisingly subtle Star Trek reference in Mass Effect 2 when you request more information on a planet named Karora. The Normandy’s computer will inform you that “Karora is essentially a great rock in space, tidally locked to Amada.”
As it just so happens, Spock describes the Regula planet that the Enterprise crew encounters in Star Trek 2 as “essentially a great rock in space.” Maybe the wording is common enough to be a coincidence, but given all the other clear Star Trek references in Mass Effect, it feels like an intentional tribute.
CSI: The case of the missing WAV audio files on the FAT32 SD Card
Buckle up kids, as this is a tale. As you may know, I have a lovely podcast at https://hanselminutes.com. You should listen.
Recently, through a number of super cool random events, I got the opportunity to interview actor Chris Conner, who plays Poe on Altered Carbon. I'm a big fan of the show but especially Chris. You should watch the show because Poe is a joy and Chris owns every scene, and that's with a VERY strong cast.
I usually do my interviews remotely for the podcast but I wanted to meet Chris and hang out in person so I used my local podcasting rig which consists of a Zoom H6 recorder.
I have two Shure XLR mics, a mic stand, and the Zoom. The Zoom H6 is a very well-thought-of workhorse and I've used it many times before when recording shows. It's not rocket surgery but one should always test their things.
I didn't want to take any chances so I picked up a 5-pack of 32 gig high-quality SD Cards. I put a new one in the Zoom, the Zoom immediately recognized the SD Card, so I did a local recording right there and played it back. Sounds good. I played it back locally on the Zoom and I could hear the recording from the Zoom's local speaker. It's recording the file in stereo, one side for each mic. Remember this for later.
I went early to the meet and set up the whole recording setup. I hooked up a local monitor and tested again. Records and plays back locally. Cool. Chris shows up, we recorded a fantastic show, he's engaged and we're now besties and we go to Chipotle, talk shop, Sci-fi, acting, AIs, etc. Just a killer afternoon all around.
I head home and pull out the SD Card and put it into the PC and I see this. I almost vomit. I get lightheaded.
I've been recording the show for over 730 episodes over 14 years and I've never lost a show. I do my homework - as should you. I'm reeling. Ok, breathe. Let's work the problem.
Right click the drive, check properties. Breathe. This is a 32 gig drive, but Windows sees that it's got 329 MB used. 300ish megs is the size of a 30 minute long two channel WAV file. I know this because I've looked at 300 meg files for the last several hundred shows. Just like you might know roughly the size of a JPEG your camera makes. It's a thing you know.
Command line time. List the root directory. Empty. Check it again but "show all files," weird, there's a Mac folder there but maybe the SD Card was preformatted on a Mac.
Interesting Plot Point - I didn't format the SD card. I use it as it came out of the packaging from Amazon. It came preformatted and I accepted it. I tested it and it worked but I didn't "install my own carpet." I moved in to the house as-is.
What about a little "show me all folders from here down" action? Same as I saw in Windows Explorer. The root folder has another subfolder which is itself. It's folder "Inception" with no Kick!
G:\>dir /a
 Volume in drive G has no label.
 Volume Serial Number is 0403-0201

 Directory of G:\

03/12/2020  12:29 PM    <DIR>
03/13/2020  12:44 PM    <DIR>          System Volume Information
               0 File(s)              0 bytes
               2 Dir(s)  30,954,225,664 bytes free

G:\>dir /s
 Volume in drive G has no label.
 Volume Serial Number is 0403-0201

 Directory of G:\

03/12/2020  12:29 PM    <DIR>
               0 File(s)              0 bytes

 Directory of G:\

03/12/2020  12:29 PM    <DIR>
               0 File(s)              0 bytes

IT GOES FOREVER
Ok, the drive thinks there's data but I can't see it. I put the SD card back in the Zoom and try to play it back.
The Zoom can see folders and files AND the interview itself. And the Zoom can play it back. The Zoom is an embedded device with an implementation of the FAT32 file system and it can read it, but Windows can't. Can Linux? Can a Mac?
Short answer. No.
Hacky Note: Since the Zoom can see and play the file and it has a headphone/monitor jack, I could always plug in an analog 1/8" headphone cable to a 1/4" input on my Peavey PV6 mixer and rescue the audio with some analog quality loss. Why don't I use the USB Audio out feature of the Zoom H6 and play the file back over a digital cable, you ask? Because the Zoom audio player doesn't support that. It supports three modes - SD Card Reader (which is a pass through to Windows and shows me the recursive directories and no files), an Audio pass-through which lets the Zoom look like an audio device to Windows but doesn't show the SD card as a drive or allow the SD Card to be played back over the digital interface, or its main mode where it's recording locally.
It's Forensics Time, Kids.
We have a 32 gig SD Card - a disk drive as it were - that is standard FAT32 formatted, that has 300-400 megs of a two-channel (Chris and I had two mics) WAV file that was recorded locally by the Zoom H6 audio recorder, and I don't want to lose it or mess it up.
I need to take a byte for byte image of what's on the SD Card so I can poke at it and "virtually" mess with it, change it, fix it, try again, without changing the physical.
"dd" is a command-line utility with a rich and storied history going back 45 years. Even though it means "Data Definition" it'll always be "disk drive" I my head.
How to clone a USB Drive or SD Card to an IMG file on Windows
I have a copy of dd for Windows which lets me get a byte for byte stream/file that represents this SD Card. For example I could get an entire USB device:
dd if=\\?\Device\Harddisk1\Partition0 of=c:\temp\usb2.img bs=1M --size --progress
I need to know the Harddisk number and Partition number as you can see above. I usually use diskpart for this.
>diskpart

Microsoft DiskPart version 10.0.19041.1
Copyright (C) Microsoft Corporation.
On computer: IRONHEART

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online          476 GB      0 B        *
  Disk 1    Online         1863 GB      0 B        *
  Disk 2    Online         3725 GB      0 B
  Disk 3    Online         2794 GB      0 B        *
  Disk 8    Online           29 GB  3072 KB

DISKPART> select disk 8

Disk 8 is now the selected disk.

DISKPART> list part

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Primary             29 GB  4096 KB
Looks like it's Disk 8 Partition 1 on my system. Let's get it all before I panic.
dd if=\\?\Device\Harddisk8\Partition1 of=c:\temp\ZOMG.img bs=1M --size --progress
IF and OF are input file and output file, and I will do it for the whole size of the SD Card. It's likely overkill though as we'll see in a second.
This file ended up being totally massive and hard to work with. Remember I needed just the first 400ish megs? I'll chop off just that part.
dd if=ZOMG.img of=SmallerZOMG.img bs=1M count=400
What is this though? Remember, it's an image of a File System. It's just bytes in a file. It's not a WAV file or a THIS file or a THAT file. I mean, it is if we decide it is, but in fact, a way to think about it is that it's a mangled envelope that is dark when I peer inside it. We're gonna have to feel around and see if we can rebuild a sense of what the contents really are.
Importing Raw Bytes from an IMG into Audition or Audacity
Both Adobe Audition and Audacity are audio apps that have an "Import RAW Data" feature. However, I DO need to tell Audition how to interpret it. There's lots of WAV files out there. How many samples were there? 1 channel? 2 channel? 16 bit or 32 bit? Lots of questions.
Can I just import this 4 gig byte array of a file system and get something?
Looks like something. You can see that the first part there is likely the start of the partition table, file system headers, etc. before audio data shows up. Here's importing as 2 channel.
I can hear voices but they sound like chipmunks and aren't understandable. Something is "doubled." Sample rate? No, I double checked it.
Here's 1 channel raw data import even though I think it's two.
Now THIS is interesting. I can hear audio at normal speed of us talking (after the preamble) BUT it's only a syllable at a time, and then a quieter version of the same syllable repeats. I don't want to (read: can't really) reassemble a 30 min interview from syllables, right?
Remember when I said that the Zoom H6 records a two channel file with one channel per mic? Not really. It records ONE FILE PER CHANNEL. A whateverL.wav and a whateverR.wav. I totally forgot!
This "one channel" file above is actually the bytes as they were laid down on disk, right? It's actually two files written simultaneously, a few kilobytes at a time, L,R,L,R,L,R. And here I am telling my sound software to treat this "byte for byte file system dump" as one file. It's two that were made at the same time.
It's like the Brundlefly. How do I tease it apart? Well I can't treat the array as a raw file anymore, it's not. And I want (really don't have the energy yet) to write my own little app to effectively de-interlace this image. I also don't know if the segment size is perfectly reliable or if it varies as the Zoom recorded.
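If I did sit down and write that little de-interlacing app, a minimal sketch might look like the following, assuming the recorder strictly alternated fixed-size chunks between the two per-channel files (which, as noted, isn't guaranteed). The chunk size, the skip offset, and the output file names here are all assumptions and placeholders, not values recovered from the image.

# Minimal sketch of the de-interlacing idea: split a raw dump into two
# streams, assuming the recorder alternated fixed-size chunks between the
# two per-channel files. CHUNK and SKIP are guesses, and every file name
# except the source image is a placeholder.
CHUNK = 32 * 1024            # assumed write-segment size per channel
SKIP = 12 * 1024 * 1024      # assumed: jump past the FAT structures to where audio data lives

with open("SmallerZOMG.img", "rb") as img, \
     open("track1_raw.bin", "wb") as t1, \
     open("track2_raw.bin", "wb") as t2:
    img.seek(SKIP)
    outputs = (t1, t2)
    i = 0
    while True:
        chunk = img.read(CHUNK)
        if not chunk:
            break
        outputs[i % 2].write(chunk)   # L, R, L, R ... as laid down on disk
        i += 1

Even if the alternation assumption holds, the two output files would still contain WAV headers and stray non-audio clusters, so they'd still need the same "Import RAW Data" treatment afterwards.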
NOTE: Pete Brown has written about RIFF/WAV files from Sound Devices recorders having an incorrect FAT32 bit set. This isn't that, but it's in the same family and is worth noting if you ever have an issue with a Broadcast Wave File getting corrupted or looking encrypted.
While helping me work this issue, Pete Brown tweeted a hexdump of the Directory Table so you can see the ZOOM0001, ZOOM0002, etc. directories there in the image.
Let me move into Ubuntu on my Windows machine running WSL. Here I can run fdisk and get some sense of what this image of the bad SD Card is. Remember also that I carved off just the first 400 megs, but this IMG file thinks it's a 32gig drive, because it is. It's just that it's been aggressively truncated.
$ fdisk -u -l SmallerZOMG.img
Disk SmallerZOMG.img: 400 MiB, 419430400 bytes, 819200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device            Boot Start      End  Sectors  Size Id Type
SmallerZOMG.img1        8192 61157375 61149184 29.2G  c W95 FAT32 (LBA)
Maybe I can "mount" this IMG? I make a folder on Ubuntu/WSL2 called ~/recovery. Yikes, ok there's nothing there. I can take the sector size 512 times the Start block of 8192 and use that as the offset.
sudo mount -o loop,offset=4194304 SmallerShit.img recover/
$ cd recover/
$ ll
total 68
drwxr-xr-x 4 root root 32768 Dec 31  1969 ./
Ali Mosajjal thinks perhaps "they re-wrote the FAT32 structure definition and didn't use a standard library and made a mistake," and Leandro Pereira postulates "what could happen is that the LFN (long file name) checksum is invalid and they didn't bother filling in the 8.3 filename... so that complying implementations of VFAT tries to look at the fallback 8.3 name, it's all spaces and figures out "it's all padding, move along."
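For context on Leandro's theory: every VFAT long-file-name entry stores a one-byte checksum of its companion 8.3 short name, so a driver that finds an all-spaces short name (or a checksum that doesn't match) has good reason to bail. The checksum algorithm itself is the standard one from the FAT spec; here's a tiny illustration in Python, with "ZOOM0001" used purely as an example name from this card.

def lfn_checksum(short_name_11):
    # Checksum that VFAT long-name entries store for their companion 8.3 entry.
    # short_name_11 is the 11-byte, space-padded name field (8 name bytes plus
    # 3 extension bytes, no dot).
    s = 0
    for c in short_name_11:
        s = (((s & 1) << 7) + (s >> 1) + c) & 0xFF
    return s

# An LFN entry written for the real short name won't match one computed over
# eleven spaces, which is roughly the mismatch Leandro is describing.
print(hex(lfn_checksum(b"ZOOM0001   ")))   # checksum for a real 8.3 name
print(hex(lfn_checksum(b" " * 11)))        # checksum for an all-blank 8.3 name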
Ali suggested running dosfsck on the mounted image and you can see again that the files are there, but there's like 3 root entries? Note I've done a cat of /proc/mounts to see the loop that my img is mounted on so I can refer to it in the dosfsck command.
$ sudo dosfsck -w -r -l -a -v -t /dev/loop3
fsck.fat 4.1 (2017-01-24)
Checking we can access the last sector of the filesystem
Boot sector contents:
System ID " "
Media byte 0xf8 (hard disk)
512 bytes per logical sector
32768 bytes per cluster
1458 reserved sectors
First FAT starts at byte 746496 (sector 1458)
2 FATs, 32 bit entries
3821056 bytes per FAT (= 7463 sectors)
Root directory start at cluster 2 (arbitrary size)
Data area starts at byte 8388608 (sector 16384)
955200 data clusters (31299993600 bytes)
63 sectors/track, 255 heads
8192 hidden sectors
61149184 sectors total
Checking file /
Checking file /
Checking file /
Checking file /System Volume Information (SYSTEM~1)
Checking file /.
Checking file /..
Checking file /ZOOM0001
Checking file /ZOOM0002
Checking file /ZOOM0003
Checking file /ZOOM0001/.
Checking file /ZOOM0001/..
Checking file /ZOOM0001/ZOOM0001.hprj (ZOOM00~1.HPR)
Checking file /ZOOM0001/ZOOM0001_LR.WAV (ZOOM00~1.WAV)
Checking file /ZOOM0002/.
Checking file /ZOOM0002/..
Checking file /ZOOM0002/ZOOM0002.hprj (ZOOM00~1.HPR)
Checking file /ZOOM0002/ZOOM0002_Tr1.WAV (ZOOM00~1.WAV)
Checking file /ZOOM0002/ZOOM0002_Tr2.WAV (ZOOM00~2.WAV)
Checking file /ZOOM0003/.
Checking file /ZOOM0003/..
Checking file /ZOOM0003/ZOOM0003.hprj (ZOOM00~1.HPR)
Checking file /ZOOM0003/ZOOM0003_Tr1.WAV (ZOOM00~1.WAV)
Checking file /ZOOM0003/ZOOM0003_Tr2.WAV (ZOOM00~2.WAV)
Checking file /System Volume Information/.
Checking file /System Volume Information/..
Checking file /System Volume Information/WPSettings.dat (WPSETT~1.DAT)
Checking file /System Volume Information/ClientRecoveryPasswordRotation (CLIENT~1)
Checking file /System Volume Information/IndexerVolumeGuid (INDEXE~1)
Checking file /System Volume Information/AadRecoveryPasswordDelete (AADREC~1)
Checking file /System Volume Information/ClientRecoveryPasswordRotation/.
Checking file /System Volume Information/ClientRecoveryPasswordRotation/..
Checking file /System Volume Information/AadRecoveryPasswordDelete/.
Checking file /System Volume Information/AadRecoveryPasswordDelete/..
Checking for bad clusters.
We can see  them, but can't get at them with the vfat file system driver on Linux or with Windows.
The DUMP.exe util that's part of mtools for Windows is amazing but I'm unable to figure out what is wrong in the FAT32 file table. I can run minfo on the Linux command line, telling it to skip 8192 sectors in with the @@offset modifier:
$ minfo -i ZOMG.img@@8192S
device information:
===================
filename="ZOMG.img"
sectors per track: 63
heads: 255
cylinders: 3807

mformat command line: mformat -T 61149184 -i ZOMG.img@@8192S -h 255 -s 63 -H 8192 ::

bootsector information
======================
banner:" "
sector size: 512 bytes
cluster size: 64 sectors
reserved (boot) sectors: 1458
fats: 2
max available root directory slots: 0
small size: 0 sectors
media descriptor byte: 0xf8
sectors per fat: 0
sectors per track: 63
heads: 255
hidden sectors: 8192
big size: 61149184 sectors
physical drive id: 0x80
reserved=0x0
dos4=0x29
serial number: 04030201
disk label=" "
disk type="FAT32 "
Big fatlen=7463
Extended flags=0x0000
FS version=0x0000
rootCluster=2
infoSector location=1
backup boot sector=6

Infosector:
signature=0x41615252
free clusters=944648
last allocated cluster=10551
Ok, now we've found yet ANOTHER way to mount this corrupted file system. With mtools we'll use mdir to list the root directory. Note there is something wrong enough that I have to add mtools_skip_check=1 to ~/.mtoolsrc and continue.
$ mdir -i ZOMG.img@@8192S ::
Total number of sectors (61149184) not a multiple of sectors per track (63)!
Add mtools_skip_check=1 to your .mtoolsrc file to skip this test

$ pico ~/.mtoolsrc

$ mdir -i ZOMG.img@@8192S ::
 Volume in drive : is
 Volume Serial Number is 0403-0201
Directory for ::/

        <DIR>     2020-03-12  12:29
        1 file                    0 bytes
                  30 954 225 664 bytes free
Same result. I can run mdu and see just a few folders. Note the ZOOMxxxx ones are missing here.
$ mdu -i ZOMG.img@@8192S ::
::/System Volume Information/ClientRecoveryPasswordRotation 1
::/System Volume Information/AadRecoveryPasswordDelete 1
::/System Volume Information 5
::/ 6
Now, ideally I want to achieve two things here.
Know WHY it's broken and exactly WHAT is wrong.
There's a nameless root directory here and I lack the patience and skill to manually hexdump and patch it.
Be able to copy the files out "normally" by mounting the IMG and, well, copying them out.
UPDATE #1 - I'm back after a few minutes of thinking again.
If I use mmls from Sleuthkit, I can see this.
$ mmls HolyShit.img
DOS Partition Table
Offset Sector: 0
Units are in 512-byte sectors

      Slot      Start        End          Length       Description
000:  Meta      0000000000   0000000000   0000000001   Primary Table (#0)
001:  -------   0000000000   0000008191   0000008192   Unallocated
002:  000:000   0000008192   0061157375   0061149184   Win95 FAT32 (0x0c)
If I do the 512*8192 offset again and visualize the FAT32 table in Hexdump/xxd like this:
xxd -seek 4194304 ZOMG.img | more
00400000: eb00 9020 2020 2020 2020 2000 0240 b205  ... ..@..
00400010: 0200 0000 00f8 0000 3f00 ff00 0020 0000  ........?.... ..
00400020: 0010 a503 271d 0000 0000 0000 0200 0000  ....'...........
00400030: 0100 0600 0000 0000 0000 0000 0000 0000  ................
00400040: 8000 2901 0203 0420 2020 2020 2020 2020  ..)....
00400050: 2020 4641 5433 3220 2020 0000 0000 0000    FAT32   ......
00400060: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00400070: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00400080: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00400090: 0000 0000 0000 0000 0000 0000 0000 0000  ................
004000a0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
004000b0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
004000c0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
I can see I seek'ed to the right spot, as the string FAT32 is just hanging out. Maybe I can clip out this table and visualize it in a better graphical tool.
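Rather than eyeballing the hex, the interesting BPB fields can be pulled straight out of that boot sector with a few struct reads. This is just a sanity-check sketch using the standard FAT32 field offsets; pointed at the right 512 bytes, it should echo the same numbers minfo and dosfsck reported earlier (512-byte sectors, 64 sectors per cluster, 1458 reserved sectors, 2 FATs of 7463 sectors, root directory at cluster 2).

import struct

OFFSET = 512 * 8192   # start of the FAT32 partition inside ZOMG.img (4194304)

with open("ZOMG.img", "rb") as f:
    f.seek(OFFSET)
    bs = f.read(512)  # the volume boot sector / BPB

bytes_per_sector    = struct.unpack_from("<H", bs, 11)[0]
sectors_per_cluster = bs[13]
reserved_sectors    = struct.unpack_from("<H", bs, 14)[0]
num_fats            = bs[16]
sectors_per_fat     = struct.unpack_from("<I", bs, 36)[0]
root_cluster        = struct.unpack_from("<I", bs, 44)[0]

print(bytes_per_sector, sectors_per_cluster, reserved_sectors,
      num_fats, sectors_per_fat, root_cluster)
# Expect 512, 64, 1458, 2, 7463, 2 if the offset is right.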
I could grab a reasonable (read: arbitrary) chunk from this offset and put it in a very small manageable file:
dd if=ZOMG.img ibs=1 skip=4194304 count=64000 > another.img
And then load it in dump.exe on Windows, which is really a heck of a tool. It seems to think there are multiple FAT Root Entries (which might be why I'm seeing this weird ghost root). Note the "should be" parts as well.
FAT Root Entry (non LFN) (0x00000000)
Name: ···
Extension:
Attribute: 0x00
FAT12:reserved: 02 40 B2 05 02 00 00 00 00 F8
FAT32:reserved: 02
FAT32:creation 10th: 0x40
FAT32:creation time: 0x05B2
FAT32:creation date: 0x0002
FAT32:last accessed: 0x0000
FAT32:hi word start cluster: 0xF800
Time: 0x0000 (00:00:00) (hms)
Date: 0x003F (1980/01/31) (ymd)
Starting Cluster: 0x00FF (0xF80000FF)
File Size: 8192

FAT Root Entry (non LFN) (0x00000020)
Name: ····'···
Extension: ···
Attribute: 0x00
FAT12:reserved: 02 00 00 00 01 00 06 00 00 00
FAT32:reserved: 02
FAT32:creation 10th: 0x00
FAT32:creation time: 0x0000
FAT32:creation date: 0x0001
FAT32:last accessed: 0x0006
FAT32:hi word start cluster: 0x0000
Time: 0x0000 (00:00:00) (hms)
Date: 0x0000 (1980/00/00) (ymd)
Starting Cluster: 0x0000 (0x00000000) <--- should be 0x0002 or higher.
File Size: 0

FAT Root Entry (non LFN) (0x00000040)
Name: ··)····
Extension:
Attribute: 0x20 Archive
FAT12:reserved: 20 20 20 20 20 20 46 41 54 33
FAT32:reserved: 20
FAT32:creation 10th: 0x20
FAT32:creation time: 0x2020
FAT32:creation date: 0x2020
FAT32:last accessed: 0x4146
FAT32:hi word start cluster: 0x3354
Time: 0x2032 (04:01:18) (hms)
Date: 0x2020 (1996/01/00) (ymd)
Starting Cluster: 0x0000 (0x33540000)
File Size: 0

FAT Root Entry (non LFN) (0x00000060)
Name: ········
Extension: ···
Attribute: 0x00
FAT12:reserved: 00 00 00 00 00 00 00 00 00 00
FAT32:reserved: 00
FAT32:creation 10th: 0x00
FAT32:creation time: 0x0000
FAT32:creation date: 0x0000
FAT32:last accessed: 0x0000
FAT32:hi word start cluster: 0x0000
Time: 0x0000 (00:00:00) (hms)
Date: 0x0000 (1980/00/00) (ymd)
Starting Cluster: 0x0000 (0x00000000) <--- should be 0x0002 or higher.
File Size: 0

FAT Root Entry (non LFN) (0x00000080)
Name: ········
Extension: ···
Attribute: 0x00
FAT12:reserved: 00 00 00 00 00 00 00 00 00 00
FAT32:reserved: 00
FAT32:creation 10th: 0x00
FAT32:creation time: 0x0000
FAT32:creation date: 0x0000
FAT32:last accessed: 0x0000
FAT32:hi word start cluster: 0x0000
Time: 0x0000 (00:00:00) (hms)
Date: 0x0000 (1980/00/00) (ymd)
Starting Cluster: 0x0000 (0x00000000) <--- should be 0x0002 or higher.
File Size: 0

FAT32 Info Block (0x00000000)
sig: 0x209000EB (' ···') [1] <--- should be 0x41615252.
reserved:
00000004 20 20 20 20 20 20 20 00-02 40 B2 05 02 00 00 00 .........@......
00000014 00 F8 00 00 3F 00 FF 00-00 20 00 00 00 10 A5 03 ....?...........
00000024 27 1D 00 00 00 00 00 00-02 00 00 00 01 00 06 00 '...............
00000034 00 00 00 00 00 00 00 00-00 00 00 00 80 00 29 01 ..............).
00000044 02 03 04 20 20 20 20 20-20 20 20 20 20 20 46 41 ..............FA
00000054 54 33 32 20 20 20 00 00-00 00 00 00 00 00 00 00 T32.............
The most confusing part is that the FAT32 signature - the magic number is always supposed to be 0x41615252. Google that. You'll see. It's a hardcoded signature but maybe I've got the wrong offset and at that point all bets are off.
So do I have that? I can search a binary file for Hex values with a combo of xxd and grep. Note the byte swap:
xxd another.img | grep "6141"
00000200: 5252 6141 0000 0000 0000 0000 0000 0000  RRaA............
00000e00: 5252 6141 0000 0000 0000 0000 0000 0000  RRaA............
Just before this is 55 AA, the two-byte signature that closes out the boot sector.
Now do I have two FAT32 info blocks and three Root Entries? I'm lost. I want to dump the directory entries.
What does fsstat say about the Root Directory?
File System Layout (in sectors)
Total Range: 0 - 61149183
* Reserved: 0 - 1457
** Boot Sector: 0
** FS Info Sector: 1
** Backup Boot Sector: 6
* FAT 0: 1458 - 8920
* FAT 1: 8921 - 16383
* Data Area: 16384 - 61149183
** Cluster Area: 16384 - 61149183
*** Root Directory: 16384 - 16447
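Those numbers line up with the boot sector values from earlier, and the arithmetic is worth doing once: the data area, and therefore cluster 2 (the root directory), starts right after the reserved sectors and the two FATs. A quick check using only values already reported above:

# Cross-check fsstat's layout against the values minfo/dosfsck reported:
# 1458 reserved sectors, two FATs of 7463 sectors each, 512-byte sectors,
# and a partition that starts 8192 sectors into the card.
reserved_sectors = 1458
num_fats = 2
sectors_per_fat = 7463
partition_offset_bytes = 8192 * 512

data_start_sector = reserved_sectors + num_fats * sectors_per_fat
print(data_start_sector)           # 16384, matching fsstat's Data Area / Root Directory start
print(data_start_sector * 512)     # 8388608, matching dosfsck's "Data area starts at byte"
print(partition_offset_bytes + data_start_sector * 512)   # the root directory's byte offset inside ZOMG.img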
I'll update this part as I learn more. I'm exhausted. Someone will likely read this and be like "you dork, seek HERE and there's the byte that's wrong in the file system. That LFN (long file name) has no short one," etc., and then I'll know.
UPDATE #2:
I skyped with Ali and we think we know what's up. He suggested I format the SD Card, record the same 3 shows (two test WAVs and one actual one) and then make an image of the GOOD disk to remove variables. Smart guy!
We then took the first 12 megs or so of the GOOD.img and the BAD.img and piped them through xxd into HEX, then used Visual Studio Code to diff them.
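The same question, where do the good and bad images first diverge, can also be answered programmatically instead of by eyeballing a wall of hex. Here's a rough equivalent of that diff in a few lines; the file names are placeholders for whatever the good and bad dumps were saved as, and it only looks at the first 12 MB like we did.

# Walk the first 12 MB of both images and report which 512-byte sectors differ.
SECTOR = 512
LIMIT = 12 * 1024 * 1024

with open("GOOD.img", "rb") as good, open("BAD.img", "rb") as bad:
    for offset in range(0, LIMIT, SECTOR):
        g, b = good.read(SECTOR), bad.read(SECTOR)
        if not g or not b:
            break                     # hit the end of one of the images
        if g != b:
            print(f"sector at byte offset {offset:#010x} differs")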
We can now visualize on the left what a good directory structure looks like and the right what a bad one looks like. Seems like I do have two recursive root directories with a space for the name.
Now if we wanted we could manually rewrite a complete new directory entry and assign our orphaned files to it.
That's what I would do if I was hired to recover data.
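For the curious, "manually rewrite a complete new directory entry" means hand-building a standard 32-byte FAT short-name entry and patching it into the root directory cluster. Below is a sketch of just the packing half. The name, attribute, starting cluster, and size are placeholders you'd fill in from the real orphaned ZOOM folders, and actually writing it back (and making sure the FAT chain agrees) is the part where you can make things much worse, so treat it as illustration only.

import struct

def short_dir_entry(name83, attr, first_cluster, size):
    # Pack a standard 32-byte FAT short-name directory entry.
    # name83 is the 11-byte, space-padded "8.3" name; timestamps are left zero.
    assert len(name83) == 11
    return struct.pack(
        "<11sBBBHHHHHHHI",
        name83,
        attr,                             # 0x10 = directory, 0x20 = archive
        0,                                # NT reserved byte
        0, 0, 0,                          # creation: tenths, time, date
        0,                                # last-access date
        (first_cluster >> 16) & 0xFFFF,   # high word of the starting cluster
        0, 0,                             # last-write time, date
        first_cluster & 0xFFFF,           # low word of the starting cluster
        size,                             # file size in bytes (0 for a directory)
    )

# Hypothetical repair: a fresh "ZOOM0001" directory entry. The starting cluster
# here is made up - the real one would come from the orphaned folder itself.
entry = short_dir_entry(b"ZOOM0001   ", 0x10, first_cluster=3, size=0)
print(len(entry), entry.hex())   # 32 bytes, ready to patch into the root directory cluster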
7zip all the things
Here's where it gets weird and it got so weird that both Pete Brown and I were like, WELL. THAT'S AMAZING.
On a whim I right-clicked the IMG file and opened it in 7zip and saw this.
See that directory there that's a nothing? A space? A something. It has no Short Name. It's an invalid entry but 7zip is cool with it. Let's go in. Watch the path and the \\. That's a path separator, nothing, and another path separator. That's not allowed or OK but again, 7zip is chill.
I dragged the files out and they're fine! The day is saved.
The moral? There are a few I can see.
Re-format the random SD cards you get from Amazon specifically on the device you're gonna use them.
FAT as a spec has a bunch of stuff that different "drivers" (Windows, VFAT, etc) may ignore or elide over or just not implement.
I've got 85% of the knowledge I need to spelunk something like this but that last 15% is a brick wall. I would need more patience and to read more about this.
Knowing how to do this is useful for any engineer. It's the equivalent of knowing how to drive a stick shift in an emergency even if you usually use Lyft.
I'm clearly not an expert but I do have a mental model that includes (but is not limited to) bytes on the physical media, the file system itself, file tables, directory tables, partition tables, and how they kinda work on Linux and Windows.
I clearly hit a wall as I know what I want to do but I'm not sure the next step.
There's a bad Directory Table Entry. I want to rename it and make sure it's complete and to spec.
7zip is amazing. Try it first for basically everything.
Ideally I'd be able to update this post with exactly what byte is wrong and how to fix it. Thanks to Ali, Pete, and Leandro for playing with me!
Your thoughts? (If you made it this far, the truncated IMG of the 32 gig SD is here (500 megs), but you might have to pad it out with zeros to make some tools like it.)
Oh, and listen to https://hanselminutes.com/ as the interview was great and it's coming soon!
© 2019 Scott Hanselman. All rights reserved.
We recently took a cruise (my first!) out of Venice. It was a lovely time! Venice, however, was the worst. Here’s how to avoid most of the horrible-ness we encountered.
Getting to Venice
The road to Venice, from my house near Munich, Germany, was paved with trials and tribulations. I’m not saying that you shouldn’t drive to Venice from Germany, but I am saying to be emotionally prepared for the unexpected. For example, your car could break down, causing you to arrive eight hours later than originally planned. You might miss this amazing Venetian food and wine tour that you had already paid for. You could almost, quite literally, miss your boat.
My advice? Be prepared for the worst to happen, and then just make the most of it! Also, take the train. Because if something happens to your car, it can make your already-expensive vacation turn into a budget-busting nightmare.
(For other options, check out my post on transportation around Europe!)
Because of this fiasco (there’s a good Italian word for ya!), we didn’t have much time in Venice at all. We missed an awesome wine and Venetian tapas tour I had already paid for. We didn’t get good Venetian food. Also, we only had a couple of hours to wander around and grab a street pizza before hopping on our cruise ship. So I can’t really tell you much of what to do in Venice, but I can definitely tell you a couple of things to definitely not do.
Things Not To Do in Venice
#1. Don’t Trust the Servers Implicitly
I don’t mean this in a mean or aggressive way at all, I’m just letting you know that your restaurant server is out to get you, and do not leave anything implicitly open. For example, do not say, “Oh I would like a bottle of the Merlot please.”
Instead, say, “I would like a bottle of the XX Merlot that costs 30 euros please.” Otherwise, they will pick the most expensive bottle of Merlot, not tell you, and you’ll get a huge surprise on your bill.
My Experience:
I ordered “a bottle of Cabernet Sauvignon”. Based on the bill that came later, it was not the cheap 22 euro bottle that I thought I had ordered. Be specific.
Subsequent research has shown this bottle to cost anywhere between $10 and $80. So I don’t know…
  #2. Don’t Trust the Servers’ Recommendations
This sounds like the same thing, but I promise you it’s not. If your server offers you a special meal that you will “just love”, ask where it is on the menu so that you can check the price. If it’s not on the menu, ask for the price. It’s not rude to ask for the price; they’re counting on you thinking it is, because they’re probably trying to screw you over.
My Experience:
The server offered us this delicious sounding fish, and a raw fish appetizer sampler that I couldn’t find on the menu. I tried to cancel the app after I ordered it, but he talked me out of it. The appetizer wasn’t good at all, and the fish was actually nice! They de-boned it for us, though, and left half the meat on the fish.
fish de-bone gif
#3. Check the Restaurant Ratings Before Sitting Down
But all the restaurants are basically the same, right?
No!
The food may be pretty similar, but the ethics may not be.
I waited until after our horrible dinner in Venice to check the TripAdvisor ratings for the restaurant we randomly picked, and, it turns out, they had horrible ratings. Not because the food was completely horrible, but because all of the servers tried to screw you over.
My Experience:
Everything on the menu was around 20 euros, so we expected the bill to be around 60-ish euros (with wine, appetizer, and fish). We got the bill. It said 98 euros. Boyfriend put down 100 euros, and we got up to leave.
The server came running back- “Sir, sir! This isn’t the correct amount! The bill isn’t 98 euros. That’s the price if you guys decided to split it. It’s 196 euros!”
Sure enough, in smaller font below the 98 was a 196. I handed boyfriend a 50 euro note, he threw on another 50, and we left.
At this point, after our super-duper horrible no good very bad day, we kind of thought it was hilarious. We had to get more cash out of the ATM before bed. Whatever.
Back at the hotel, I got on TripAdvisor to give them a bad rating, and most of the reviews were, “Tourist trap! Stay Away!”
I took this to remember the restaurant name after I realized my horrible mistake. Next time you find yourself in Venice, DO NOT EAT AT Ai do Fradei!
Things to Do in Venice
#1. Enjoy Something
Regardless of any nightmares that may have occurred on your trip to Venice, or any horrible overpriced meals you may encounter, don’t let it affect your attitude! You’re only there when you’re there, so you might as well take a goofy picture in front of the canals and enjoy yourself.
Me, being the world’s best tourist
#2. Get Your Boat Bus Ticket First Thing
You may think you won’t need a boat bus. Unless your hotel is super close to the train station, you’ll wish you were on a boat and not climbing stairs to another bridge every few hundred feet with your suitcase. Also, you can’t get tickets at every boat bus stop. You have to buy them at the main stop directly across from you when you exit the train station. Even if you don’t want the boat buses when you first get there, you probably will the next morning. Unless you want to pay for a boat taxi, which is fine. Just be sure to negotiate the price before you get into the boat.
In Summation
Don’t trust people. Ask for prices. Make sure that you still enjoy yourself, regardless of any outside circumstances. (This really applies to all traveling.)
The engine light that went off, just before we slowed down to 50 kph and couldn’t speed up. On the autobahn
Awesome-looking strawberry stand we passed on the way down, pre-car troubles
Beautiful views of Venice
The Venice town troubadour, live in the flesh
Me, being the world’s best tourist
Us, being all touristy and determined not to let negativity affect us!
Town troubadour strikes again
When we finally made it to our boat- what a beautiful sight!
Our faces when we dropped off our luggage and were getting on the boat at last!
How *Not* to Enjoy Venice We recently took a cruise (my first!) out of Venice. It was a lovely time! Venice, however, was the worst.
1 note · View note
heididahlsveen · 7 years
Text
Yesterday I left my pink home. I’ve had it for a while now, but yesterday I closed the door or teleported to another place.
My virtual life started in 2008. It was due to a program called “Facts on Saturday” here in Norway. That particular Saturday I watched it, and the program was a documentary about people who lived a virtual life; for example, it followed an American woman who left her marriage because she had found her great love in the virtual life. I remember I thought, “How is it possible to be so stupid? I have to try that!”
When I started my virtual life, when I became a resident of the metaverse, I knew I had to be true to what I consider my calling in life: to tell stories, with special focus on folk material. And being a mere citizen of a 3D world was not enough for me; I had to be an active citizen, I had to create. First I started making poses and tried to understand the magic of taking pictures. Somehow, I still tried to tell stories, but my competence in taking pictures was and is very limited, and I really did not have the patience to understand Photoshop; sometimes I have been lucky, usually not.
After a certain time in the pixel life, you become yourself; you feel like you need a home, where you can undress ^^ and invite friends. I did not want to rent, I wanted to buy, to own my land. So I bought a full region, a sim, from LL. Making a home was not my main purpose; making a folktale was. I wanted people to walk into a Norwegian folktale and create their own story. Now, I was not a creator, I was not a builder, I can rezz a prim and that is all – so I hired builders, among them Soror Nishi, to build the image of the folktale; my role was that of a storyteller, or maybe rather a curator. The first version opened in December 2009, and had an amazing number of visitors.
The interest in a place is short-lived; soon there is another place to visit and experience. I closed it to visitors to have it rebuilt. This time I hired the incredible hobbit Andrek Lowell to create the landscape of the story. The sim reopened in December 2010 – not as well visited this time as the first. Then I gave up; it became too expensive to own a full sim.
Since then I was lucky enough to participate in different art projects with other creative members of the metaverse. I learned a lot, I wrote…. And then it stopped. In the last years, the only thing that has kept me coming back, logging in every morning, has been the virtual animals.
My dedication to virtual animals has been beyond the ordinary, I would say. I started with bunnies created by Ozimals and continued with the fantasy creatures called Meeroos. A virtual animal or AI animal is an animal that can reproduce itself, and often there are some structures related to this, whether you want to breed a particular gen or mutate or create an elite. In many ways, virtual animal breeding has similarities to real life, except that it goes much faster in the virtual world and the animal, in principle, never dies. But you have to make sure to feed the animal, otherwise you will lose it. I did this from 2009 to 2012.
In 2014, I went back to Ozimals. Which did not happen in silence at all, as one of the creators/owners contacted me:
Aeron Constantine: thanks for the Ozimals support onomatopoetikon, I have to know — what does your name mean? heh
onomatopoetikon (mimesis.monday): Oh sorry did not see this, means sound of words like: grrr, iik etc. Old customer found her way back.
Aeron Constantine: /me grins, I remember you
Aeron Constantine: well… one of you
Aeron Constantine: well welcome back to Ozimals, glad to have your support after all these years
Aeron Constantine: (This is Malk
onomatopoetikon (mimesis.monday): thank you, yes. Lovely to be back and figure things out.
Aeron Constantine: /me smiles, a few changes a long the way – nothing terribly crazy I don’t think
Aeron Constantine: lots of fun new things though
onomatopoetikon (mimesis.monday): Wonderful
Aeron Constantine: let me know if you need any help getting reacquainted
Aeron Constantine: if you haven’t seen it yet, we do have a helpful trait list on the wiki now
onomatopoetikon (mimesis.monday): thank you, so far all fine, the basic is the same.
onomatopoetikon (mimesis.monday): yes, I have looked at it
Aeron Constantine: http://wiki.ozimals.com/index.php?title=Trait_Timeline
Aeron Constantine: oh great 🙂
onomatopoetikon (mimesis.monday): Thank you
Aeron Constantine: np!
Aeron Constantine: have a great day, sorry to interrupt but welcome back :3
onomatopoetikon (mimesis.monday): No interuption and thank you the same
Aeron Constantine: of course, were there any bunnies you’ve lost or have gone back to Oz that you can’t live without? 🙂
onomatopoetikon (mimesis.monday): Well, now I would like to see what I can breed, and luckily I found one mini rex chocolate I though would be hard to get.
onomatopoetikon (mimesis.monday): found on the market
Aeron Constantine: it’s getting harder and harder to locate older original furs
onomatopoetikon (mimesis.monday): Yes, I would think so.
Aeron Constantine: we’ve retired A LOT
onomatopoetikon (mimesis.monday): I saw on the list
Aeron Constantine: check out Mini Rex Too Chocolate
Aeron Constantine: we recreated from scratch 3 of the original breeds for a LE
onomatopoetikon (mimesis.monday): I have not seen that
onomatopoetikon (mimesis.monday): oh how lovely
Aeron Constantine: want to be surprised or a link to a photo? Heh
onomatopoetikon (mimesis.monday): yes, would love to see a photo
Aeron Constantine: https://www.flickr.com/search/groups/?w=1184616%40N20&m=pool&q=Mini%20Rex%20Too%20Chocolate
Aeron Constantine: have 3 :3
onomatopoetikon (mimesis.monday): thank you
Aeron Constantine: no problem
onomatopoetikon (mimesis.monday): beautiful
Then suddenly, 14 days ago, Ozimals disappeared. And now there is no reason to log in anymore. Maybe it is for the best.
Bye, bye my pink home Yesterday I left my pink home. I've had it for a while now, but yesterday I closed the door or teleported to another place.
3 notes · View notes
kotorisaka · 8 years
Text
170110 Watanabe Risa Model Press Interview Translation
Looking back at 2016
―How was 2016?
A satisfying year. I experienced a lot of things. It was seriously deep. The most difficult thing was the drama (Tokuyama Daigoro wo Dare ga Koroshitaka?). We were shooting indoors all the time, so I didn’t know what day it was or what the weather was like outside. The same routine went on and on for several days; it was emotionally demanding.
―Conversely, what would be the most enjoyable thing?
It’s hard to choose since everything was fun, but it has to be the one-man live held on Christmas day recently. Those two days were delightful since I like doing live shows as well as moving my body. TAKAHIRO, our choreographer, gave us detailed revisions during rehearsal and lesson time, which had us repeating the dance moves over and over again. We exchanged opinions among fellow members, too. I think that was really a show that everyone worked hard for.
The climacteric haircut
―In the 2nd single “Sekai ni wa Ai Shikanai” you were chosen as the front member and from July to December, also appeared in women’s magazines such as LARME one after another. At the Girls Award 2016 AUTUMN / WINTER in October, you also walked on the runway as a model. I think that having a short haircut became a major turning point.
Well, I think chopping 20 centimeters off my hair was huge. I couldn’t imagine myself with short hair then, but hearing what people around me thought was really nice. It has grown a lot now, though. I am enjoying various arrangements, not long, but medium length around the shoulders.
Walking the Girls Award runway was a valuable experience for me. Aside from being taught by the mentor about small details like speed, attitude, and eye gaze while catwalking, I also practiced hard for the day. Actually I was nervous, but I think that I will be able to go for it again someday.
―Showing up in fashion shows and women’s magazines is a good opportunity to be recognized by people of the same sex.
There are even girls who say “I like seeing you in fashion magazines” at the handshake meetings, and it makes me happy. Therefore, I want to do better with each appearance in the future. I want female fans to see the gap that Keyakizaka46 has, since we perform a lot of cool songs. Everyone usually speaks in a bright manner; I think they are even noisy, but when they get on stage, their facial expressions immediately change.
Look at me here!
―Personally, is there any point about you that needs to be paid attention to?
I’m often mistaken for having a cool character, but I’m actually quite the energetic type. My excitement level is also high (laughs). Especially when I’m with the members! Recently, in various settings, I think I may be able to show more of my usual self little by little.
―Do you mean that the experiences you put up with have also changed your inner self?
Even if I’m still shy now, I think I have changed a little. In previous interviews, too, I would start to open up only after remaining silent for a while, but now I have become able to convey my thoughts immediately. I think I’ve grown up, although there are still a lot of times when I get nervous. Before a live and whatnot, the members do things like hitting their backs against the stage’s wing curtains or shouting out loud to relieve the tension (laughs). It’s actually pretty relaxing.
―Loudly? I can hardly imagine it (laughs).
I shout things like “Uwaaaah!” with my biggest voice (laughs). Since “Overture” is playing while I’m yelling, it can’t be heard from afar. No matter how loud it is.
2017 resolutions
―Please tell us your aspiration for 2017.
I’d like to do a tour as a group. I’d like to have more places to go and more people knowing that Keyakizaka46 exists, because I love concerts. As for myself, I am trying to voice my opinions more and to assert myself a bit more. I’m very interested in modeling work since I also like fashion. I believe there will be plenty of hardships, but I want to progressively challenge myself. Keyakizaka46 is really strengthened by its members; everyone has a different personality. I should take in all the good points and also lend my strength if I can. I will do better than last year in 2017!
The secret to fulfilling a dream
―Please give a message to someone out there who is trying to make her dream come true.
Do research. It’s important to find a method that suits you in order to achieve a goal. In addition, I think that it would be better to say “I want to do this” rather than “I want to be like this”.
―What is your dream?
Dream? Is anything fine? …I haven’t decided where, but I want to live in another country (laughs). I have never been abroad in my life… (smiles bitterly). Going on an overseas business trip would be okay, but I want my first time to be private. Moreover, I’ve decided it will be Hawaii (laughs). My career dream is to have a concert held at a big venue.
[I omitted the intro and outro of the original article, by the way.]
7 notes · View notes
Text
Manage Quotes
Official Website: Manage Quotes
• A busy person is usually the most efficient because they know how to manage their time. That’s something I learned through dancing all through school and all throughout my life. – Lindsay Arnold • A lot of bands are going out and playing for nothing. A lot of bands will go out and get paid, but the gas tank will eat up their paycheck. When they manage to sell a t-shirt or two, there is a little bit of leftover money there so that they don’t have to have McDonalds that day. They can actually eat something decent with possibly a bit of cash leftover. It’s a huge part of the business now. – Matt Snell • A novel may take anywhere from two to five years to write and, in the end, you might manage a couple of thousand dollars on it, no more. – Mordecai Richler • A very strong player can manage and can just know how to manage a thousand positions. I get it; it’s a very arbitrary number. So then you have the world champion who could do more. But, again, any increase in numbers creates, sort of, a new level of playing. And then you go to the very top, and the difference is so minimal, but it does exist. So even a few players who never became world champion, like Vassily Ivanchuk, for instance, I think they belong to the same category. – Garry Kasparov • Actually, I don’t get to do it (watch 5 or so news shows) every day, but I manage to do it at least 5 times a week. And the rest of the time I’m doing interviews. I do an amazing amount of interviews. – Frank Zappa • AI’s ability to recognize visual categories and images is now pretty close to what human beings can manage, and probably better than a lot of people’s, actually. AI can have more knowledge of detailed categories, like animals and so on. – Stuart J. Russell • All you really have when you’re acting is the confidence and your ability to manage and tell a story by creating a character. – Billy Crudup • Almost all human who can form a sentence will eventually let you in on the fact that their lives are very difficult and sometimes very hard to manage. – Henry Rollins • And a united Europe will also manage to send hundreds of thousands of migrants, who don’t have the right to asylum, back to their homelands. Though that, given the number of flights necessary, would be of a scale reminiscent of the Berlin Airlift. – Paolo Gentiloni • And one of the things I find most moving is the way people with infirmities manage to embrace Life, and from the cool flowers by the wayside reach conclusions about the vast splendour of its great gardens. They can, if their souls’ strings are finely tuned, arrive with much less effort at the feeling of eternity; for everything we do, they may dream. And precisely where our deeds end, theirs begin to bear fruit. – Rainer Maria Rilke • Architects in urban planning are talking about this but they’re not talking about it yet I don’t think at that level that [Buckminster] Fuller is talking about when he talked about putting a dome over Manhattan, which is to say an attempt at integrating all of these different technologies in a way that makes for a city that, without having an actual dome, thermodynamically manages the heat flow for that urban environment and therefore makes it so that it is a highly efficient machine for a living or a dwelling machine as he would have preferred in terms of thermodynamically optimizing it. – Jonathon Keats • Are you an action-oriented, take-charge person interested in exciting new challenges? 
As director of a major public-sector organization, you will manage a large armed division and interface with other senior executives in a team-oriented, multinational initiative in the global marketplace. Successful candidate will have above-average oral-presentation skills – Winston Churchill
jQuery(document).ready(function($) var data = action: 'polyxgo_products_search', type: 'Product', keywords: 'Manage', orderby: 'rand', order: 'DESC', template: '1', limit: '68', columns: '4', viewall:'Shop All', ; jQuery.post(spyr_params.ajaxurl,data, function(response) var obj = jQuery.parseJSON(response); jQuery('#thelovesof_manage').html(obj); jQuery('#thelovesof_manage img.swiper-lazy:not(.swiper-lazy-loaded)' ).each(function () var img = jQuery(this); img.attr("src",img.data('src')); img.addClass( 'swiper-lazy-loaded' ); img.removeAttr('data-src'); ); ); ); • Basically, managing is about influencing action. Managing is about helping organizations and units to get things done, which means action. Sometimes, managers manage actions directly. They fight fires. They manage projects. They negotiate contracts. – Henry Mintzberg • But one thing that we have done in the last four years is we have really put pressure on the leadership of this organization [Al Qaeda]. We have killed a significant number of leaders. We’ve captured others. Those that remain have to look over their shoulders, they have to be on the run. So that even if we don’t manage to kill or capture them all within four years, what we do do is put the kind of pressure on them that makes them focus on their own skins, as opposed to carrying out attacks. – Michael Chertoff • By far the hardest decision I’ve had to manage [was about my health]. Because I had 51 years of doing it wrong. – John C. Maxwell • By raising tall trees for windbreaks, citrus underneath, and a green manure cover down on the surface, I have found a way to take it easy and let the orchard manage itself! – Masanobu Fukuoka • Capitalism is the only engine credible enough to generate mass wealth. I think it’s imperfect, but we’re stuck with it. And thank God we have that in the toolbox. But if you don’t manage it in some way that incorporates all of society, if everybody’s not benefiting on some level and you don’t have a sense of shared purpose, national purpose, then it’s just a pyramid scheme. – David Simon • CEOs are no different than the guy in the mailroom. They all have to learn how to manage better the risk created by our increasingly risk-shifting world. – Lewis Schiff • Certainly, if you can’t manage your game, you can’t play tournament golf. You continually have to ask yourself what club to play, where to aim it, whether to accept a safe par or to try to go for a birdie. You can’t play every hole the same way. I never could. – Ben Hogan • Checklists are really helpful ways to remind people around how to manage complicated tasks. – Scott D. Anthony • Deal with just the basic fact: we will never have enough money for lawyers for poor people. So one of our major initiatives has been to develop new technologies that can help people without a lawyer navigate the legal system, and help sort the cases that really need to have a lawyer from those where an individual with some help online, may be able to manage by him or herself. – Martha Minow • Dictatorial regimes often manage to keep themselves in power because they are recognized by foreigners as representing the state and its people, and therefore as entitled to sell the country’s natural resources and to borrow money in its people’s name. These privileges conferred by foreigners keep autocrats in power despite the fact that they were not elected and do not rule in the interest of the population. 
– Thomas Pogge • Donald Trump has stated that his three older children will manage his business once he enters office. – Rachel Martin • Donald Trump is a – the owner of a lot of real estate that he manages, he may well pay no income taxes. We know for a fact that he didn’t pay any income taxes in 1978, 1979, 1984, 1992 and 1994. We know because of the reports of the New Jersey Casino Control Commission. We don’t know about any year after that. – Hillary Clinton • Donald Trump manages to personalize everything. He brings chaos. He will not admit that he’s ever made a mistake, that he’s ever been wrong. – Mark Shields • Drug addiction is an incredibly difficult challenge to manage on one’s own. When I think of all the stories I’ve heard from people, the common denominator is that they all were ultimately able to find somebody who was willing to support them. Maybe it was someone they knew, like a parent or a sibling or a friend; other times it was a treatment center with a compassionate staff who didn’t give up on them. That made all the difference. – Vivek Murthy • Earning a lot of money is not the key to prosperity. How you handle it is. – Dave Ramsey • Egypt’s priorities in fact are all related to the environment: food, water, health, energy, employment and education. Egypt is facing some very serious environmental challenges. The country has limited natural resources and has to decide how to manage these to meet the needs of a growing population. – Mindy Baha El Din • Either you run the day or the day runs you. – Jim Rohn • Every time I’ve gone to Brazil I’ve gotten sick upon return. You know, it’s just a different situation there. And I take every precaution – eating cooked foods and staying away from tap water, brushing my teeth with bottled water – and yet I still manage to get sick. So I’m just going to stay on point, bring my probiotics. – Kerri Walsh • Everybody wants to manage me; management is a touchy situation. – Boi-1da • Everything considered, a determined soul will always manage. – Albert Camus • For me, what was important was to record everything I saw around me, and to do this as methodically as possible. In these circumstances a good photograph is a picture that comes as close as possible to reality. But the camera never manages to record what your eyes see, or what you feel at the moment. The camera always creates a new reality. – Alfredo Jaar • For those of us who worry more about working people than about windfall profits for oil companies, it may net out. A better question is: what does it do to our economy if we manage to overheat the earth? This summer’s drought provides a small taste. – Bill McKibben • Freedom is the slogan which speaks to the ears of people who feel strong enough to manage on their own using their own resources, who can do without dependency because they can do without others caring for them. – Zygmunt Bauman • Generally I still believe that Lewis [Hamilton] is the best champion that we have had in a long, long time. He manages to get to all different walks of life: red carpet, fashion business, and music – you name it. – Bernie Ecclestone • Good design successfully manages the tensions between user needs, technology feasibility, and business viability. – Tim Brown • Google has already tested robot cars in San Francisco. If they can navigate San Francisco, they can probably manage just about anywhere. 
– Norman Foster • Harvard has something that manages, I think, to provide a lot of options for students, but still fairly prescriptive about the kinds of subjects that the courses ought to cover. – Louis Menand • Having inborn capabilities doesn’t matter. Whether you can manage them or not, that’s what determines the victory or defeat. – Hong Jin-joo • History reports that the men who can manage men manage the men who can manage only things, and the men who can manage money manage all. – Will Durant • However, we need to participate and manage skillfully, helpfully, and harmoniously, for a better world, family and society to be possible. So everybody’s spiritual by nature I believe, not that they necessarily have to be religious. Everybody wants, or cares about, and has values even if they don’t talk about them all the time explicitly, like some noisy preachers do with their foghorn voices and dogmatic views. – Surya Das • Humans are really interesting. We’re so clever, what we do with our brain. How we manage to con ourselves into thinking all sorts of things is really fascinating. By the same token, if we could just convince ourselves of things that would gather us together and powerfully turn things around for the good, that would be awesome. It’s doubtful because we’re such a fear-based species. – Thandie Newton • I always tried to manage my money smart. – Rakim • I am inspired by working moms. Mothers who somehow balance the demands of their many lives – professional, familial, personal, and interior – and still manage to make time to have fun and invest in themselves! This is a huge challenge that I look forward to taking on. – Daphne Oz • I believe in the not-too-distant future, people are going to learn to trust their information to the Net more than they now do, and be able to essentially manage very large amounts and perhaps their whole lifetime of information in the Net with the notion that they can access it securely and privately for as long as they want, and that it will persist over all the evolution and technical changes. – Robert E. Kahn • I can’t manage without homeopathy. In fact, I never go anywhere without homeopathic remedies. – Paul McCartney • I care deeply about Democratic party and our agenda and making sure that we can continue to build on President [Barack] Obama`s legacy. So any suggestion that I am doing anything other than manage this primary impartially and neutrally is ludicrous. – Hillary Clinton • I continued blogging, but between illness and deadlines, did not manage to blog nearly as much as last year. I’m hoping to do better in 2016. – Justine Larbalestier • I didn’t have to do too much “research” or acting to play this guy. (laughs) It is actually very difficult to manage all the time. The Community schedule is crushing and it kills me because I don’t get to be with my family as much as I’d like. – Joel McHale • I do try to be of some use in the world. I sometimes do volunteer work with kids, and manage to help some people a little, but really making a significant difference can be hard. – John Shirley • I don’t have a lot of time for managing [my businesses], so I put a lot of trust in people I hire to manage my businesses. I can’t necessarily attend to [the businesses] while I’m in season. We swap ideas on how we can improve and deliver a better product. – Kamerion Wimbley • I don’t have too many pests. My concept is this: I manage myself, and there’s nothing wrong with people having managers. 
– Vickie Winans • I don’t think she ever had a single initiative at the United Nations that was not previously [vetted] by the people at the State Department, approved of, and authorized. She did manage to get around the world an awful lot, and find other parts of her vast slum project that needed repair. But I don’t think that that was the main point. The main point was that she, after all, connoted Franklin Roosevelt, who by then was long dead, and had a certain prestige and power on that account. – William A. Rusher • I had a horrible life habit that I had to change. And I think it’s very true, the later we make decisions in life that are important, the harder it is to manage those decisions. – John C. Maxwell • I had never written about what it’s like to live the life of a writer, and I had never read a book that combined talking about the life of writing and how you can do it, how you can stand it, how you can emotionally manage it, with the choices that we all make on the page. – Alice Mattison • I have a seven-level program and through even into the fifth level it can be all done from a distance. “Why not?” is how I feel about it, because energy is not confined by time or space, so why should my teaching be. I’m teaching energy and how to manage it, how to handle it, and how to heal with it. – Deborah King • I have found, without a doubt, that when I manage to get outside myself and not make myself the center, I’m always taken care of in whatever situation I’m in, even if I’m slow to recognize it. It’s counterintuitive thinking on some level and not consistently easy to do. – Patrick Fabian • I have to kind of like switch heads. Sometimes I manage it seamlessly, and other times I feel rather all over the place. I feel a bit schizophrenic, like I have a split personality. – Emma Watson • I know a lot of people in Washington would say, well, you know, indigent people can’t manage their health savings account. They’re too stupid. But they’re not too stupid. Somebody has a diabetic foot ulcer, they learn very quickly not to go the emergency room where it costs five times more to take care of it. They go to the clinic. – Benjamin Carson • I no longer think that learning how to manage people, especially subordinates, is the most important for executives to learn. I am teaching above all else, how to manage oneself. – Peter Drucker • I remember once reading that it is still not understood how the giraffe manages to pump an adequate blood supply all the way up to its head; but it is hard to imagine that anyone would conclude tht giraffes do not have long necks. At least not anyone who had ever been to a zoo – Robert Solow • I said, I’ll put on weight. And I started having massages, taking cod-liver oil, and eating twice as much. But I didn’t even gain an ounce. I’d made up my mind that on the day the engagement was announced I’d be fatter, and I didn’t gain an ounce. Then I went to Mussoorie, which is a health resort, and I ignored the doctors’ instructions; I invented my own regime and gained weight. Just the opposite of what I’d like now. Now I have the problem of keeping slim. Still I manage. I don’t know if you realize I’m a determined woman. – Indira Gandhi • I say the elite looks out of touch because it’s kind of saying; look we’ll manage all this for you. You know, we know best. We’ll sort it all out for you. 
And then because people believe that doesn’t meet their case for change and they want real change, social media and the way the relationship between people can come into a sense of belonging very quickly, that then is itself a revolutionary phenomenon. You see this around the world. – Tony Blair • I say this ironically, not because I favor the State, but because people are not in the state of mind right now where they feel that they can manage themselves. We have to go through an educational process – which does not involve, in my opinion, compromises with the State. But if the State disappeared tomorrow by accident, and the police disappeared and the army disappeared and the government agencies disappeared, the ironical situation is that people would suddenly feel denuded. – Murray Bookchin • I say, make the decision, and as soon as you make the decision, the rest of your life you just manage that decision on a daily basis. – John C. Maxwell • I talk about my daily dozen in the book [ Today Matters]. Twelve things that are certainly attainable by any of us that we need to manage every day. – John C. Maxwell • I think a lot of women are incredibly tough and they’re just really admirable. Especially the way that, given what they’ve got, they just manage to carry on. – Jo Brand • I think being able to sit in the shoes of a woman and being able to manage products that are mostly sold to women, alongside a lot of female employees, is really helpful because you hold that empathy to the situation. You can understand where the customer is coming from. – Maureen Chiquet • I think everybody plays a role in their own aging. Some people accelerate it. Some people slow it down. Some people manage to reverse it. It all depends on how much you are invested in the hypnosis of our social condition. So if you believe that at a certain age you have to die and you become dysfunctional, then you will. – Deepak Chopra • I think I may drop dead on the stage someday. I hate to think of it. But it’s getting tough on me, the travel. The show, I somehow manage to rise up to it, you know. But I have no desire to retire. – Hal Holbrook • I think Pep Guardiola is a top manager. There’s no doubt about that. Not only did he manage Messi and Iniesta, but he made them better and took them to levels they’d never been before. The best team I’ve ever seen is Pep Guardiola’s Barcelona. I’m sure his management got something to do with that.- Jamie Carragher • I think you learn about yourself through experiences – as many of them as you can manage. – Bonnie Fuller • I want as many people to see the show [Hamilton] in its musical theater form as possible before it’s translated, and whether it’s a good act of translation or a bad act of translation, it’s a leap, and very few stage shows manage the leap successfully. – Lin-Manuel Miranda • I wanted to get that scholarship to – a division one scholarship and play ball and go to school for free. And that, to me, was – I was always about getting to that next step. If I could get to that next place, then I could figure out essentially what to do with being in that space and how to manage my time and handle those – handle all the benefits of being in that space in a way that would get me to the next place. – Mahershala Ali • I was just shitty, shitty, shitty with money and I finally, when I really started making money, I had to get somebody to sit down with me and learn how to manage my money. 
– Miriam Shor • I would say, you have a unique chance of learning more about the game of chess with your computer than Bobby Fischer, or even myself, could manage throughout our entire lives. What is very important is that you will use this power productively and you will not be hijacked by the computer screen. Always keep your personality intact. – Garry Kasparov • I write for anybody struggling to manage their money. – Michelle Singletary • I`m 100 percent impartial. I`m – my responsibility is to manage this primary nominating contest neutrally and fairly. – Hillary Clinton • If America is to compete effectively in world markets, its corporate leaders must strategically position their companies in the right businesses, and then manage their workforces in the right ways. However, the nation has a shortage of business leaders who understand the importance of utilizing human capital to gain competitive advantage, let alone the know-how to do so. In the future, that shortcoming promises to be exacerbated because few business schools today teach aspiring executives how to create the kind of high-involvement organizations. – James O’Toole • If democracy is ever to be threatened, it will not be by revolutionary groups burning government offices and occupying the broadcasting and newspaper offices of the world. It will come from disenchantment, cynicism and despair caused by the realisation that the New World Order means we are all to be managed and not represented. – Tony Benn • If I can learn how to manage myself, why would I give you 20 percent and people are looking for me? It just doesn’t make sense. – Vickie Winans • If we manage to last in spite of everything, it is because our infirmities are so many and so contradictory that they cancel each other out. – Emile M. Cioran • If we offer a prize, so to speak, to anyone who manages to bring a country under his physical control – namely, that they can then sell the country’s resources and borrow in its name – then it’s not surprising that generals or guerrilla movements will want to compete for this prize. But that the prize is there is really not the fault of the insiders. It is the fault of the dominant states and of the system of international law they maintain. – Thomas Pogge • If Wes Anderson has a very strong cast, he can direct the minutia of that story and still manage to have something that lives and breathes. – Susan Sarandon • If you are not consciously directing your life, you will lose your footing and circumstances will decide for you. – Michael Beckwith • If you have a strong business idea, then it is comparatively easy now to get capital. It is a positive thing that increasingly more people want to join the startup bandwagon. However, to build a successful business, focus on creating more value through the product, and direct your efforts on solving real issues. If you manage to build a sustainable product, revenue will follow. A lot of startups fail because they concentrate on incremental innovations, increasing user base, and monetisation before strengthening the core of their business. – Bhavin Turakhia • If you never allow your children to exceed what they can do, how are they ever going to manage adult life – where a lot of it is managing more than you thought you could manage? – Ellen Galinsky • If you pick the right people and give them the opportunity to spread their wings and put compensation as a carrier behind it you almost don’t have to manage them. 
– Jack Welch • If you want to lead a family/team/organization, learn to lead/manage yourself first. – Bradford Winters • If you want to manage somebody, manage yourself. Do that well and you’ll be ready to stop managing. And start leading. – Mark Gonzales • I’m not a great fan of people who suddenly manage to pull out the whole track sounding perfect from a laptop. That doesn’t feel like any kind of show to me. – Thighpaulsandra • I’m pretty cerebral, so I can occasionally rationalize emotional pain away, but when I can’t, that’s when I start to feel the fire inside take over and somehow manage to power through. – Nathan Parsons • I’m so blessed with my Baby. […] I just want the most normal life possible for him. […] I will manage. I will create that. – Britney Spears • I’m suggesting that principles meant to deal with uncertainty that occurs naturally can be useful to manage the uncertainty that characterizes any new idea. – Scott D. Anthony • I’m working from home a lot. That’s very unusual because I’m away a lot, sometimes working on the other side of the world for long periods of time. So, it’s hard to manage in the sense that I want to be the best dad I can be but it’s almost harder when you have your kids outside the door. – Andy Serkis • In a corporate context, companies have to try very hard to oppose the enticements of conventional wisdom. They must aim for the leaps, which means that companies have to do more than simply manage their knowledge, which is composed of the insights and understandings they already know. They also have to manage the knowledge-generation process. It’s not just about, “Oh, we’re going to create a data warehouse and we are going to invent a computerized filing system to get at all the stuff we know.” – John Kao • In a growing number of states, you’re actually expected to pay back the costs of your imprisonment. Paying back all these fees, fines, and costs may be a condition of your probation or parole. To make matters worse, if you’re one of the lucky few who actually manages to get a job following release from prison, up to 100% of your wages can be garnished to pay back all those fees, fines and court costs. One hundred percent. – Michelle Alexander • In a world where the 2 billionth photograph has been uploaded to Flickr, which looks like an Eggleston picture! How do you deal with making photographs with the tens of thousands of photographs being uploaded to Facebook every second, how do you manage that? How do you contribute to that? What’s the point? – Alec Soth • In the book [Today Matters] I talk about successful people make important decisions early in their life, and then they manage those decisions the rest of their life. – John C. Maxwell • In The Deep End, you have a woman who looks like a J. Crew mother who can manage it all. Then we begin to realize what’s going on inside. Every time I see one of those women stuck at a stoplight with the children in the back of her car, I sort of think, “What have you just done? What’s going on in your life?”. – Tilda Swinton • In trying to address the systemic problem of racial injustice, we would do well to look at abolitionism, because here is a movement of radicals who did manage to effect political change. Despite things that radical movements always face, differences and divisions, they were able to actually galvanize the movement and translate it into a political agenda. 
– Manisha Sinha • Iraqi Kurds, out of desperate necessity, have forged one of the most watchful and vigilant anti-terrorist communities in the world. Terrorists from elsewhere just can’t operate in that kind of environment. Al Qaeda members who do manage to infiltrate are hunted down like rats. This conservative Muslim society did a better job protecting me from Islamist killers than the U.S. military could do in the Green Zone in Baghdad. – Michael Totten • Isn’t it fascinating that Nazis always manage to adopt the word freedom? – Steig Larsson • It is no exaggeration to say that rising inequality has driven many of the 99 percent into a financial ditch. It also helped spawn the housing bubble that gave us the financial crisis of 2008, the lingering effects of which have forced many OWS protesters to try to launch their careers in by far the most inhospitable labor market we’ve seen since the Great Depression. Even those recent graduates who manage to find jobs will suffer a lifelong penalty in reduced wages. – Robert H. Frank • It is well known that you can only manage what you measure, and as this is the job of professional accountants, it means they have huge influence on companies’ governance. – Kofi Annan • It would be horrible to be micro-managed! I don’t think directors can really micro-manage people. It’s just impossible. – Janusz Kaminski • It’s all matter of attitude. You could let a lot of things bother you if you wanted to But it’s pretty much the same anywhere you go, you can manage. – Haruki Murakami • It’s also so cool to be able to develop the talent to be able to jump and control the motorcycle which is a very fun thing to do but it’s hard to manage the two. It’s so easy to get hurt, and that’s the last thing I want to do. – Jeff Hardy • It’s difficult to feel silly and depressed at the same time, but I manage. – Dov Davidoff • It’s important to know how to lead and manage a classroom with flexibility. Students of all ages are quite capable of learning these routines and contributing to their success once the teacher is comfortable guiding students in that direction. – Carol Ann Tomlinson • It’s important to wake up everyday and remind yourself what you’re working towards. You create your own life, it’s not set out there for you. – Shay Mitchell • It’s like learning to fall properly. If you can manage not to tighten up you won’t hurt yourself as much. The same theory applies to your day, physically and emotionally. The tensions simply can’t take hold. – Diane von Furstenberg • It’s the people that ultimately are less talented or have less confidence in what they’re doing that then try to micro-manage, which lends itself to a less than ideal film. – Ari Graynor • Just listen to what Mr. [Donald] Trump has to say and make your own judgment with respect to how confident you feel about his ability to manage things like our nuclear triad. – Barack Obama • Let me just say you could end this violence within a very short period of time, have a complete ceasefire – which Iran could control, which Russia could control, which Syria could control, and which we and our coalition friends could control – if one man would merely make it known to the world that he doesn’t have to be part of the long-term future; he’ll help manage Syria out of this mess and then go off into the sunset, as most people do after a period of public life. If he were to do that, then you could stop the violence and quickly move to management. – John F. 
Kerry • Liberating is a gay word, so let’s phrase it this way: I know everything about me and still manage to be good friends with myself, so nothing anyone says that’s truthful about me ever bothers me. – Jim Goad • Like any working mother, I have to balance and manage my time very carefully. My children and husband come first, of course, then my work. – Andrea Davis Pinkney • Look at the history of the printing press, when this was invented what sort of consequences this had. Or industrialization, what sort of consequences that had. Very often, it led to enormous transformational processes within individual societies. And it took awhile until societies learned how to find the right kind of policies to contain this and manage and steer this. – Angela Merkel • Manage the dream: Create a compelling vision, one that takes people to a new place, and then translate that vision into a reality. – Warren G. Bennis • Management is efficiency in climbing the ladder of success; leadership determines whether the ladder is leaning against the right wall. – Stephen Covey • Managing brands is going to be more and more about trying to manage everything that your company does. – Lee Clow • Managing risk is a key variable, frankly, all aspects of life, business is just one of them, and one of the things that most people do in terms of managing risk, that’s actually bad thinking, is they think they can manage risk to zero. Everything has some risk to it. You know, you drive your car down the street, a drunk driver may hit you. So what you’re doing is you’re actually trying to get to an acceptable level of risk. – Reid Hoffman • Many people who gain recognition and fame shape their lives by overcoming seemingly insurmountable obstacles, only to be catapulted into new social realities over which they have less control and manage badly. Indeed, the annals of the famous and infamous are strewn with individuals who were both architects and victims of their life courses. – Albert Bandura • Margaret Thatcher – a woman I greatly admire – once said that she was not content to manage the decline of a great nation. Neither am I. I am prepared to lead the resurgence of a great nation. – Carly Fiorina • Michelle Obama is a powerful example of someone who has learned how to align her actions with her values, manage boundaries across domains of life, and embrace change courageously. – Stewart D. Friedman • Money is a big part of your life, and when you learn how to get your finances under control, all areas of your life will soar. – T. Harv Eker • More than print and ink, a newspaper is a collection of fierce individualists who somehow manage to perform the astounding daily miracle of merging their own personalities under the discipline of the deadline and retain the flavor of their own minds in print. – Arthur Ochs Sulzberger • My belief is that there will be very large numbers of Internet-enabled devices on the Net – home appliances, office equipment, things in the car and maybe things that you carry around. And since they’re all on the Internet and Internet-enabled, they’ll be manageable through the network, and so we’ll see people using the Net and applications on the Net to manage their entertainment systems, manage their, you know, office activities and maybe even much of their social lives using systems on the Net that are helping them perform that function. – Vinton Cerf • My daughters have strong personalities. 
I’m close to them but they don’t really need me to advise them on how to manage their lives and they don’t ask me to do that. – Bernie Ecclestone • My occupation has been a great deal with David Foster Wallace, and he didn’t manage it, and he was very much looking for something that isn’t totally selfish, and finding meaning. It’s a struggle. – Tom Courtenay • n truth, we don’t know a whole lot of what Simeon North did. He did manage to match John Hall’s ability to make interchangeable parts, but it’s not clear how much of that came from Hall and how much was original with North. – Charles R. Morris • Now each race is different every time because it’s a different journey to get to it – the difficulties you faced getting the car into that position. I manage myself. I chose my team myself. So there’s a huge satisfaction for me. – Lewis Hamilton • Now we’re in a very different economy. Throughout the late 1980s and 1990s American management started to do the right things. There was extraordinary investment in technology. The dominant questions now are less how to do it better, how to manage better, how to make the economy better, than how to have fuller and more meaningful lives. Because the irony is, now that we’ve come through this great transition, even though our organizations and our people are extraordinarily productive, many feel that the nonwork side of life is very thin. – Robert Reich • Now what I do is I manage that decision. And I teach them in the book how – know what decision to make and then how to manage those decisions. It’s a very – it’s a personal growth book [Today Matters]; that’s what it is. – John C. Maxwell • Now, the situation is much worse in Indonesia than 10 years ago. It is because then, there was still some hope. The progressive Muslim leader Abdurrahman Wahid, was alive and so was Pramoedya Ananta Toer. Mr Wahid, a former President of Indonesia, was a closet Socialist. He was deposed by a judicial coup constructed by the Indonesian elites and military, but many Indonesians still believed that he would manage to make a comeback. – Andre Vltchek • Nowadays, we have to deal with so many more factors that weren’t there in the past. It’s not enough to be a good rider, if you want to finish at the front. The riders have become incredible athletes. In the past, you could manage the race and fight only on the last laps. Now you need to train hard. You cannot allow yourself to go on track without being at 100 percent. – Valentino Rossi • Of course some people manage to write books really young and publish really young. But for most writers, it takes several years because you have to apprentice yourself to the craft, and you also have to grow up. I think maturity is connected to one’s ability to write well. – Cheryl Strayed • One of the most difficult things is to get truthful people. Nobody can manage well if they don’t have a lot of mirrors around them that are honest, that tell them what they’re doing is wrong or wrongheaded or misconceived. And in every large bureaucracy on earth, most people are afraid to tell the boss the truth. – Robert Reich • Oppressors do not get to be oppressors in a single sweep. They manage it because little by little, we make them that. We overlook too much in the beginning and wonder why we lost control in the end. – Joan D. Chittister • Our conscious minds are rapidly overwhelmed with the few tasks that they attempt to manage. That’s why our unconscious minds have evolved to handle so much of our thinking. 
– Nick Morgan • Our government is operating within an unprecedented revenue shortfall and that we have an obligation to all citizens of the province to manage our finances responsibly. And that’s what we’re going to do. – Rachel Notley • People always ask, “How do you get in the mind of the teen reader?” I think all human beings have these common threads. We struggle with the same things. We desire love and attachment. We have to sort out how much we want to be attached and be independent, how we manage need and being needed and being hurt. These are things that begin when we’re – how old? Then in those teen years we start to really feel them. – Deb Caletti • People are looking for some means of control and what that means is that the politics in all of our countries is gonna require us to manage technology and global integration and all these demographic shifts in a way that makes people feel more control, that gives them more confidence in their future. – Barack Obama • People seem able to love their dogs with an unabashed acceptance that they rarely demonstrate with family or friends. The dogs do not disappoint them, or, if they do, the owners manage to forget about it quickly. I want to learn to love people like this, the way I love my dog, with pride and enthusiasm and a complete amnesia for faults. In short, to love others the way my dog loves me. – Ann Patchett • People who are great thinkers, in science or in art, people who are great performers, have to have that kind of capacity. Without that kind of capacity, it’s extremely difficult to manage a high level of performance because you’re going to get a lot of extraneous material chipping away at the finery of your thinking or the finery of your motor execution. – Antonio Damasio • People who hate in concrete terms are dangerous. People who manage to hate only in abstracts are the ones worth having for your friends. – John Brunner • Photography is a great adventure in thinking and looking, a wonderful magic toy that miraculously manages to combine our adult awareness with the fairy-tale world of childhood, a never-ending journey through great and small, through variations and the realm of illusions and appearances, a labyrinthine and specular place of multitudes and simulation. – Luigi Ghirri • Practice Golden-Rule 1 of Management in everything you do. Manage others the way you would like to be managed. – Brian Tracy • Russia and the United States are the biggest nuclear powers, this leaves us with an extra special responsibility. By the way, we manage to deal with it and work together in certain fields, particularly in resolving the issue of the Iranian nuclear programme. We worked together and we achieved positive results on the whole. – Vladimir Putin • Separating is not divorcing. Please keep that in mind. It is, instead, the second step in seeing if there’s a better way to manage your family. – Carolyn Hax • So if somebody has chronic pain, we want to manage the pain, but we still want to treat the insomnia separately. So what we’ll tend to do in our sleep lab is we’ll do a thorough evaluation and we usually have myself, who is a Psychologist and a Sleep Behavioral Sleep Specialist, I treat the patients first. – Shelby Harris • So if we can’t express it or repress it, what do we do when we feel angry? The answer is to recognize the anger, but choose to respond to the situation differently. Easier said than done, right? Can you actually imagine trying to strong-arm your anger into another, more amicable feeling?
It would never work. Determination alone won’t work. It takes a new intelligence to understand and manage our emotions. By getting your head and heart in coherence and allowing the heart’s intelligence to work for you, you can have a realistic chance of transforming your anger in a healthy way. – Doc Childre • So many awful things have happened in Karachi, it’s true. It has its own crazy rhythm. Even as crazy as other news is in Pakistan, the city manages to beat that in the frequency of catastrophes. – Steve Inskeep • So many of the conscious and unconscious ways men and women treat each other have to do with romantic and sexual fantasies that are deeply ingrained, not just in society but in literature. The women’s movement may manage to clean up the mess in society, but I don’t know whether it can ever clean up the mess in our minds. – Nora Ephron • Someday there is going to be a book about a middle-aged man with a good job, a beautiful wife and two lovely children who still manages to be happy. – Bill Vaughan • Someday, when I manage to finally figure out how to take care of myself, then I’ll consider taking care of someone else. – Marilyn Manson • South Africa now needs skilled and educated people to say ‘How do we manage and develop this democratic country?’ – Thabo Mbeki • Take the self-driving car and the smartphone and put those together and think about how to manage a smart grid because suddenly you have all of this data coming from those two mechanisms that allow for a much higher level of allocating energy much more efficiently. – Jonathon Keats • Take your life in your own hands, and what happens? A terrible thing: no one to blame. – Erica Jong • That’s a rather flippant quote “drinking and writing bad poetry” from me. I mean, I said it, but I was doing other stuff too. I certainly didn’t manage the full stretch of four years. – Dylan Moran • That’s where I got the idea to paint the walls of the gallery with varied colours [at the Whitechapel show]. I tried to figure out how all these Renaissance paintings manage to work together. – Nan Goldin • The best people know that there are two phases in every crisis: the one where you manage it and the other where you learn from it. To succeed you have to do both – Mark McCormack • The building housing America’s military brass is a five-sided pentagon, but somehow, the people in it still manage to make it the squarest place on earth. The latest evidence? A current military document that lists homosexuality as a mental disorder in the same league as mental retardation – noting, of course, the one difference: retarded people can still get into heaven. – Jon Stewart • The challenge is to manage creative people so that the output is fruitful. The challenge is not to have an open environment and simply let them do whatever they want. – John Kao • The city is better because the city has an economy of needs and once you’re talking about a city, maybe you can start talking about how you manage the climate of that city as a whole. Not by putting a dome over it but by more passive means that can potentially be put together in creative ways. – Jonathon Keats • The conventional definition of management is getting work done through people, but real management is developing people through work. – Agha Hasan Abedi • The divide between me and the modern world is growing further because I to a larger degree manage to rid myself of my dependence on the modern world. 
If the modern world collapsed tomorrow I would be fine, and I see so many others who would not be. – Varg Vikernes • The emerging church movement has come to believe that the ultimate context of the spiritual aspirations of a follower of Jesus Christ is not Christianity but rather the kingdom of God. … to believe that God is limited to it would be an attempt to manage God. If one holds that Christ is confined to Christianity, one has chosen a god that is not sovereign. Soren Kierkegaard argued that the moment one decides to become a Christian, one is liable to idolatry. – Samir Selmanovic • The fastest growing segment of the population in the world right now is over the age of 90, and in some cases over the age of 100 in some countries. So people are living longer. And even though much of it is attributed to modern medicine, it’s not. It’s lifestyle. It’s nutrition. It’s the quality of exercise, the ability to manage stress. – Deepak Chopra • The Germans take quite a knock for the holocaust, but the Catholic church manages to push more people into death, disease, and degradation every year than the holocaust managed in its entire show. And it’s thought rather crass to even mention the fact. It seems to me that as long as these Catholic bishops can show their face in public that we are in complicity with mass murder. – Terence McKenna • The idea that the United States of American might shut down its government over abortion and funding to an organization that is 0.01% of the U.S. budget seems completely insane. Anyone looking at this debate around the world is thinking ‘What is this country doing? They have three wars going on, they’re trying to manage major problems and they’re thinking of shutting down their government over abortion?’ – Katty Kay • The job of the president of the United States is not to love his wife; it’s to manage a wide range of complicated issues. – Matthew Yglesias • The madman theory can work, but it only works if it’s strategic. And I think one of the problems that President Trump faces is people don’t really know how much strategy is here and how much is he just sort of talking off the top of his head. And I think North Korea is a really classic case of a potentially insoluble problem, a problem that you have to manage. – E. J. Dionne • The majority of short term trading results are just random. In the long term the money ends up with those that can trade and manage risk. – Steve Burns • The manager does things right; the leader does the right thing. – Warren G. Bennis • The number one key to success in life is to master your own state. If you can manage and master your states, there’s nothing you can’t do. – Tony Robbins • The odd thing is that Trump’s hand movements don’t seem to coordinate with the topic at hand. Most pols manage to make their hand movements correspond with the message, so a slash will accompany emphasis, etc. Trump’s got about three moves, the most notable of which is his “okay” gesture, making a circle with his thumb and forefinger. Anyway, Trump has only a few gestures, including that one, and to my eye he uses them seemingly indiscriminately. I’ve seen him use the “okay/f.u.” sign to be pedantic. – Gene Weingarten • The one thing you can do for others is the manage your own life. And do it with conviction. – Tony Robbins • The person that takes over needs to have the skills to manage that … I believe Andrea [Leadsom] has the edge. – Iain Duncan Smith • The question arose, how would the communities manage this land on their own. 
That’s why the Communal Land Rights Bill then borrows an institution that is set up in terms of the role and function and powers of the institutional traditional leadership ( borrows that committee and uses that committee). – Thabo Mbeki • The signs of outstanding leadership appear primarily among the followers. Are the followers reaching their potential? Are they learning? Serving? Do they achieve the required results? Do they change with – grace? Manage conflict? – Max De Pree • The silliest woman can manage a clever man; but it needs a very clever woman to manage a fool. – Rudyard Kipling • The stability of the rate is the main issue and the Central Bank manages to ensure it one way or another. This was finally achieved after the Central Bank switched to a floating national currency exchange rate. – Vladimir Putin • The State is a professional apparatus that sets itself apart from the people and apart from the institutions that the people themselves create. It’s a monopoly on violence that manages and institutionalizes social activities. The people are perfectly capable of managing themselves and creating their own institutions. – Murray Bookchin • The thing about Hitchcock is that, however much one dissects him, he still manages to hang onto his mystery. You can never quite get to the bottom of him. – Julian Jarrold • The traditional model for a company like Coca-Cola is to hire one big advertising agency and essentially outsource all of its creativity in that area. But Coca-Cola does not do it that way. It knows how to manage creative people and creative teams and it has been quite adept at building a network that includes the Creative Artists Agency in Hollywood, which is a talent agency. – John Kao • The way in which we manage the business of getting and spending is closely tied to our personal philosophy of living. We begin to develop this philosophy long before we have our first dollar to spend; and unless we are thinking people, our attitude toward money management may continue through the years to be tinged with the ignorance and innocence of childhood. – Catherine Crook de Camp • There are a lot of actors who are doing dream work where they focus on a role and try to bring it into their dreams. I haven’t done that work, but I’ve always found that when I’m studying for a role, the work I’m doing somehow manages to enter my dreams, no matter what approach I take. – Luke Kirby • There are fewer and fewer philosophies that everyone subscribes to. We don’t seem to have as many beliefs in common as we used to. Also, we interact much more online. We have all these gadgets to help us manage different aspects of our lives. – Elaine Equi • There are so many items that are not in the copyright domain. And people might not realize the Library of Congress manages the copyright process for the nation. – Carla Hayden • There are still many, many uncertainties, challenges and difficulties in Afghanistan. But we have to enable the Afghans to manage those challenges themselves. We cannot solve all the problems for the Afghans. – Jens Stoltenberg • There is no doubt that we need to manage migration better.Migrants are always getting the blame for politicians. – Sadiq Khan • There is the fact that – people have had a lot of confidence that the Chinese leadership could fix what is wrong with their economy so it wouldn’t have ripple effects around the world. I think that confidence is being shaken by how difficult it is for them to manage their stock market and their currency. 
– David Wessel • There must be a very clear understanding that you cannot work for peace if you are not ready to struggle. And this is the very meaning of jihad: to manage your intention to get your inner peace when it comes to the spiritual journey. In our society, that means face injustice and hypocrisy, face the dictators, the exploiters, the oppressors if you want to free the oppressed, if you want peace based on justice. – Tariq Ramadan • Therefore, when you see the end result, it’s difficult to see who’s the director, me or them. Ultimately, everything belongs to the actors – we just manage the situation. – Abbas Kiarostami • There’s a reductiveness to photography, of course – in the framing of reality and the exclusion of chunks of it (the rest of the world, in fact). It’s almost as if the act of photography bears some relationship to how we consciously manage the uncontrollable set of possibilities that exist in life. – Philip-Lorca diCorcia • There’s always going to be a tradeoff between trolling and anonymity, and I guess that’s the way life will be. And you can manage it, but you can’t cure it. – Tim Wu • There’s not much room for deviation, yet if you manage to crack it, there then you can express things that actually do sound unique and genuinely original. – Rob Brown • These New York City streets get colder, I shoulder every burden every disadvantage I’ve learned to manage. I don’t have a gun to brandish. I walk these streets famished. – Lin-Manuel Miranda • They [people from the Donald Trump cabinet] haven’t had experience in the areas that they’re being asked to manage in a very complicated world and a very complicated government. – Claire McCaskill • This and the small sample size inevitably leads to stereotypes – sweeping family sagas from India, ‘lush’ colonial romances from South-East Asia. Mother and daughter reconciling generational differences through preparing a ‘traditional’ meal together. Geishas. And even if something more exciting does manage to sneak through, it gets the same insultingly clichéd cover slapped on it anyway, so no one will ever know. – Deborah Smith • Those who are not schooled and practised in truth [who are not honest and upright men] can never manage aright the government, nor yet can those who spend their lives as closet philosophers; because the former have no high purpose to guide their actions, while the latter keep aloof from public life. – Plato • Time can’t be managed. I merely manage activities. Each night, I write down on a sheet of paper a list of the things I have to accomplish the next day. And when I wake up … I do them. – Earl Nightingale • Time is what we want most, but what we use worst. – William Penn • Time management is the key. Although it seems hectic, as long as you manage your time properly you can get everything done. – John Cena • To manage our emotions is not to drug them or suppress them, but to understand them so that we can intelligently direct our emotional energies and intentions…. It’s time for human beings to grow up emotionally, to mature into emotionally managed and responsible citizens. No magic pill will do it. – Doc Childre • Too much of the income gains go to too few people, even though all of the stakeholders worked together to make their companies successful. By failing to put enough income into more hands, the GDP grows slower and consumers manage to meet their needs by incurring high levels of debt. 
– Philip Kotler • Trying to please everyone can be very hard, but, like Shrek or The Simpsons, Robin Hood manages to entertain adults and children at the same time, but in different ways. – Richard Armitage • Until we can manage time, we can manage nothing else. – Peter Drucker • Virtue is the master of talent, talent is the servant of virtue. Talent without virtue is like a house where there is no master and their servant manages its affairs. How can there be no mischief? – Zicheng Hong • We almost manage to forget that things happen that we don’t anticipate. – Anna Quindlen • We are never really in control. We just think we are when things happen to be going our way. – Byron Katie • We are pretty tough in saying for example if you’ve got unsecured debts and less than £25,000 that should not be an excuse for repossessing someone’s home.That should not be allowed.You have got to help manage people through this process. I don’t want to pretend that it is going to be easy getting out of Gordon Brown’s hole. – George Osborne • We can easily manage if we will only take, each day, the burden appointed to it. But the load will be too heavy for us if we carry yesterday’s burden over again today, and then add the burden of the morrow before we are required to bear it. – John Newton • We get brilliant results from average people managing brilliant processes – while our competitors get average or worse results from brilliant people managing broken processes – Fujio Cho • We need to learn how to love each other. If we cannot do that, then we need to learn to respect one another. If we can’t manage to do that, then we must learn to tolerate each other. – Yanni • We tend to think of orphans as being the protagonist of stories we read when we’re kids, and yet here you are: you’re an adult, you’re supposed to manage, you’re supposed to get over it, you’re supposed to go on with your life, and you feel like a lost child. – Sandra Cisneros • Well advice people have told me that is that, “If people aren’t suing you, you haven’t made it,” which I don’t necessarily believe but with greater success comes greater responsibility and being one of the few female entrepreneurs who I think has been as public as I have been, you’re definitely under a spotlight. It’s difficult to manage. – Sophia Amoruso • What I love about Coulson is that he manages to do that and he manages to wrangle the diva superheroes, and really keep a sense of humor about it. And, you can tell that he really loves his job. – Clark Gregg • What is a good man? Simply one whose life is useful to the world. And a bad man is simply one whose life is harmful to others. There are, however, those who are harmful and yet enjoy a good reputation, and who manage to profit by a show of usefulness. These are the worst of all. – Zhang Zhao • What we face is a comprehensive contraction of our activities, due to declining fossil fuel resources and other growing scarcities. Our failure is the failure to manage contraction. It requires a thoroughgoing reorganization of daily life. No political faction currently operating in the USA gets this. Hence, it is liable to be settled by a contest for dwindling resources and there are many ways in which this won’t be pretty. – James Howard Kunstler • When a novelist manages to describe or evoke something you thought or felt, without realizing that other people also found themselves in the same situation and had the same feelings, it creates that same solidarity. 
Maybe it’s better to think of humor not as a tool to express the solidarity, but a kind of by-product. Maybe the realization “I’m not on my own on this one” is always, or often, funny. – Elif Batuman • When I manage to keep my center, it’s usually because I’ve taken prayer seriously. – Jonathan Jackson • When it comes to trying to manage how our entire planet-wide market and all the people and businesses in it deal with nature and our natural resources – we first and foremost need to change the incentives. – Ramez Naam • When you are wanting to comfort someone in their grief take the words ‘at least’ out of your vocabulary. In saying them you minimise someone else’s pain…Don’t take someone else’s grief and try to put it in a box that YOU can manage. Learn to truly grieve with others for as long as it may take. – Kay Warren • When you manage to express something with a look and the music instead of saying it with words or having the character speak, I think it’s a more complete work. – Sergio Leone • Whenever I go to New York I try to soak up as much live music as I can, including as many nights at the opera as I can manage. – Garth Greenwell • Whores have the ability to put up with behaviors other women would never manage to put up with. That’s why we deserve to be generously compensated. – Annie Sprinkle • With just a little education and practice on how to manage your emotions, you can move into a new experience of life so rewarding that you will be motivated to keep on managing your emotional nature in order to sustain it. The payoff is delicious in terms of improved quality of life. – Doc Childre • Without change there is no innovation, creativity, or incentive for improvement. Those who initiate change will have a better opportunity to manage the change that is inevitable. – William Pollard • Women are the real superheroes because they’re not just working. They have a life and everything. I’m super lucky because I come home and I don’t have to run errands and clean the house and do all that. Some women have all of this to do, too. And they manage and they live longer. How we do that, I don’t know. – Vanessa Paradis • World events do not occur by accident. They are made to happen, whether it is to do with national issues or commerce; and most of them are staged and managed by those who hold the purse strings. – Denis Healey • Writing is a form of therapy; sometimes I wonder how all those who do not write, compose or paint can manage to escape the madness, melancholia, the panic and fear which is inherent in a human situation. – Graham Greene • You cannot manage a decision you haven’t made. – John C. Maxwell • You can’t grow long-term if you can’t eat short-term. Anybody can manage short. Anybody can manage long. Balancing those two things is what management is. – Jack Welch • You can’t manage [country] the way you would manage a family business. – Barack Obama • You can’t manage creativity. You need to manage for creativity. You need to create the space for it to emerge. – Arianna Huffington • You can’t really micro-manage. You’ll never make the movie in 52 days, if you micro-manage. If you do that, you take the creativity away from people because people just really quickly become disinterested when they’re always being told how to do it. 
– Janusz Kaminski • You have a job but you don’t always have job security, you have your own home but you worry about mortgage rates going up, you can just about manage but you worry about the cost of living and the quality of the local school because there is no other choice for you. Frankly, not everybody in Westminster understands what it’s like to live like this and some need to be told that it isn’t a game. – Theresa May • You have to learn to deal with your own, for want of a better word, insecurities, fears. They don’t go away. And that’s normal. It’s human. You don’t ever really want to lose that. What you want to do is learn to manage it and to work with yourself. But there’s a part of you that has anticipation and fear. And so the important thing to know is that there’s nothing wrong with that and that that’s normal. You have to learn how to deal with it, certainly, but it doesn’t keep you from doing it. And that doesn’t go away ever. – Annette Bening • You know how some people will say to writers, “Why don’t you just write a romance novel that sells a bunch of copies and then you’ll have the money to do the kind of writing you want to do”? I always say that I don’t have the skills or knowledge to do that. It would be just as hard for me to do that kind of writing as it would be to learn how to do any number of productive careers that I can’t manage to make myself do. – Lucy Corin • You manage things and lead people. – Grace Hopper • You manage things, you lead people. We went overboard on management and forgot about leadership. It might help if we ran the MBAs out of Washington. – Grace Hopper • You must manage yourself before you can lead someone else. – Zig Ziglar • You’re directing a movie, but you are at the head of a ship of people, a whole fleet of people. And being able to manage that – being able to handle yourself as a director being a leader – that’s massively important. – Idris Elba • Your vision will be clearer only when you manage to see within your heart. – Carl Jung • You’re faced with creation, you’re faced with something very mysterious and very mystical, whether it’s looking at the ocean or being alone in a forest, or sometimes looking at the stars. There’s really something very powerful about nature that’s endlessly mysterious and a reminder of our humanity, our mortality, of more existential things that we usually manage to not get involved with very often because of daily activity. – Shirin Neshat
0 notes
I’m close to them but they don’t really need me to advise them on how to manage their lives and they don’t ask me to do that. – Bernie Ecclestone • My occupation has been a great deal with David Foster Wallace, and he didn’t manage it, and he was very much looking for something that isn’t totally selfish, and finding meaning. It’s a struggle. – Tom Courtenay • n truth, we don’t know a whole lot of what Simeon North did. He did manage to match John Hall’s ability to make interchangeable parts, but it’s not clear how much of that came from Hall and how much was original with North. – Charles R. Morris • Now each race is different every time because it’s a different journey to get to it – the difficulties you faced getting the car into that position. I manage myself. I chose my team myself. So there’s a huge satisfaction for me. – Lewis Hamilton • Now we’re in a very different economy. Throughout the late 1980s and 1990s American management started to do the right things. There was extraordinary investment in technology. The dominant questions now are less how to do it better, how to manage better, how to make the economy better, than how to have fuller and more meaningful lives. Because the irony is, now that we’ve come through this great transition, even though our organizations and our people are extraordinarily productive, many feel that the nonwork side of life is very thin. – Robert Reich • Now what I do is I manage that decision. And I teach them in the book how – know what decision to make and then how to manage those decisions. It’s a very – it’s a personal growth book [Today Matters]; that’s what it is. – John C. Maxwell • Now, the situation is much worse in Indonesia than 10 years ago. It is because then, there was still some hope. The progressive Muslim leader Abdurrahman Wahid, was alive and so was Pramoedya Ananta Toer. Mr Wahid, a former President of Indonesia, was a closet Socialist. He was deposed by a judicial coup constructed by the Indonesian elites and military, but many Indonesians still believed that he would manage to make a comeback. – Andre Vltchek • Nowadays, we have to deal with so many more factors that weren’t there in the past. It’s not enough to be a good rider, if you want to finish at the front. The riders have become incredible athletes. In the past, you could manage the race and fight only on the last laps. Now you need to train hard. You cannot allow yourself to go on track without being at 100 percent. – Valentino Rossi • Of course some people manage to write books really young and publish really young. But for most writers, it takes several years because you have to apprentice yourself to the craft, and you also have to grow up. I think maturity is connected to one’s ability to write well. – Cheryl Strayed • One of the most difficult things is to get truthful people. Nobody can manage well if they don’t have a lot of mirrors around them that are honest, that tell them what they’re doing is wrong or wrongheaded or misconceived. And in every large bureaucracy on earth, most people are afraid to tell the boss the truth. – Robert Reich • Oppressors do not get to be oppressors in a single sweep. They manage it because little by little, we make them that. We overlook too much in the beginning and wonder why we lost control in the end. – Joan D. Chittister • Our conscious minds are rapidly overwhelmed with the few tasks that they attempt to manage. That’s why our unconscious minds have evolved to handle so much of our thinking. 
– Nick Morgan • Our government is operating within an unprecedented revenue shortfall and that we have an obligation to all citizens of the province to manage our finances responsibly. And that’s what we’re going to do. – Rachel Notley • People always ask, “How do you get in the mind of the teen reader?” I think all human beings have these common threads. We struggle with the same things. We desire love and attachment. We have to sort out how much we want to be attached and be independent, how we manage need and being needed and being hurt. These are things that begin when we’re – how old? Then in those teen years we start to really feel them. – Deb Caletti • People are looking for some means of control and what that means is is that the politics in all of our countries is gonna require us to manage technology and global integration and all these demographic shifts in a way that makes people feel more control, that gives them more confidence in their future. – Barack Obama • People seem able to love their dogs with an unabashed acceptance that they rarely demonstrate with family or friends. The dogs do not disappointment them, or, if they do, the owners manages to forget about it quickly. I want to learn to love people like this, the way I love my dog, with pride and enthusiasm and a complete amnesia for faults. In short, to love others the way my dog loves me. – Ann Patchett • People who are great thinkers, in science or in art, people who are great performers, have to have that kind of capacity. Without that kind of capacity, it’s extremely difficult to manage a high level of performance because you’re going to get a lot of extraneous material chipping away at the finery of your thinking or the finery of your motor execution. – Antonio Damasio • People who hate in concrete terms are dangerous. People who manage to hate only in abstracts are the ones worth having for your friends. – John Brunner • Photography is a great adventure in thinking and looking, a wonderful magic toy that miraculously manages to combine our adult awareness with the fairy-tale world of childhood, a never-ending journey through great and small, through variations and the realm of illusions and appearances, a labyrinthine and specular place of multitudes and simulation.- Luigi Ghirri • Practice Golden-Rule 1 of Management in everything you do. Manage others the way you would like to be managed. – Brian Tracy • Russia and the United States are the biggest nuclear powers, this leaves us with an extra special responsibility. By the way, we manage to deal with it and work together in certain fields, particularly in resolving the issue of the Iranian nuclear programme. We worked together and we achieved positive results on the whole. – Vladimir Putin • Separating is not divorcing. Please keep that in mind. It is, instead, the second step in seeing if there’s a better way to manage your family. – Carolyn Hax • So if somebody has chronic pain, we want to manage the pain, but we still want to treat the insomnia separately. So what we’ll tend to do in our sleep lab is we’ll do a thorough evaluation and we usually have myself, who is a Psychologist and a Sleep Behavioral Sleep Specialist, I treat the patients first. – Shelby Harris • So if we can’t express it or repress it, what do we do when we feel angry? The answer is to recognize the anger, but choose to respond to the situation differently. Easier said than done, right? Can you actually imagine trying to strong-arm your anger into another, more amicable feeling? 
It would never work. Determination alone won’t work. It takes a new intelligence to understand and manage our emotions. By getting your head and heart in coherence and allowing the heart’s intelligence to work for you, you can have a realistic chance of transforming your anger in a healthy way. – Doc Childre • So many awful things have happened in Karachi, it’s true. It has its own crazy rhythm. Even as crazy as other news is in Pakistan, the city manages to beat that in the frequency of catastrophes. – Steve Inskeep • So many of the conscious and unconscious ways men and women treat each other have to do with romantic and sexual fantasies that are deeply ingrained, not just in society but in literature. The women’s movement may manage to clean up the mess in society, but I don’t know whether it can ever clean up the mess in our minds. – Nora Ephron • Someday there is going to be a book about a middle-aged man with a good job, a beautiful wife and two lovely children who still manages to be happy. – Bill Vaughan • Someday, when I manage to finally figure out how to take care of myself, then I’ll consider taking care of someone else. – Marilyn Manson • South Africa now needs skilled and educated people to say ‘How do we manage and develop this democratic country?’ – Thabo Mbeki • Take the self-driving car and the smartphone and put those together and think about how to manage a smart grid because suddenly you have all of this data coming from those two mechanisms that allow for a much higher level of allocating energy much more efficiently. – Jonathon Keats • Take your life in your own hands, and what happens? A terrible thing: no one to blame. – Erica Jong • That’s a rather flippant quote “drinking and writing bad poetry” from me. I mean, I said it, but I was doing other stuff too. I certainly didn’t manage the full stretch of four years. – Dylan Moran • That’s where I got the idea to paint the walls of the gallery with varied colours [at the Whitechapel show]. I tried to figure out how all these Renaissance paintings manage to work together. – Nan Goldin • The best people know that there are two phases in every crisis: the one where you manage it and the other where you learn from it. To succeed you have to do both – Mark McCormack • The building housing America’s military brass is a five-sided pentagon, but somehow, the people in it still manage to make it the squarest place on earth. The latest evidence? A current military document that lists homosexuality as a mental disorder in the same league as mental retardation – noting, of course, the one difference: retarded people can still get into heaven. – Jon Stewart • The challenge is to manage creative people so that the output is fruitful. The challenge is not to have an open environment and simply let them do whatever they want. – John Kao • The city is better because the city has an economy of needs and once you’re talking about a city, maybe you can start talking about how you manage the climate of that city as a whole. Not by putting a dome over it but by more passive means that can potentially be put together in creative ways. – Jonathon Keats • The conventional definition of management is getting work done through people, but real management is developing people through work. – Agha Hasan Abedi • The divide between me and the modern world is growing further because I to a larger degree manage to rid myself of my dependence on the modern world. 
If the modern world collapsed tomorrow I would be fine, and I see so many others who would not be. – Varg Vikernes • The emerging church movement has come to believe that the ultimate context of the spiritual aspirations of a follower of Jesus Christ is not Christianity but rather the kingdom of God. … to believe that God is limited to it would be an attempt to manage God. If one holds that Christ is confined to Christianity, one has chosen a god that is not sovereign. Soren Kierkegaard argued that the moment one decides to become a Christian, one is liable to idolatry. – Samir Selmanovic • The fastest growing segment of the population in the world right now is over the age of 90, and in some cases over the age of 100 in some countries. So people are living longer. And even though much of it is attributed to modern medicine, it’s not. It’s lifestyle. It’s nutrition. It’s the quality of exercise, the ability to manage stress. – Deepak Chopra • The Germans take quite a knock for the holocaust, but the Catholic church manages to push more people into death, disease, and degradation every year than the holocaust managed in its entire show. And it’s thought rather crass to even mention the fact. It seems to me that as long as these Catholic bishops can show their face in public that we are in complicity with mass murder. – Terence McKenna • The idea that the United States of American might shut down its government over abortion and funding to an organization that is 0.01% of the U.S. budget seems completely insane. Anyone looking at this debate around the world is thinking ‘What is this country doing? They have three wars going on, they’re trying to manage major problems and they’re thinking of shutting down their government over abortion?’ – Katty Kay • The job of the president of the United States is not to love his wife; it’s to manage a wide range of complicated issues. – Matthew Yglesias • The madman theory can work, but it only works if it’s strategic. And I think one of the problems that President Trump faces is people don’t really know how much strategy is here and how much is he just sort of talking off the top of his head. And I think North Korea is a really classic case of a potentially insoluble problem, a problem that you have to manage. – E. J. Dionne • The majority of short term trading results are just random. In the long term the money ends up with those that can trade and manage risk. – Steve Burns • The manager does things right; the leader does the right thing. – Warren G. Bennis • The number one key to success in life is to master your own state. If you can manage and master your states, there’s nothing you can’t do. – Tony Robbins • The odd thing is that Trump’s hand movements don’t seem to coordinate with the topic at hand. Most pols manage to make their hand movements correspond with the message, so a slash will accompany emphasis, etc. Trump’s got about three moves, the most notable of which is his “okay” gesture, making a circle with his thumb and forefinger. Anyway, Trump has only a few gestures, including that one, and to my eye he uses them seemingly indiscriminately. I’ve seen him use the “okay/f.u.” sign to be pedantic. – Gene Weingarten • The one thing you can do for others is the manage your own life. And do it with conviction. – Tony Robbins • The person that takes over needs to have the skills to manage that … I believe Andrea [Leadsom] has the edge. – Iain Duncan Smith • The question arose, how would the communities manage this land on their own. 
That’s why the Communal Land Rights Bill then borrows an institution that is set up in terms of the role and function and powers of the institutional traditional leadership ( borrows that committee and uses that committee). – Thabo Mbeki • The signs of outstanding leadership appear primarily among the followers. Are the followers reaching their potential? Are they learning? Serving? Do they achieve the required results? Do they change with – grace? Manage conflict? – Max De Pree • The silliest woman can manage a clever man; but it needs a very clever woman to manage a fool. – Rudyard Kipling • The stability of the rate is the main issue and the Central Bank manages to ensure it one way or another. This was finally achieved after the Central Bank switched to a floating national currency exchange rate. – Vladimir Putin • The State is a professional apparatus that sets itself apart from the people and apart from the institutions that the people themselves create. It’s a monopoly on violence that manages and institutionalizes social activities. The people are perfectly capable of managing themselves and creating their own institutions. – Murray Bookchin • The thing about Hitchcock is that, however much one dissects him, he still manages to hang onto his mystery. You can never quite get to the bottom of him. – Julian Jarrold • The traditional model for a company like Coca-Cola is to hire one big advertising agency and essentially outsource all of its creativity in that area. But Coca-Cola does not do it that way. It knows how to manage creative people and creative teams and it has been quite adept at building a network that includes the Creative Artists Agency in Hollywood, which is a talent agency. – John Kao • The way in which we manage the business of getting and spending is closely tied to our personal philosophy of living. We begin to develop this philosophy long before we have our first dollar to spend; and unless we are thinking people, our attitude toward money management may continue through the years to be tinged with the ignorance and innocence of childhood. – Catherine Crook de Camp • There are a lot of actors who are doing dream work where they focus on a role and try to bring it into their dreams. I haven’t done that work, but I’ve always found that when I’m studying for a role, the work I’m doing somehow manages to enter my dreams, no matter what approach I take. – Luke Kirby • There are fewer and fewer philosophies that everyone subscribes to. We don’t seem to have as many beliefs in common as we used to. Also, we interact much more online. We have all these gadgets to help us manage different aspects of our lives. – Elaine Equi • There are so many items that are not in the copyright domain. And people might not realize the Library of Congress manages the copyright process for the nation. – Carla Hayden • There are still many, many uncertainties, challenges and difficulties in Afghanistan. But we have to enable the Afghans to manage those challenges themselves. We cannot solve all the problems for the Afghans. – Jens Stoltenberg • There is no doubt that we need to manage migration better.Migrants are always getting the blame for politicians. – Sadiq Khan • There is the fact that – people have had a lot of confidence that the Chinese leadership could fix what is wrong with their economy so it wouldn’t have ripple effects around the world. I think that confidence is being shaken by how difficult it is for them to manage their stock market and their currency. 
– David Wessel • There must be a very clear understanding that you cannot work for peace if you are not ready to struggle. And this is the very meaning of jihad: to manage your intention to get your inner peace when it comes to the spiritual journey. In our society, that means face injustice and hypocrisy, face the dictators, the exploiters, the oppressors if you want to free the oppressed, if you want peace based on justice. – Tariq Ramadan • Therefore, when you see the end result, it’s difficult to see who’s the director, me or them. Ultimately, everything belongs to the actors – we just manage the situation. – Abbas Kiarostami • There’s a reductiveness to photography, of course – in the framing of reality and the exclusion of chunks of it (the rest of the world, in fact). It’s almost as if the act of photography bears some relationship to how we consciously manage the uncontrollable set of possibilities that exist in life. – Philip-Lorca diCorcia • There’s always going to be a tradeoff between trolling and anonymity, and I guess that’s the way life will be. And you can manage it, but you can’t cure it. – Tim Wu • There’s not much room for deviation, yet if you manage to crack it, there then you can express things that actually do sound unique and genuinely original. – Rob Brown • These New York City streets get colder, I shoulder every burden every disadvantage I’ve learned to manage. I don’t have a gun to brandish. I walk these streets famished. – Lin-Manuel Miranda • They [people from the Donald Trump cabinet] haven’t had experience in the areas that they’re being asked to manage in a very complicated world and a very complicated government. – Claire McCaskill • This and the small sample size inevitably leads to stereotypes – sweeping family sagas from India, ‘lush’ colonial romances from South-East Asia. Mother and daughter reconciling generational differences through preparing a ‘traditional’ meal together. Geishas. And even if something more exciting does manage to sneak through, it gets the same insultingly clichéd cover slapped on it anyway, so no one will ever know. – Deborah Smith • Those who are not schooled and practised in truth [who are not honest and upright men] can never manage aright the government, nor yet can those who spend their lives as closet philosophers; because the former have no high purpose to guide their actions, while the latter keep aloof from public life. – Plato • Time can’t be managed. I merely manage activities. Each night, I write down on a sheet of paper a list of the things I have to accomplish the next day. And when I wake up … I do them. – Earl Nightingale • Time is what we want most, but what we use worst. – William Penn • Time management is the key. Although it seems hectic, as long as you manage your time properly you can get everything done. – John Cena • To manage our emotions is not to drug them or suppress them, but to understand them so that we can intelligently direct our emotional energies and intentions…. It’s time for human beings to grow up emotionally, to mature into emotionally managed and responsible citizens. No magic pill will do it. – Doc Childre • Too much of the income gains go to too few people, even though all of the stakeholders worked together to make their companies successful. By failing to put enough income into more hands, the GDP grows slower and consumers manage to meet their needs by incurring high levels of debt. 
– Philip Kotler • Trying to please everyone can be very hard, but, like Shrek or The Simpsons, Robin Hood manages to entertain adults and children at the same time, but in different ways. – Richard Armitage • Until we can manage time, we can manage nothing else. – Peter Drucker • Virtue is the master of talent, talent is the servant of virtue. Talent without virtue is like a house where there is no master and their servant manages its affairs. How can there be no mischief? – Zicheng Hong • We almost manage to forget that things happen that we don’t anticipate. – Anna Quindlen • We are never really in control. We just think we are when things happen to be going our way. – Byron Katie • We are pretty tough in saying for example if you’ve got unsecured debts and less than £25,000 that should not be an excuse for repossessing someone’s home.That should not be allowed.You have got to help manage people through this process. I don’t want to pretend that it is going to be easy getting out of Gordon Brown’s hole. – George Osborne • We can easily manage if we will only take, each day, the burden appointed to it. But the load will be too heavy for us if we carry yesterday’s burden over again today, and then add the burden of the morrow before we are required to bear it. – John Newton • We get brilliant results from average people managing brilliant processes – while our competitors get average or worse results from brilliant people managing broken processes – Fujio Cho • We need to learn how to love each other. If we cannot do that, then we need to learn to respect one another. If we can’t manage to do that, then we must learn to tolerate each other. – Yanni • We tend to think of orphans as being the protagonist of stories we read when we’re kids, and yet here you are: you’re an adult, you’re supposed to manage, you’re supposed to get over it, you’re supposed to go on with your life, and you feel like a lost child. – Sandra Cisneros • Well advice people have told me that is that, “If people aren’t suing you, you haven’t made it,” which I don’t necessarily believe but with greater success comes greater responsibility and being one of the few female entrepreneurs who I think has been as public as I have been, you’re definitely under a spotlight. It’s difficult to manage. – Sophia Amoruso • What I love about Coulson is that he manages to do that and he manages to wrangle the diva superheroes, and really keep a sense of humor about it. And, you can tell that he really loves his job. – Clark Gregg • What is a good man? Simply one whose life is useful to the world. And a bad man is simply one whose life is harmful to others. There are, however, those who are harmful and yet enjoy a good reputation, and who manage to profit by a show of usefulness. These are the worst of all. – Zhang Zhao • What we face is a comprehensive contraction of our activities, due to declining fossil fuel resources and other growing scarcities. Our failure is the failure to manage contraction. It requires a thoroughgoing reorganization of daily life. No political faction currently operating in the USA gets this. Hence, it is liable to be settled by a contest for dwindling resources and there are many ways in which this won’t be pretty. – James Howard Kunstler • When a novelist manages to describe or evoke something you thought or felt, without realizing that other people also found themselves in the same situation and had the same feelings, it creates that same solidarity. 
Maybe it’s better to think of humor not as a tool to express the solidarity, but a kind of by-product. Maybe the realization “I’m not on my own on this one” is always, or often, funny. – Elif Batuman • When I manage to keep my center, it’s usually because I’ve taken prayer seriously. – Jonathan Jackson • When it comes to trying to manage how our entire planet-wide market and all the people and businesses in it deal with nature and our natural resources – we first and foremost need to change the incentives. – Ramez Naam • When you are wanting to comfort someone in their grief take the words ‘at least’ out of your vocabulary. In saying them you minimise someone else’s pain…Don’t take someone else’s grief and try to put it in a box that YOU can manage. Learn to truly grieve with others for as long as it may take. – Kay Warren • When you manage to express something with a look and the music instead of saying it with words or having the character speak, I think it’s a more complete work. – Sergio Leone • Whenever I go to New York I try to soak up as much live music as I can, including as many nights at the opera as I can manage. – Garth Greenwell • Whores have the ability to put up with behaviors other women would never manage to put up with. That’s why we deserve to be generously compensated. – Annie Sprinkle • With just a little education and practice on how to manage your emotions, you can move into a new experience of life so rewarding that you will be motivated to keep on managing your emotional nature in order to sustain it. The payoff is delicious in terms of improved quality of life. – Doc Childre • Without change there is no innovation, creativity, or incentive for improvement. Those who initiate change will have a better opportunity to manage the change that is inevitable. – William Pollard • Women are the real superheroes because they’re not just working. They have a life and everything. I’m super lucky because I come home and I don’t have to run errands and clean the house and do all that. Some women have all of this to do, too. And they manage and they live longer. How we do that, I don’t know. – Vanessa Paradis • World events do not occur by accident. They are made to happen, whether it is to do with national issues or commerce; and most of them are staged and managed by those who hold the purse strings. – Denis Healey • Writing is a form of therapy; sometimes I wonder how all those who do not write, compose or paint can manage to escape the madness, melancholia, the panic and fear which is inherent in a human situation. – Graham Greene • You cannot manage a decision you haven’t made. – John C. Maxwell • You can’t grow long-term if you can’t eat short-term. Anybody can manage short. Anybody can manage long. Balancing those two things is what management is. – Jack Welch • You can’t manage [country] the way you would manage a family business. – Barack Obama • You can’t manage creativity. You need to manage for creativity. You need to create the space for it to emerge. – Arianna Huffington • You can’t really micro-manage. You’ll never make the movie in 52 days, if you micro-manage. If you do that, you take the creativity away from people because people just really quickly become disinterested when they’re always being told how to do it. 
– Janusz Kaminski • You have a job but you don’t always have job security, you have your own home but you worry about mortgage rates going up, you can just about manage but you worry about the cost of living and the quality of the local school because there is no other choice for you.rankly, not everybody in Westminster understands what it’s like to live like this and some need to be told that it isn’t a game. – Theresa May • You have to learn to deal with your own, for want of a better word, insecurities, fears. They don’t go away. And that’s normal. It’s human. You don’t ever really want to lose that. What you want to do is learn to manage it and to work with yourself. But there’s a part of you that has anticipation and fear. And so the important thing to know is that there’s nothing wrong with that and that that’s normal. You have to learn how to deal with it, certainly, but it doesn’t keep you from doing it. And that doesn’t go away ever. – Annette Bening • You know how some people will say to writers, “Why don’t you just write a romance novel that sells a bunch of copies and then you’ll have the money to do the kind of writing you want to do”? I always say that I don’t have the skills or knowledge to do that. It would be just as hard for me to do that kind of writing as it would be to learn how to do any number of productive careers that I can’t manage to make myself do. – Lucy Corin • You manage things and lead people. – Grace Hopper • You manage things, you lead people. We went overboard on management and forgot about leadership. It might help if we ran the MBAs out of Washington. – Grace Hopper • You must manage yourself before you can lead someone else. – Zig Ziglar • You’re directing a movie, but you are at the head of a ship of people, a whole fleet of people. And being able to manage that – being able to handle yourself as a director being a leader – that’s massively important. – Idris Elba • Your vision will be clearer only when you manage to see within your heart. – Carl Jung • You’re faced with creation, you’re faced with something very mysterious and very mystical, whether it’s looking at the ocean or being alone in a forest, or sometimes looking at the stars. There’s really something very powerful about nature that’s endlessly mysterious and a reminder of our humanity, our mortality, of more existential things that we usually manage to not get involved with very often because of daily activity. – Shirin Neshat
Voices in AI – Episode 34: A Conversation with Christian Reilly
Today's leading minds talk AI with host Byron Reese
In this episode, Byron and Christian talk about AGI, AI assistants, transfer learning, ANI and more.
Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese. Today our guest is Christian Reilly. He is the Vice President of Global Product and Technology Strategy over at Citrix. Before joining Citrix, Reilly was at Bechtel Corporation for eighteen years, where he was responsible for the strategic planning, enterprise architecture, and innovation program within the corporate information systems and technology group. Welcome to the show, Christian.
Christian Reilly: Thanks, Byron, great to be here. Thanks for having me.
I love to start off with a simple question, which isn't really so simple: what is artificial intelligence?
So, it's very interesting, actually. When I think about artificial intelligence, I kind of think about it in two different ways. There is general intelligence, which is kind of very broad and, I'd suggest, is a technology way of trying to re-create the human brain. And then we have this other idea and notion, which is artificial narrow intelligence, which is really about breaking down what I consider to be relatively mundane and programmable repetitive tasks that are much simpler in concept but are effective ways of either augmenting or, kind of, replacing humans in doing certain tasks. That's kind of the way that I like to look at it.
Well, I think it’s really good. So let’s talk about those is two separate things, and let’s start with general intelligence. But before we start I want to ask you, do you believe that general intelligence is an evolutionary development from narrow AI? Like, does it get narrower or little broader, little broader, little broader, and that’s how it’s “general,” or is an AGI completely different technology, it looks completely different and we haven’t even really started building it yet?
Well, it’s a great question, Byron. The first thing to, perhaps, realize is that when we talk about AI, it’s really not that new. I think the instantiation of the current example of it is new. As the technologies have become easier to adopt and easier to consume, I think that’s given a whole new birth to the area. I guess it’s been around since the 50s and the 60s, the ideas of science fiction back then, that, you know, robots and computers would take over and think for us.
And then if you go back to Asimov and the whole of I, Robot and the very basic principles that, you know, a machine should never harm a human and those kinds of things. It feels a little bit like science fiction, but I think it’s very real. I think the “general” side of it, I don’t know whether we’ll ever really truly get to the full scope of general intelligence the way we like to think about it. Which is, effectively, that a computer or a series of computers can be programmed and learn to feel emotion and to have a conscience and those kinds of things that we have as humans by hundreds of thousands of years of evolution. The narrow thing to me seems much more like an automation angle, and I’m not sure that you would ever start with automating tasks and you suddenly become super human.
I think it’s highly likely that as humans we figure out the things that are off-loadable, if you like, to ANI that can be repeated, that can be, in fact, more efficient, more effective, and allow us to go off and think about different problems in different ways and leave that, kind of, automation element to it. So, I think they’re, kind of, two completely different things. I just have a feeling that the AGI or the general intelligence is a much broader aspect. Whether we’ll ever get there, I don’t know. I’m sure statistically we could say that computers are capable of making all of the decisions. They can add up better than we can, but they don’t understand the reason that they’re adding up, right?
So whether they’re performing a simple additive task, or they’re performing a household budget, and, say, if I have my household budget and I either can afford or can’t afford that extra bit, then there’s an emotion attached to that that computers just don’t understand.
That’s really fascinating. I think, I’ve had fifty–something guests on the show as of this taping, and I think you’re only the fifth one to say we may not be able to build a general intelligence. And that fact really surprises me that there are so few, because we don’t even understand human intelligence, we don’t understand the human mind, we don’t understand consciousness. All of these things, and yet there seems to be, at least from most of my guests, this basic assumption that we can build this and we will build it and we may build it very soon. So tell me what would be the argument that we cannot build a general intelligence, from your standpoint?
Yeah, I mean, I just think that even with the best machine learning techniques, there are so many emotional elements to the way that our brain functions, plus the fact that we've got these hundreds of thousands of years of evolution. There are just certain things that I don't think it's possible for us to actually do with all the technology.
I think it’s fair to say that the whole of AGI, even though it may be conceptually fifty, sixty, or maybe even approaching seventy years old, fundamentally, my heart tells me that even the smartest robot with the best of AGI capability can’t emulate a human being. If you think about the number of things that we have to process in the context of making decisions. We’re not doing these in sequence, right, we’re kind of doing these all at once—we’re wrestling with this idea and this what if. We can look forward, and we can look backwards with experience and emotion and learning. To me, it just feels that there’s something about the human psyche that I don’t think we’ll ever replicate.
It’s interesting the roboticist Rodney Brooks says that there’s some basic fundamental thing about life that we don’t understand. He calls it the “juice.” And he says that if you put an animal in a cage, the animal is desperate to get out—it scratches and it’s getting more and more frantic. But if you put a robot in a cage, and you program it to get out it just kind of going through the motions, and that it lacks this “juice,” and we don’t really know what that juice is. So, it sounds like you think there’s some intellectual juice, some knowledge juice that we don’t understand that we have that a machine may or may not be able to have.
I think that’s a great phrase actually, the juice. I mean, I think it is absolutely that. If you were to put a human in a room, you know, the number of calculations that go through the human’s mind, and not just how I’m going to get out of here, but if I don’t then what’s the impact on my family, what’s the impact in the people that love me; there’s an emotional set of criteria that I don’t think—I mean, yeah, we can program it, of course—but I think that “juice” is something that I don’t know how we would replicate.
And, of course, with our ANI, you could argue a similar thing. Is a box or a digital assistant capable of emotion when dealing with an irate customer? That's an interesting question. I've never seen evidence of it, because they're really not programmed to do that. It's a very small set of functions, you know. Bots are a great example of ANI for either interacting or getting recommendations, but when they give you the recommendation for a restaurant, as an example, is that based on their personal experience, or is that based on coalescing all the data that they've been able to access and synthesize around other people's opinions? I think, generally, if you are going to predicate something upon other people's opinions and not your own, then I think that's where the barrier is between ANI and AGI.
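As a side note on what "coalescing other people's opinions" looks like in practice, the following is only a minimal, hypothetical Python sketch: the restaurant names and ratings are invented, and real recommendation engines are far more involved, but the recommendation here comes entirely from pooling other users' reviews rather than from any first-hand experience.

from collections import defaultdict

# Invented reviews from other people -- the "bot" has no opinions of its own.
other_peoples_reviews = [
    ("Luigi's", 4.5), ("Luigi's", 3.5),
    ("Noodle Bar", 5.0), ("Noodle Bar", 4.5),
    ("Cafe Rex", 2.0), ("Cafe Rex", 3.0),
]

totals = defaultdict(lambda: [0.0, 0])   # name -> [sum of scores, count]
for name, score in other_peoples_reviews:
    totals[name][0] += score
    totals[name][1] += 1

averages = {name: s / n for name, (s, n) in totals.items()}
best = max(averages, key=averages.get)
print("Recommended:", best, "with an average rating of", round(averages[best], 1))

The point is simply that nothing in the output reflects the recommender's own experience; change the pooled reviews and the recommendation changes with them.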
So, let’s switch lenses for a moment and talk about narrow intelligence. If somebody asked you where are we at, like how would you assess our progress in building narrow AI at this moment?
I think we’re in a great time. Again, it depends how you classify it, but I think if you take—bots are a great example—some of the popular digital assistants that are out there, whether that’s Siri, Cortana, Google, Samsung, all the big guys have made huge investments in that because they see, obviously, voice and the natural language processing, and then the machine learning that’s behind that as a key factor to engage the next generation of the human computer interface. So, I think we’re actually in pretty good shape.
Again, whether you take simple things like interactive voice response and say, "Okay, is that really ANI or is it not?" Yeah, it is a form of ANI, but it's a very small, almost closed-loop system that will only respond in the way that it's programmed—so, press one for this, press two for that. In a way that's kind of a mechanism for replacing humans. But I think the things that have a much more conditional background—when you're asking a question about where's the best restaurant, or how should I get to the nearest tube station, or what's the best way to get from A to B—that's really a different form of ANI. And I think that's much more about building up the learnings, and the statistical analysis, and interpreting that in the best way so that it can give you an intelligent response, versus press one for this, press two for that. You are, kind of, automating it in some respects, and arguably that's a good approach for some customer service angles, but when we think about the modern-day digital assistants, the modern-day bots, I think we're actually making pretty good progress.
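To make the "press one for this, press two for that" system versus the more conditional bot a little more concrete, here is a deliberately crude Python sketch. It is only illustrative: real assistants use trained language models rather than a hand-written keyword table, and the menu options and keywords below are invented.

# A closed-loop IVR menu: it can only respond the way it was programmed.
IVR_MENU = {"1": "billing", "2": "technical support", "3": "speak to an agent"}

def ivr(keypress):
    return IVR_MENU.get(keypress, "Sorry, that is not a valid option.")

# A narrow bot: maps free text to an intent with a crude keyword table.
INTENT_KEYWORDS = {
    "find_restaurant": ["restaurant", "eat", "hungry"],
    "get_directions": ["station", "directions", "get to"],
}

def narrow_bot(utterance):
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return "handling intent: " + intent
    return "no intent matched; hand off to a human"

print(ivr("2"))  # -> technical support
print(narrow_bot("What's the best way to get to the nearest tube station?"))
# -> handling intent: get_directions

The first function can do nothing outside its menu; the second at least has to interpret free text, which is where the statistical learning described above comes in.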
Now, the question is—and that's okay, it's very consumer-centric today—has that really found its way into enterprise? Certainly not that I've seen. I mean, there are some elements that are growing within enterprise use cases, or certain other areas of ANI that are not always about bots, of course. But I think the linchpin in all of that, the real key to it, is the arrival of well-understood machine learning techniques that are providing the algorithms that power the analysis of this data and are yielding some particularly interesting results in different areas.
Do you think it’s a mistake to personify these devices? Taking your view of these devices—and I can’t say any of their names because I have them all on my desk next to me and they’ll perk up here—Amazon has named their device, Apple named their device; they’ve given them human names. Google, interestingly, hasn’t; it’s called the Google Assistant. Do you think that it’s a mistake, and does it set false expectations if you make these things sound like people and give them names and all of that? Is it, maybe, setting the bar too high or setting them up to constantly be failing because they’re never really going to be all that great at that?
Well I guess it’s interesting to ask, do I come from a consumer angle or do I come from a business angle? So, I mean, if you think about it in the relationships that you have today, we all have nicknames for people, we all have real names of courses, but to our nearest and dearest, we all call them different names and we have different emotional attachments to those names. And if you think about it going back to some of the early robots that we saw—the Japanese have been brilliant at this, of course—over the years, they’ve always had cutesy names. So, whether you were talking to a fixed device that was, quote, “human on the other end,” or whether you were interacting with a cute robot that would do certain things when you spoke to it, I think there’s always been a need to create some kind of connection with that robot or that voice.
I think it’s pretty interesting. Where the Bixby name comes from, I don’t know, but it’s pretty interesting what Samsung did with that. Obviously we’ve got Siri, Cortana, and other things, and then Google came up with “Assistant,” as you say, so, maybe there’s a master plan from Google to be much more about business, over time, which would be kind of ironic coming from a consumer search company.
I mean, I think maybe it’s another one of these things that, when you think about it in terms of potential applicability further down the line, and this is one of the things I always hold near and dear: I can imagine this playing out in, let’s say, facilities for the elderly, as an example. Well, unfortunately, these people may be in sheltered accommodation, or whatever it is, and need to connect with somebody or something, maybe ask for some help or ask for shopping to be delivered. Wouldn’t it be great if that person felt a connection to a device, whether that device looks like a cylinder on their table or whether it’s a small robot? Maybe that, again, is part of this question around emotional support and emotional connection, which is effectively using the technology for a great result—making people feel better about the world around them.
I want to come back to that, but before we get off on another topic, you have to think that Star Wars would be different if C3PO were named Gary and R2-D2 were named Sam. You know, that’s Gary and Sam over there. 
I guess my mind immediately goes to the story of the robot in Japan that they were training to be able to navigate a mall. It was programmed so that when it came up to people, it would ask them to move, and if they didn’t move it just tried to go around them. And what happened was, kids would mess with it. They would jump in front of it when it tried to move, and then they would grow increasingly violent especially if there were multiple kids around. And so, the roboticists had to program it to say, “if you see two or more small people, i.e. children, with no large people around, then turn around and run for a large person because that will protect you from the small people.” 
And the interesting thing to me was when they asked the children, “Did you think that robot acted like a machine or an animal or was it a human?” They overwhelmingly said they thought it was human. And then when they asked, “Do you think it was suffering when you were hitting it with your water bottles and doing all that?” The majority of them said “Yes, I thought it was feeling distress.” And so, one wonders if the more we make these machine like people the more we, in essence, cheapen what it is to be people. Do you think there’s any danger of that, or am I just off in left field?
You know, I mean it’s a good question. I think maybe that strikes a little bit, Byron, to the heart of the question about how do we teach these things to learn? Because, again, going back to some of the concepts around the personalization element to it, the unsupervised learning techniques that are at the core of some of the AI and core machine learning concepts, they’re intended to—both unsupervised and predictive learning—try and emulate the way that humans, and the animals that you gave in the example earlier, learn.
Typically, we learn in a very unsupervised manner, by immersing ourselves in the world around us, watching how it works, and then looking at how our parents or grandparents and other people in our close communities react to certain things. So there’s a very interesting difference, I think, between that and supervised learning, which is: I’m going to tell you a thousand times that this is a car until you understand that this is a car. So the differences between the types of learning themselves get to be quite interesting.
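A minimal sketch of that contrast, assuming scikit-learn and NumPy are available and using synthetic data standing in for real observations: the supervised model is handed a label for every example (“I’m telling you this is a car”), while the unsupervised model only ever sees the raw points and has to discover the two groups on its own.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two invented clusters of 2-D "feature" points.
cars = rng.normal(0, 0.5, size=(50, 2)) + [2.0, 2.0]
not_cars = rng.normal(0, 0.5, size=(50, 2)) + [-2.0, -2.0]
X = np.vstack([cars, not_cars])
y = np.array([1] * 50 + [0] * 50)  # labels only the supervised model gets to see

supervised = LogisticRegression().fit(X, y)             # learns from the labels
unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)   # never sees the labels

print("supervised prediction:", supervised.predict([[2.1, 1.9]]))
print("unsupervised cluster:", unsupervised.predict([[2.1, 1.9]]))

The unsupervised model recovers the grouping but has no idea which group means “car”; attaching meaning to the clusters still requires labels or a human.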
But do I think we’re in danger? It’s interesting, you know, because I’m sure that there are elements of humanity where the perpetrators of those same things—I’m going to hit you with a bottle or whatever else—draw no distinction between hitting a person and hitting the robot. But maybe that’s a failure of their own neural programming that they think it’s okay to do that. So, I actually think, philosophically, from my perspective, that the more we can make technology engaging and the more we can make technology seamless, the more we can weave it into the fabric of what we do every day.
I think it’s fascinating to see, as we mentioned before, the digital assistants and the way people use them. But the fact that that’s become so woven into the fabric, now, that there’s not even an app for many of these digital assistants, it’s just kind of built into the fabric. I think that could potentially tell us something about where this goes, and to get that true acceptance over time, I think we have to make these things as engaging as we can because they’re definitely here to stay. I mean I don’t see it as a threat to humanity, frankly. I know other guys out there, Professor Hawking, as an example, have said that it’s possibly the worst thing that could ever happen to humanity, the advent and the speed at which AI was coming into the world. But, again, I think if we can make it part of the fabric of what we do, and this is going to happen in cars, it’s going to happen in aircraft, it already is. It’s kind of part of what we do.
And to your point, Professor Hawking is talking, not about our PDAs, but about a general intelligence, which you’re, at the very least, saying it’s very far away. 
So, let’s talk about supervised and unsupervised learning for a minute. How far away do you think we are from a general learner that we can just point and say, “Here is the Internet, go learn everything”? I mean, that’s the Holy Grail isn’t it?
Absolutely, and wouldn’t that be great, but I think you have to step back and appreciate the differences between the different types of machine learning. You know, of course, we say, “Hey, here’s the Internet, go learn everything.” There are stories out there about the length of time it takes to actually provide enough data sets, and to pair those with the right algorithms, so that when you look at a picture of a cat you realize it’s not a birthday cake. That sounds like a silly thing to say, but that’s not an insignificant piece of learning. Then you add in things like anomaly detection, regression, text analytics, and distinguishing between different images—I mean, that’s not easy.
Imagine taking every image that you could find on the internet. There’s a high probability that if you take twenty, thirty, forty, fifty common items that you would expect pretty much everybody from a five-year-old kid to a one-hundred-year-old great grandfather to be able to articulate what they are—that’s not an insignificant piece of learning for a machine. You’ve got to teach the model fifty different iterations of that until you get to the fact that ninety-nine percent of the time I’m going to tell you that this is a cat, this is a birthday cake, this is the Eiffel Tower.
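To illustrate why that is non-trivial, here is a rough sketch, assuming scikit-learn and NumPy, with synthetic feature vectors standing in for real image features and invented class names: a supervised classifier typically needs to be shown many labelled examples before it reliably separates “cat” from “birthday cake” on data it has never seen.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_examples(n_per_class):
    # Invented 20-dimensional feature vectors standing in for image features.
    cats = rng.normal(0.15, 1.0, size=(n_per_class, 20))
    cakes = rng.normal(-0.15, 1.0, size=(n_per_class, 20))
    X = np.vstack([cats, cakes])
    y = np.array([1] * n_per_class + [0] * n_per_class)  # 1 = "cat", 0 = "birthday cake"
    return X, y

X_test, y_test = make_examples(1000)  # held-out examples the model never trains on

for n_labelled in (5, 50, 500, 5000):
    X_train, y_train = make_examples(n_labelled)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"{n_labelled:5d} labelled examples per class -> held-out accuracy {model.score(X_test, y_test):.2f}")

With these made-up numbers the held-out accuracy typically climbs as the labelled set grows, which is the “teach it fifty different iterations” point: labels, and lots of them, are what supervised learning runs on.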
To me it’s a very interesting question about the structured versus the unstructured learning capability, but I think you have to understand just how much goes into that from a model perspective in the background. So, things that we take for granted as part of our cognitive world—part of our own AGI as humans, if you want to call it that—are built on this unstructured, unsupervised learning that we have, which is very different from, obviously, the structured learning; but it is also something that we take for granted because it’s in our everyday world, it’s the way we do it, and we don’t have to program ourselves consciously to learn the differences between things. It would be great, wouldn’t it, to be able to just say, “Hey, here’s everything on the internet, here’s everything in the deep web, this is how you get to all of it, go and assimilate all of that.” And then when I ask you a question you would be able to go to page 407 of this thesis document that would give you the answer. I think we’re a long way from that.
To your point, I can train a computer to recognize that that’s a unicorn, and a person can recognize it’s a unicorn. Then you say, “Okay, make it a cake in a unicorn shape.” And a human, even if they have never seen the unicorn cake, they would say, “Oh, that’s a cake.” And then it’s like, “Okay, make it a cake in a unicorn shape with a piece missing,” and then a human could look at it and say, “Oh, yeah that’s a unicorn cake with a piece missing.” Even if they have never seen one of these. Then it’s like, “Okay, make it stale like it’s been sitting out for a week,” and then a human can look at it and say, “Yep.” So even though we’ve never seen any of those combinations, we’re able to magnificently do transfer learning between all these different things. Is that a breakthrough? Is that a hundred little tricks we’re doing, or is that just something we’re going to need to figure out for computers, and maybe in a very broad way we can solve that?
Yeah, I think it comes down to, again, the human element versus what we can impart and teach. One of the interesting things from my background, in the world I came from, was the breakthrough in 3D design. So, obviously, I came from an engineering and construction background, and I was around at the advent of 3D design, and one of the ironic things that used to strike me about 3D design is that we as humans see the world in 3D, and yet we always designed in two dimensions, and then we had this breakthrough of 3D design, and we’re designing in exactly the way that we see the world.
So I think there’s a few elements that are part of what we have as humans, which gets really interesting, because with the unicorn cake analogy and the missing piece, does the computer know that that’s a three dimensional object or does it see it in 2D? And if it sees it in 2D, would it have a different interpretation of what we see, because we can see that the cake’s base is this shape and the unicorn should look like this, etcetera.
I don’t know how far we are and I don’t know how quickly we could get there. And maybe we start at the “juice” that we talked about earlier. You know, how do you set a baseline and what is that baseline? Is it to say, you must have the following five things every time you want to make an interpretation of an object, or make a decision, and every one of those things changes. So, I don’t know how big or wide that baseline is for us to get to the point where we say, “Hey, if you have these basic building blocks in place this is how you get to that AGI, this is how you get to represent the human brain in as many use cases you could think that we have every day.”
Take that robot that we talked about earlier. Say you’re going past a ladder and you see a guy up there cleaning a window. As humans we would look at it and say, “Oh, there’s a risk that this guy is going to fall here.” Would the robot stop, and would it have the cognitive power to say, “Actually, I’m going to stop here, and I’m going to make a recommendation that this guy get somebody to hold the bottom of the ladder”? So, these are the kinds of things that I wrestle with and try to figure out, you know, how much of that building block would you have to have to make the rest of it almost like replicating what we would do naturally.
It’s interesting because, on your 3D vision thing, humans only see 3D for like twelve or thirteen feet, right? And then beyond that it’s all visual cues, right? We’re not actually seeing them, and we’re, like, faking it in our software of the brain, aren’t we?
Yeah, I think that’s true. But again, that’s why I think it’d be very interesting to see some of the big technology companies out there investing in some 3D things, right? So if you think about what we’ve heard from Apple, as an example, what we’ve heard from Google—you know, I don’t think we’re anywhere yet in terms of our capability to deal with 3D through an augmented or virtual world.
And I think—again obviously with some of the machine learning and intelligence in the background—that’s going to open a significant set of opportunities for design, for construction, from my background, of course, but for tons of other things. I believe we really do see the world in different dimensions. You know, maybe there are even more dimensions that help, like the fourth dimension. If we decide that that’s time, can we actually see things machine learned before they happen? And is it better that they can augment what we do as humans, rather than try and replace?
So again, in the world that I came from, we spoke a lot about different dimensions—two dimensions, three dimensions and adding different dimensions for imagining massive facilities, oil refineries, airports, power stations or whatever it is—but we never really had the machine learning capabilities in there. So you think of all these things that could have been built up over time, all the operational data that we have, all the design mistakes that we’ve made; all of that just gets left on the cutting room floor because there’s no mechanism to deal with it. I think, fundamentally, that’s what happened with big data, in my opinion.
Nobody that I meet anymore talks about big data. You know, that whole concept of big data was a question from five years ago. It was about analytics; it was about business intelligence being done in a different way. And now that conversation has shifted completely to machine learning. What can we learn? How can we make better decisions? How can we feed different data into the machine learning algorithms? How can we iterate on those? How can we build models bigger, better, faster?
And I think there’s so much opportunity that’s out there. When you add in these other types of immersion, whether they be augmented, mixed, or virtual reality, as an example, what’s going to come next? Based on the fact that we have the data, we have the algorithms, and now perhaps all we need is a little bit more inspiration and a little bit more perspiration to really drive what I think could be some absolutely incredible applications of this technology in the future.
You maintain, just reading about you online, that enterprises really have to adopt AI today. This isn’t the time to wait. Assuming that that is true, why do you think that is? Make that case, please.
I’ll give you some examples: we have customers that are in extremely large financial sectors and components of the financial sector, and we have customers who do things like online gambling, as an example; we have all sorts of other customers in healthcare, pharmaceutical, retail, and manufacturing—and I’ve failed, so far, to see a single industry where I think that some applied machine learning couldn’t help them significantly with their digital transformation efforts. We talk about this phrase “digital transformation,” and yeah, it’s a great buzzword, but really, to me, it’s a set of very distinct constructs where you say, “Hey, I have to move to being data driven, I have to deal with that data in a different way, and I have to apply some of these techniques and technologies that we’ve been talking about that actually help to drive different business outcomes.”
So, if you think about it in the context of, say, pharmaceutical, what are the next generation biotech companies doing to actually speed up the time of trials, and speed up the times of new drugs and bringing those to market? Knowing that in certain parts of the world there’s a very finite time on the license that you have to sell those drugs as a sole operator before they become generic. So, you’ve got a small window of advantage.
You know, it’s the same way with banking, and the same with finance. How can I get better at predicting what may happen? How can I get better at doing risk? And then, also, how can I get better at customer engagement by using ANI to drive a better customer engagement, defining better products, making the products more personal, making them more relevant and more timely.
I think all of these come down, in my mind, to the foundation that was laid with big data. I think that is a good foundation, but to me it was missing the “so what?” I think now, with the availability of the machine learning algorithms, we know the “so what?”
The other bit of that which gets really interesting is that these are becoming commoditized very quickly—and people look at me with a scary face when I say that. But you think about where Microsoft is going, and you think about what IBM is trying to do, think about what AWS is doing, ultimately what Google is doing—these guys see the AI elements and the machine learning elements as the next frontier, and they want to provide those as a set of consumable services in the same way that you can go and get a blob of storage or you can go and buy a virtual machine.
I think that, to me, is a critical element. So yes, of course, you need data scientists, and you need people who understand what the data can do for you, what the machine learning can do for you as a business outcome. But I think the fact that it’s rapidly becoming commoditized and getting to the point now where you can, with a little bit of understanding, choose what kind of machine learning service that you want and for what reason and then you can add that into your next generation of, quote, “application,” which really is going to drive some pretty interesting results.
I think it’s not a case of people no longer being able to afford to do it; I think it’s a case of them just not being able to afford not to do it. You know, as I mentioned before, there are lots and lots of different types—I think Microsoft alone have half a dozen or more different types of machine learning concepts that they offer as services within Azure. But I think the speed at which that has arrived, and the speed at which it is becoming a commodity, will ultimately be the game changer.
You know, there’s a talent shortage that everybody talks about, a shortage of people who are up on these techniques. Is that how you see that talent shortage being solved? That the tools essentially are made more accessible to existing coders, or do you think we’re about to have a surge of new talent come in, or a combination of both? Do you think the talent deficit is going to go away anytime soon?
Well, I mean, it’s interesting: if you believe some of the stories in the recent press about Google, they went out and hired an entire class of computer science graduates who specialized in statistics and machine learning. So, if you believe that, then okay, that would make a ton of sense; investing in that next generation of talent is a great thing. I wonder, frankly, if there aren’t existing roles that will get repurposed. I mean, if you go back years and years and think about it, this is not a new challenge—it’s certainly not new in terms of industry in general, but it’s not even new within IT.
I mean, it would be extremely unlikely now that you could walk into any large IT organization within any large global enterprise and expect to see PBX phone systems sitting in dedicated rooms, because all of that converged on the network almost a decade ago now. And as it’s become more and more accepted and more and more de facto, we’ve seen the end of that skill set. So, the people that were the command-line-interface guys for huge telephone systems, they reskilled to be network people.
And if you think about that in parallel, some people who used to be developers in organizations who were writing applications that the organization had defined as being required to be bespoke, that’s ebbing away a little bit now in terms of software as a service adoption, and standardization on things like Salesforce or Workday or Concur or whatever it is. And so, I think, the other developers are either going off to find new jobs in other locations, or in many cases they’re kind of retraining as integration specialists or business process people.
I think it’s a combination of different things, Byron, but absolutely that skill set needs to come in. You know, people who are in data science roles, have statistics backgrounds, either applied or pure math, in some cases, that’s all great, but do they have the business knowledge and the business process understanding to actually get the value and demonstrate the value from the algorithms that they create or take onboard as part of services from the different cloud providers?
I think it’s a combination of everything. I think, fundamentally, there’s going to be a mixed skill set. I think there is going to be a fight for data scientists, for sure. I think there’s going to be a fight for people who can write algorithms and especially ones who can write it in the context of the business. But I don’t think it’s an exclusive club, I think, like all these things, that we are gradually turning the crank on yet another major cycle of technology.
I think what’s happened is that the relative time for that technology to be adopted is definitely getting shorter, on one axis, and the value derived from that is actually getting higher, on another axis. So it feels like all this is coming at once, but I don’t think it’s a mutually exclusive world, because I think we’re going to rely on combinations of those skills—business skills and traditional database skills and then the more advanced data science skills—to really come together and drive the true value.
There’s, obviously, a larger conversation going on around the world about the effect of automation and ANI on employment. What is your view on that? How is that going to unfold?
Well I’m sure the same conversation happened a hundred years ago with the automation of the car plant, which was led by the Ford Motor Company. And I’m sure at the same time there was as much uproar that this would be the end of humans, effectively, in the automotive industry. We now know that that wasn’t the case, of course. Yes, of course, there have been jobs displaced by automation, but they created other roles that we didn’t necessarily know about.
So, I think, absolutely. Take the case of call centers as a good example. If we could come up with a sufficiently well-balanced ANI that was able to, very quickly, displace eighty percent of what you would call standard calls, then of course there’s a concern. But, I think that perhaps the bigger concern is that those jobs—and I don’t want to use the phrase “low end” because it sounds a little bit trite—are the kind of jobs that we would associate with non-academia, people who haven’t got a bunch of different qualifications for this that and the other, which you need, right?
It’s the same argument, in a weird way, that’s been raging through Europe and the US about immigration, and the question that, “Well, if you take all of these jobs away, jobs that people wouldn’t do by choice, what happens?” The fact is that you’ll never get to a scenario where everybody wants every job, but there ought to be room for everyone. So it gets to be a very social question. It gets to be quite a moralistic question, as well, in many cases. You know, would you, as an organization, prefer to employ people, or would you prefer to have a machine do it that can keep your costs down, improve your competitiveness, and improve your profitability? That’s a hard business question.
So I think the answer is, yes, there will be some displacement of jobs. They’re highly likely to be the entry level jobs, or ones that are ripe for automation. But does that mean that that will give us a huge global socioeconomic problem? I don’t know. I mean I think it’s highly likely that there will be different jobs—whether that’s in the same industries or in different industries—that are created as part of this.
I hear lots of people saying, “Well, we’re now building robots that can maintain themselves, that can replace their own parts.” Yeah, kind of, but CNC milling machines have long been capable of fabricating every part you need to build another one, yet you still need somebody to put them together and to maintain them and to look after them, right? So, I think, it’s a very interesting question. There will certainly, in my opinion, be some displacement, but my hope is that, like we’ve seen before in different phases of “industrial revolutions,” again in quote marks, we’ve always managed to find new industries or find new things to do that are a direct result, in some cases, of that automation. So I’m hopeful it will play out the same way.
I’m very sympathetic with that position. I mean, we can even look to more recent history—I doubt Tim Berners-Lee, when he invented the web, said this will create trillions of dollars in wealth and it’s going to create Etsy and eBay and Google and Amazon and Uber and everything else. And AI is so much bigger. And it is true what you say, an assembly line is a form of artificial intelligence, and it must have been a very threatening time. Then you can look and say, “Yeah, we’ve replaced all animal power on the planet with machines in a very short amount of time but that didn’t cause a surge in unemployment.” And so you’re right that history, up until 2000, supports that view. 
I think the arguments that people put forth in the “this time it’s different” camp, the first one is something you just said a minute ago, which is the axis of the speed of the adoption of these technologies is much faster, and it’s that speed that’s going to get us. Do you give any credence to that?
Oh, absolutely. With that speed, I think, comes the potential for exponential growth in different areas, different parts of the business, which, from a fundamental operating concept of running a business, is either a blessing or a curse. Because, if you’re not ready for it… And I think that there are some questions out there about whether the adoption of AI and machine learning will actually drive the speed of new business, or business growth, so that it turns exponential.
There’s a famous story, which I’m sure you’ve heard before, Byron, but I’ll share it with the listeners, about the football stadium, which asks the question: do you really understand exponential growth? The analogy goes something like this: it’s 1 o’clock in the afternoon, and you’re sat in the best seat at the very top of a medium-sized football stadium, and for the sake of illustration the stadium is actually watertight. And so the question is, if a drop of water is added to the stadium on the halfway line, and then one minute later it doubles in size to two drops, and then after one more minute it doubles to four drops and so on—basically, it doubles in size every minute—if you’re there at 1:00 in the afternoon, what time is it before the water reaches the very top of the stadium and effectively engulfs the seat you’re sat in? And people say, “Oh, it’s going to be months, it’ll be years.” It’s actually 49 minutes.
So, from that very first drop of water doubling and doubling and doubling every minute, by the time the 50-minute mark comes, the entire stadium is full of water. If you can picture that mentally, it’s 49 minutes for that to happen, and it really comes down to the fact that exponential growth is nothing like the double-digit growth we imagine when we look at traditional compound annual growth rates of businesses, or that kind of thing.
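The arithmetic behind the story is just a base-2 logarithm: if the volume doubles every minute, the number of minutes needed is log2 of the ratio between the stadium’s volume and the first drop. The drop and stadium volumes in this short Python sketch are assumed values chosen so the answer matches the 49 minutes in the story; even halving the stadium’s assumed volume only shaves a minute off.

import math

drop_litres = 0.00005             # assume a drop of water is roughly 0.05 mL
stadium_litres = 28_000_000_000   # assumed watertight stadium volume, in litres

minutes = math.ceil(math.log2(stadium_litres / drop_litres))
print(f"The stadium fills after roughly {minutes} minutes.")

# One minute earlier the stadium is still at most half full, which is why
# exponential growth feels so abrupt near the end.
print(f"At minute {minutes - 1} it is at most half full.")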
So the question is, if that does come along, to your point about the speed and does that speed equal exponential growth, then the question is: are we ready for that if indeed that size and scale is predicated upon some of these new technologies? And I think that’s a fascinating conversation.
Another discussion that’s been had, especially in Europe, is this idea of the right to know. If an artificial intelligence makes a decision about you, like a declined loan or something, you have a right to understand why that is. What is your view of that? First of all, is that a good thing? And second, is it a possible thing? Are these neural nets just inherently un-understandable?
Well, I think, certainly in the UK we’ve seen examples of that, you know, the decision-making systems that are used by banks for approving personal loans and mortgages. Things that once would have required you to visit the branch and sit down with the branch manager, for him to understand your aspirations and for him to have the final decision as the empowered person from the bank, I think those days are pretty much gone. Now there is the neural construct that makes the decision based on a bunch of factors that are employed at the point of the decision—your credit reference, age, time at your company, your salary, your available free funds, and a lot of others—and, I think, the personal side of it is gone.
I think removing that emotion is a challenge because—there’s a phrase in England that came from a TV comedy series that says, “computer says no”—and, so, it’s literally a case of if I get declined, what do I do? Do I have the same problem if I go to another financial institution? Should I really have the right to know what factors were part of the decision making process, and ultimately where I failed to meet those criteria that were set by either underwriters or some of the mitigation steps? So I think it’s definitely very visible here in the UK.
You know, we tend to accept that the power for those kinds of decisions, life changing decisions in some cases through mortgages or loans, has really gone from the hands of the local bank branch—and in fact many of those local bank branches no longer exist, you know, we’ve seen those disappear from towns and villages and cities across the UK routinely—to the decision being made by an ANI, and certainly not with the emotion and the considerations that we talked about from an AGI perspective. But people will tell you, “Hey, we’ve got lots and lots of statistical models on this. You see how we build up risk analyses. We do this routinely to see if you are considered to be a risk or a safe bet.” And that’s how we make the decision on you, and it really isn’t very personal anymore.
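A deliberately simplified, hypothetical sketch of the kind of automated scoring being described: the factors, weights, and threshold below are all invented, and real lenders use far richer models, but the shape is the same: a set of inputs goes in, a score comes out, and the score drives the decision rather than a conversation with a branch manager.

def loan_decision(applicant):
    score = 0.0
    score += 2.0 if applicant["credit_reference_clean"] else -3.0
    score += 1.5 if applicant["years_at_employer"] >= 2 else -1.0
    # Annual salary at least ~3x the annual repayments (illustrative rule).
    score += 1.0 if applicant["salary"] >= 36 * applicant["requested_monthly_payment"] else -2.0
    score += 0.5 if applicant["free_funds"] >= 6 * applicant["requested_monthly_payment"] else -0.5
    return ("approved", score) if score >= 2.0 else ("declined", score)

decision, score = loan_decision({
    "credit_reference_clean": True,
    "years_at_employer": 4,
    "salary": 42_000,
    "requested_monthly_payment": 450,
    "free_funds": 5_000,
})
print(decision, round(score, 1))

With a transparent score like this, a bank could at least report which factors pulled the score down, which is what the “right to know” discussion asks for; with a large trained model the same explanation is much harder to produce, which is exactly the tension raised above.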
And what do you think about the use of this technology in warfare and in weapons? That seems to be another area where there’s rapid adoption. Do you have any views on that?
Well, I think this becomes a very interesting question if you take the fact that in battlefield operations very recently, and the ones that are unfortunately still going on in some parts of the Middle East, it’s extremely conceivable that some of the weaponry being used, and some of the drones that are being flown are being flown from literally thousands and thousands of miles away from the theatre of war, from the scene of the battle.
Now, I suppose one of the answers is that it’s possibly a good thing for the coalition, or for the people on this side of the conversation, because the fewer people you have to put in harm’s way while still being able to neutralize the enemy, then… Is it a good thing? Is it a bad thing? I mean, I have to say, from a personal perspective, I don’t think any war is a good thing no matter what technology or historical weaponry you use, but I think it’s a fact of life.
If you think about that from a drone perspective, or from an aviation perspective in general, we don’t call aviation “artificial aviation” because it’s not birds. You know, so should we really be calling artificial intelligence “artificial” at all if it constitutes some kind of intelligence that helps with the decision-making process? So, my philosophy on that is that the fewer people you can put in harm’s way, in any situation, the better.
And having come from, obviously, a construction background, where construction sites are inherently dangerous, having drones do tasks where you would usually put humans in harm’s way—construction sites are different from the theatre of war, but there’s an element of risk there, there is an element of potential fatalities. And I think anytime we can employ technology to go and do surveys, to go and calculate how much concrete has been poured, how much asphalt has been laid, you know, how much land has been reclaimed. I mean, these are things that we should be employing this technology to do, and then feeding all of that data and that intelligence back into, ultimately, providing a better opportunity to do more reliable design, and more cost-effective design, and, hopefully, more robust design, which will continue to make the world a safer place.
Only one more question along those lines, this one from a cyber-security standpoint. We see more and more of these security breaches in big companies and governments, and they seem to be getting bigger and bigger and more and more frequent. Do you think artificial intelligence, at least in the foreseeable future, is enabling the bad actor to attack, or is it enabling the good actor to defend? 
Unfortunately, I think it’s both. I would love to tell you that I think we—and I say we as an industry—have the advantage, but I guess we’ve seen examples of where that’s been very much in the hands of the bad actors. You know, we’ve heard a lot about different state-sponsored attacks that have used all sorts of sophisticated techniques. But, I guess, if you think about it from the point of view of where the industry is, where some of the focus areas are within the industry in general, I think it’s high time we actually focused on user behavior. Our weakest link has, kind of, always been the users.
You know, we’ve thrown technology at security problems for a very long time, but I think about it in a very simple way: if we can build up an idea of what we would consider to be normal user behavior, then the more data points that we collect, the more we can feed in, the more we can train these models, and the easier we can spot anomalies. And I think that’s true for other types of network traffic and monitoring.
If you think about it from the user perspective, an analogy I like to use with that, Byron, is that I travel a lot with my job. I’m very fortunate to go to all sorts of places around the world and meet all sorts of fantastically interesting customers, partners, and so on. But I can’t get away from the fact that every time I step off the plane and go to the ATM, the first thing that happens is that I get an “access denied” message. Then I have to call the bank, and they have to send me a one-time password, and I have to actually say, “Hey, I’m in Turkey, I’m in Portugal, I’m in the United States. It’s really me. I’m trying to make a valid transaction.” So, even though it’s a little bit of a pain, I actually prefer it that way, more than for somebody to have cloned my card and be using it all the way around the world and leaving me with the headache of trying to figure it out with the bank.
I actually like to think about it in a similar way. If we can build up a good set of rich data about what we would classify typical user behavior, so, “Christian logs in from this place, he always uses this device, he always accesses these kinds of applications,” build that up, iterate on it, and then when something is outside of that, allow decisions to be made—either closed loop or through some human interaction—that says, “Hey, this doesn’t look right, I think you need to do something.”
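A minimal sketch of that idea in Python: build up a baseline of (location, device) pairs per user, then flag any login that falls outside it. The events and user name here are invented; a real system would score many more signals (time of day, applications accessed, typing patterns, and so on) and would keep updating the baseline as behaviour legitimately changes.

from collections import defaultdict

baseline = defaultdict(set)

def observe(user, location, device):
    # Record historical activity as the user's "normal" baseline.
    baseline[user].add((location, device))

def check_login(user, location, device):
    if (location, device) in baseline[user]:
        return "looks normal"
    return "anomalous - ask for a one-time password or flag for review"

# Build the baseline from past activity.
observe("christian", "London", "laptop")
observe("christian", "London", "phone")

print(check_login("christian", "London", "laptop"))    # looks normal
print(check_login("christian", "Istanbul", "laptop"))  # anomalous

The closed-loop decision (“ask for a one-time password”) or the human-in-the-loop escalation described above would sit on top of a check like this.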
I think, when we get that, we can apply that into a bunch of different contexts. In healthcare, where we’re doing patient monitoring at home, you know, “I’m looking at your vital statistics I consider this to be normal, but if your blood pressure drops or your heart rate increases, I’m going to flag it to your physician.” And there’s a bunch of other things that we could imagine are all about the user, and all about what we would classify as normal behavior or normal characteristics, and then we’ll be able to, either, action things automatically, or action things with human augmentation, when things don’t look like they’re normal.
So, I think that’s the one thing that I look at in terms of the next frontier of security. It really has to focus on that. Because you can build a castle and a moat, and you can argue that, to keep the bad guys out you just need to keep building the walls higher. But the reality is that we don’t live like that. We live in metropolitan cities, we don’t live in castles in forests anymore. So I think we have to approach that a different way.
And certainly, by building up a very rich set of data and training these models on what we would call normal use of behavior, I think we’ve got a much better chance of fighting things that don’t look normal, that could obviously be the impact of an account takeover, or credential harvesting attack, or somebody impersonating me in either a personal or business way.
Tell me a little bit about your role at Citrix. What do you do there, and how is Citrix using artificial intelligence? What are you doing in this area that might be of interest to a general business audience?
There’s a couple of things. One, that I’ve just talked about, is what we now call the Citrix Analytics Service. So, at Citrix, we’re very privileged to be a very key part of most of our customers’ application delivery, from either inside their offices or for that mobile workforce, or their home workers, or contractors, or partners, or whatever that is. So, we sit in a very key position in terms of the user interaction, where users come from, what devices they’re on, and we’re able to build up this rich set of information around the user. So, that’s absolutely what we’re focused on within the Citrix Analytics Service. What you’ll see towards the end of this year and then early into 2018 are releases of that Citrix Analytics Service based on our Citrix cloud platform. That will be something that we bring to market very quickly.
That’s a security thing, that’s all about protecting, but what about enablement? So we build these secure digital workspaces that aggregate different types of applications and different types of services across different types of clouds, but how can we actually mine what people do, so we actually provide them with the context of—depending on who you are, depending on where you are physically, depending on which device you’re coming from, and depending on what you’re trying to do to be productive and get your job done—we should be able to deliver that content, that context, and that information in a real time way.
So, if you’re a maintenance engineer working on this particular part of an airport, or you’re a physician working in an MRI review room in a health care environment, we should know all of the information around you—not just from a security perspective. So, it’s not really always about just trying to figure out what’s going wrong, but using similar approaches and similar models to actually deliver what you should expect at that point of engagement. So, based on the time that you log in, the place that you log in, the device that you log in from; delivering the context so that you can be productive.
It’s, kind of, two different things which are based on the same end user philosophy. One is very much about helping IT to deal with security compliance control, and then the other one is really about the end user experience and helping to drive individual and ultimately business productivity, across pretty much every customer in every vertical that we provide services to.
How do you, from an organizational standpoint, think about artificial intelligence implementation? When the web first came out, people had a web department, but, of course, you wouldn’t do that now. Just in terms of general structure, do you even talk about AI, or is it just kind of assumed that it’s driving all of your future product developments?
Yeah, it’s absolutely an integral part. You know, there’s a phrase that I use, that “we’re very data rich but very information poor.” That’s because the ways in which we gathered data were on a product-by-product basis. So, we’ve kind of changed the model with that, and turned the pyramid around, effectively, by thinking about data first, thinking about how we capture it, how we interface with other vendors that we work very closely with. You know, how do we bring all that data together to have an environment where we can leverage it?
That sounds like an easy thing to do, but it’s actually quite difficult. So, we have a bunch of very smart data science guys who are intrinsic to our product development, intrinsic to the analytics side that I talked about. These are the guys who are helping us to pull all that data together, to bring it all into one place, so that we can apply these new algorithms and these new techniques on that. But, yeah, absolutely, it’s a core part of our security and our productivity and performance offerings going forward.
And we believe that it’s a big differentiator for us, because of where we sit, because of the longevity we have in our customer environments, and because our customers trust Citrix to deliver mission critical applications, and they will hopefully continue to put that same trust in us when it comes to security and all sorts of productivity. So, we’re really excited about what that means going forward.
We’re coming up to the close here, and it sounds like, overall, you’re very optimistic about the future. Is that true? Tell me what you think, overall, life will be like in ten years?
You know, I think we are going to get more and more things powered by AI than we realize. And I think the true measure of success will be when we stop talking about the AI being part of x, y, and z and start talking about the benefit that it brings. I can very easily imagine that when you wake up in the morning you’ll want to talk to your digital assistant and say, “Hey, how many meetings have I got today?” You know, all the videos where the guy’s brushing his teeth and saying, “Hey, what am I going to do today?” That’s all very real.
I think what will happen is that those worlds of work and life, if they’re not already completely blended, will effectively continue to blend. I think if you take some views into the future—and it’s certainly not ten years out, it’s much less than that—there’s going to be some significant shifts. The number of millennials that enter the workforce will be around seventy to seventy-five percent by like 2022 or 2023, that’s significant. That’s a really big change. And I think organizations are already adapting to that, and adopting new philosophies around the way that people work, where people work, the environments that are created, the devices that they’re allowed to use will continue to evolve and continue to change. So, I think we’ll see work as we know it evolve from where it is today at that exponential rate that I talked about earlier, and I think organizations have to get ready for it.
I don’t think it’s a ten-year thing. I think it will be up to organizations to decide how to deploy and adopt, but I think the technology, the offerings, will be ready way before that. And again I think it’s one of these things where you look at my past twenty-something years in this industry as a customer, and now as a technology provider, and I think if you take on balance all the things that we’ve seen, this feels like a seismic shift, it really does.
I think the fact that we’re going to be dealing with intelligent machines alongside intelligent humans is going to be hugely beneficial. And I think it’s also going to be extremely impactful in developing countries where they don’t have a legacy to deal with, where they haven’t gone through the thirty, forty years of technology that we’ve had in enterprise.
So, I think what it will also do is level the playing field for a lot of people, and I think that will also drive some very interesting prospects and some very interesting statistics for a whole new middle class of people, which I think is long overdue. And I think that will be great. Ultimately, I hope it will be extremely beneficial, literally, in every corner of the globe.
All right, well, that’s a great place to leave it. I want to thank you for a wide-ranging conversation on a bunch of these topics. I appreciate your time, Christian.
Thanks Byron, it’s been a pleasure.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
0 notes
applitools · 7 years
Text
2018 Test Automation Trends: Must-have Tools & Skills Required to Rock 2018
As 2017 comes to a close, it’s been a great time to reflect back on the year and look ahead to what 2018 might have in store, especially in regards to software testing and automation. To dig into this a little deeper, we recently hosted a webinar featuring some of the software testing industry’s most prominent experts and thought leaders discussing the hottest tools, technologies and trends to look out for in 2018. Our guests included host Joe Colantonio, founder of Automation Guild Conference; Angie Jones, prolific speaker and senior software engineer in test at Twitter; Richard Bradshaw, software tester, speaker and trainer at Friendly Testing; and our very own Gil Tayar, senior architect and evangelist.
During the webinar, they discussed the top test automation strategies for 2018, how AI will affect automation, the new tools and tech you should explore, and much more. Continue reading for some of their top insights that will help prepare you for the year to come:
Colantonio reflects on some of the trends he has seen in 2017: “As I speak to a lot of companies I see a lot of digital transformations going on. I see a lot of studies saying QA and testing investment is going to be really heavy the next five years. Also, what I’ve been seeing is a lot of companies are shifting to the left, so they’re investing more and more in how they can automate their process to make it quicker for the software development.”
With the adoption of automation becoming more widespread, Jones says that companies will be thinking about how to use it more strategically in 2018: “I think in 2018 we’ll see people taking a step back and looking at what is it that they actually want to gain from automation, and how to best do that. I’m also seeing a lot of teams now wanting to embrace DevOps. As we are moving into that space we see testing, automation, development, and everything is moving a lot faster. There’s definitely a need for automation to not just be running on someone’s local machine or running along somewhere on a server, but to actually be gating check-ins and giving teams confidence.”
Bradshaw offers some advice on how to speed up your software delivery in 2018: “I think one of the opportunities to take advantage of in 2018 is to step back a bit, have a reflection of the skills that you’ve developed. How good is your programming now? How many tools are in your tool box? Start to look where we can apply them throughout the software development lifecycle. The things that I’m looking at myself is our automated checks and feedback loops. So, having a look at all the checks you have and reducing some of them. Do you need them all? Are they valuable? And that will help you speed up.”
Tayar hopes that more companies will embrace the shift-left movement and put more of the testing responsibility on developers: “From my perspective, the shift left movement, where the boring parts of testing, the regression testing, is moving towards development and towards developers. I think this is a trend that I’m seeing more and more 2017 and in mostly advanced, high performing companies. Hopefully in 2018 more companies will be able to do more and more testing on the developer side. That will enable testers to find the more interesting problems to do. Not just regression test, but more interesting things with their testing time.”
Colantonio on the importance of testing at the API level: “As more folks move towards continuous integration and continuous delivery, I think we need integration tests and to give faster feedback that isn’t just about the UI. I think API testing is really a critical piece of that. If the business layer is not in the UI, then we should test that at the API level. So that’s a great opportunity. I’m not sure why more people aren’t getting into it.”
Consider developing skills in web services automation in 2018, advises Jones: “I can probably count on one hand the people who are experts in web services automation, and that’s a problem because there’s a big demand for that right now. So I think this is an opportunity where if you are looking to get into automation, or if you’re already in automation and you’ve been focused very heavily on UI automation, there’s an opportunity here for people to advance their skills and look into web services automation.”
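For readers new to the idea, here is a minimal, hypothetical sketch of what a web-service-level check can look like in Python. The endpoint, fields, and test name are made up, and the `requests` library is assumed to be installed; the point is simply that the test exercises the API directly and asserts on the response rather than driving the UI, which is usually faster and less brittle.

import requests

def test_get_account_returns_expected_fields():
    # Hypothetical endpoint used only for illustration.
    response = requests.get("https://example.test/api/v1/accounts/42", timeout=5)
    assert response.status_code == 200
    body = response.json()
    assert body["id"] == 42
    assert "balance" in body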
For AI tools to provide value, testers first need to understand the purpose of their test, explains Bradshaw: “The use of AI that I’m seeing advertised at the moment by lots of companies is to help them. They’re using AI to help the tool design automated checks. That sends alarm bells off in me because I don’t think many testers out there even know what they’re doing. Not that they don’t know when they’re testing, but that they don’t know their own thought process. They don’t actually understand what they’re going through. They don’t know why they find bugs. They don’t know why they do the tests that they do, they just do them.”
Simple advice from Tayar is to embrace agility: “Agility is the movement towards faster release cycles, and that fuels the need for developers to create their own tests. Once developers create their own tests that leaves testers to do the really, really important stuff. To test from a thinking perspective, and not so much doing the work that developers should’ve done by themselves. So if there’s one thing to remember, it’s to embrace agility. Use that to do better testing, better automation, better thinking about what you’re testing so that your company will get an advantage from that.”
Don’t just put blind faith into your AI tools, warns Jones: “In 2018, I don’t think we’re going to see a bunch of AI being used to assist us, but I think we could use 2018 to take the opportunity to understand, what is AI? And reveal how it will be used to help us, and if it can really even help us. A lot of times we look at AI as, ‘Oh, it’s this perfect thing.’ And I can tell you that it’s not perfect. I’m working on some applications here, and even before I got here to Twitter, products that are using machine learning, which is a subset of AI. It can’t be a black box that I think works magically. I have to understand what it’s supposed to do and how to test that.”
Bradshaw: “I would like to see some tools come into the market that are specifically designed to support me and my testing. They’re looking at what I’m doing on the screen and they might fire heuristics at me. They might say, ‘Richard, you seem to have done some kind of testing like this many years, or a few months ago, a few releases ago. And then you used this heuristic and found a bug, so why don’t you try that heuristic?’ Or have it as a little bot that’s just there helping me do my job, like take a screenshot for me and it automatically put it on my tickets. But in terms of the AI bit, just having it prompt me to help me think about what I would test based on what I’ve done in the past, I think that is a better use for AI at the moment.”
Will AI be taking over testers jobs? A resounding “No” says Tayar: “I get a lot of questions about, ‘Do I need to worry about my job?’ And the answer is a very, very emphatic, ‘No, you do not.’ We’re very early. We’re in the early, early phases of using AI in testing. And I believe that in 10 years and maybe even more we will be using AI as a tool and not so much as a replacement for testers. A tool in that it will be able to, for example, visually compare stuff so that you will be able to find your bugs in a quicker way, and not check every field that you need to fill, but just holistically check the whole page in one go. It will be able to find lots of changes in lots of pages at one time.”
Bradshaw: “My actionable advice right now would be go back into your office, go through your automated checks that you have now and try and delete five of them. And try and go understand them all, review them, study them, continuously review them, and delete them down to the ones that really matter. Because I can bet you probably all have some that are just providing no value at all, but they are taking a few minutes to run every time you do it. So continuously review those checks.”
Watch the full replay:
youtube
0 notes
techscopic · 7 years
Text
Voices in AI – Episode 12: A Conversation with Scott Clark
Today’s leading minds talk AI with host Byron Reese
In this episode, Byron and Scott talk about algorithms, transfer learning, human intelligence, and pain and suffering.
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Scott Clark. He is the CEO and co-founder of SigOpt. They’re a SaaS startup for tuning complex systems and machine learning models. Before that, Scott worked on the ad targeting team at Yelp, leading the charge on academic research and outreach. He holds a PhD in Applied Mathematics and an MS in Computer Science from Cornell, and a BS in Mathematics, Physics, and Computational Physics from Oregon State University. He was chosen as one of Forbes 30 under 30 in 2016. Welcome to the show, Scott.
Scott Clark: Thanks for having me.
I’d like to start with the question, because I know two people never answer it the same: What is artificial intelligence?
I like to go back to an old quote… I don’t remember the attribution for it, but I think it actually fits the definition pretty well. Artificial intelligence is what machines can’t currently do. It’s the idea that there’s this moving goalpost for what artificial intelligence actually means. Ten years ago, artificial intelligence meant being able to classify images; like, can a machine look at a picture and tell you what’s in the picture?
Now we can do that pretty well. Maybe twenty, thirty years ago, if you told somebody that there would be a browser where you can type in words, and it would automatically correct your spelling and grammar and understand language, he would think that’s artificial intelligence. And I think there’s been a slight shift, somewhat recently, where people are calling deep learning artificial intelligence and things like that.
It’s got a little bit conflated with specific tools. So now people talk about artificial general intelligence as this impossible next thing. But I think a lot of people, in their minds, think of artificial intelligence as whatever it is that’s next that computers haven’t figured out how to do yet, that humans can do. But, as computers continually make progress on those fronts, the goalposts continually change.
I’d say today, people think of it as conversational systems, basic tasks that humans can do in five seconds or less, and then artificial general intelligence is everything after that. And things like spell check, or being able to do anomaly detection, are just taken for granted and that’s just machine learning now.
I’ll accept all of that, but that’s more of a sociological observation about how we think of it, and then actually… I’ll change the question. What is intelligence?
That’s a much more difficult question. Maybe the ability to reason about your environment and draw conclusions from it.
Do you think that what we’re building, our systems, are they artificial in the sense that we just built them, but they can do that? Or are they artificial in the sense that they can’t really do that, but they sure can think it well?
I think they’re artificial in the sense that they’re not biological systems. They seem to be able to perceive input in the same way that a human can perceive input, and draw conclusions based off of that input. Usually, the reward system in place in an artificial intelligence framework is designed to do a very specific thing, very well.
So is there a cat in this picture or not? As opposed to a human: It’s, “Try to live a fulfilling life.” The objective functions are slightly different, but they are interpreting outside stimuli via some input mechanism, and then trying to apply that towards a specific goal. The goals for artificial intelligence today are extremely short-term, but I think that they are performing them on the same level—or better sometimes—than a human presented with the exact same short-term goal.
The artificial component comes into the fact that they were constructed, non-biologically. But other than that, I think they meet the definition of observing stimuli, reasoning about an environment, and achieving some outcome.
You used the phrase ‘they draw conclusions’. Are you using that colloquially, or does the machine actually conclude? Or does it merely calculate?
It calculates, but then it comes to, I guess, a decision at the end of the day. If it’s a classification system, for example… going back to “Is there a cat in this picture?” It draws the conclusion that “Yes, there was a cat. No, that wasn’t a cat.” It can do that with various levels of certainty in the same way that, potentially, a human would solve the exact same problem. If I showed you a blurry Polaroid picture you might be able to say, “I’m pretty sure there’s a cat in there, but I’m not 100 percent certain.”
And if I show you a very crisp picture of a kitten, you could be like, “Yes, there’s a cat there.” And I think a convolutional neural network is doing the exact same thing: taking in that outside stimuli. Not through an optical nerve, but through the raw encoding of pixels, and then coming to the exact same conclusion.
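The “pretty sure” versus “certain” distinction Scott draws is typically read off a softmax over the network’s raw outputs, which turns them into probabilities. A small illustrative sketch, with made-up logits and class names:

import math

def softmax(logits):
    """Convert raw network outputs into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

classes = ["cat", "dog", "background"]

# A crisp photo of a kitten: the network is confident (~99% cat).
print(dict(zip(classes, softmax([9.1, 2.3, 0.4]))))

# A blurry Polaroid: the same network hedges (~40% cat).
print(dict(zip(classes, softmax([1.2, 0.9, 0.8]))))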
You make the really useful distinction between an AGI, which is a general intelligence—something as versatile as a human—and then the kinds of stuff we’re building now, which we call AI—which is doing this reasoning or drawing conclusions.
Is an AGI a linear development from what we have now? In other words, do we have all the pieces, and we just need faster computers, better algorithms, more data, a few nips and tucks, and we’re eventually going to get an AGI? Or is an AGI something very different, that is a whole different ball of wax?
I’m not convinced that, with the current tooling we have today, it’s just like… if we add one more hidden layer to a neural network, all of a sudden it’ll be AGI. That being said, I think this is how science and computer science and progress in general work: techniques are built upon each other, and we make advancements.
It might be a completely new type of algorithm. It might not be a neural network. It might be reinforcement learning. It might not be reinforcement learning. It might be the next thing. It might not be on a CPU or a GPU. Maybe it’s on a quantum computer. If you think of scientific and technological process as this linear evolution of different techniques and ideas, then I definitely think we are marching towards that as an eventual outcome.
That being said, I don’t think that there’s some magic combinatorial setting of what we have today that will turn into this. I don’t think it’s one more hidden layer. I don’t think it’s a GPU that can do one more teraflop—or something like that—that’s going to push us over the edge. I think it’s going to be things built from the foundation that we have today, but it will continue to be new and novel techniques.
There was an interesting talk at the International Conference on Machine Learning in Sydney last week about AlphaGo, and how they got this massive speed-up when they put in deep learning. They were able to break through this plateau that they had found in terms of playing ability, where they could play at the amateur level.
And then once they started applying deep learning networks, that got them to the professional, and now best-in-the-world level. I think we’re going to continue to see plateaus for some of these current techniques, but then we’ll come up with some new strategy that will blast us through and get to the next plateau. But I think that’s an ever-stratifying process.
To continue on that vein… When in 1955, they convened in Dartmouth and said, “We can solve a big part of AI in the summer, with five people,” the assumption was that general intelligence, like all the other sciences, had a few simple laws.
You had Newton, Maxwell; you had electricity and magnetism, and all these things, and they were just a few simple laws. The idea was that all we need to do is figure out those for intelligence. And Pedro Domingos argues in The Master Algorithm, from a biological perspective that, in a sense, that may be true.  
That if you look at the DNA difference between us and an animal that isn’t generally intelligent… the amount of code is just a few megabytes that’s different, which teaches how to make my brain and your brain. It sounded like you were saying, “No, there’s not going to be some silver bullet, it’s going to be a bunch of silver buckshot and we’ll eventually get there.”
But do you hold any hope that maybe it is a simple and elegant thing?
Going back to my original statement about what is AI, I think when Marvin Minsky and everybody sat down in Dartmouth, the goalposts for AI were somewhat different. Because they were attacking it for the first time, some of the things were definitely overambitious. But certain things that they set out to do that summer, they actually accomplished reasonably well.
Things like the Lisp programming language, and things like that, came out of that and were extremely successful. But then, once these goals are accomplished, the next thing comes up. Obviously, in hindsight, it was overambitious to think that they could maybe match a human, but I think if you were to go back to Dartmouth and show them what we have today, and say: “Look, this computer can describe the scene in this picture completely accurately.”
I think that could be indistinguishable from the artificial intelligence that they were seeking, even if today what we want is someone we can have a conversation with. And then once we can have a conversation, the next thing is we want them to be able to plan our lives for us, or whatever it may be, solve world peace.
While I think there are some of the fundamental building blocks that will continue to be used—like, linear algebra and calculus, and things like that, will definitely be a core component of the algorithms that make up whatever does become AGI—I think there is a pretty big jump between that. Even if there’s only a few megabytes difference between us and a starfish or something like that, every piece of DNA is two bits.
If you have millions of differences, four-to-the-several million—like the state space for DNA—even though you can store it in a small amount of megabytes, there are so many different combinatorial combinations that it’s not like we’re just going to stumble upon it by editing something that we currently have.
It could be something very different in that configuration space. And I think those are the algorithmic advancements that will continue to push us to the next plateau, and the next plateau, until eventually we meet and/or surpass the human plateau.
You invoked quantum computers in passing, but putting that aside for a moment… Would you believe, just at a gut level—because nobody knows—that we have enough computing power to build an AGI, we just don’t know how?
Well, in the sense that if the human brain is general intelligence, the computing power in the human brain, while impressive… All of the computers in the world are probably better at performing some simple calculations than the biological gray matter mess that exists in all of our skulls. I think the raw amount of transistors and things like that might be there, if we had the right way to apply them, if they were all applied in the same direction.
That being said… Whether or not that’s enough to make it ubiquitous, or whether or not having all the computers in the world mimic a single human child will be considered artificial general intelligence, or if we’re going to need to apply it to many different situations before we claim victory, I think that’s up for semantic debate.
Do you think about how the brain works, even if [the context] is not biological? Is that how you start a problem: “Well, how do humans do this?” Does that even guide you? Does that even begin the conversation? And I know none of this is a map: Birds fly with wings, and airplanes, all of that. Is there anything to learn from human intelligence that you, in a practical, day-to-day sense, use?
Yeah, definitely. I think it often helps to try to approach a problem from fundamentally different ways. One way to approach that problem is from the purely mathematical, axiomatic way; where we’re trying to build up from first principles, and trying to get to something that has a nice proof or something associated with it.
Another way to try to attack the problem is from a more biological setting. If I had to solve this problem, and I couldn’t assume any of those axioms, then how would I begin to try to build heuristics around it? Sometimes you can go from that back to the proof, but there are many different ways to attack that problem. Obviously, there are a lot of things in computer science, and optimization in general, that are motivated by physical phenomena.
So a neural network, if you squint, looks kind of like a biological brain neural network. There’s things like simulated annealing, which is a global optimization strategy that mimics the way that like steel is annealed… where it tries to find some local lattice structure that has low energy, and then you pound the steel with the hammer, and that increases the energy to find a better global optima lattice structure that is harder steel.
But that’s also an extremely popular algorithm in the scientific literature. So it was come to from this auxiliary way, or a genetic algorithm where you’re slowly evolving a population to try to get to a good result. I think there is definitely room for a lot of these algorithms to be inspired by biological or physical phenomenon, whether or not they are required to be from that to be proficient. I would have trouble, off the top of my head, coming up with the biological equivalent for a support vector machine or something like that. So there’s two different ways to attack it, but both can produce really interesting results.
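Since Scott explains simulated annealing by analogy to metallurgy, a bare-bones sketch of the algorithm may help: propose a random perturbation, always accept improvements, and accept worse moves with a probability that shrinks as the “temperature” cools. This is a generic illustration on a toy function, not any particular library’s implementation.

import math
import random

def simulated_annealing(objective, x0, steps=10_000, t_start=1.0, t_end=1e-3):
    """Minimize `objective` starting from x0 using simulated annealing."""
    x, fx = x0, objective(x0)
    best_x, best_fx = x, fx
    for i in range(steps):
        # Exponentially cool the temperature from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (i / steps)
        candidate = x + random.gauss(0, 1)  # random perturbation
        fc = objective(candidate)
        # Accept improvements; accept worse moves with shrinking probability.
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = candidate, fc
            if fx < best_fx:
                best_x, best_fx = x, fx
    return best_x, best_fx

# Toy multimodal objective: many local minima, global minimum near x = -0.5.
f = lambda x: x * x + 10 * math.sin(3 * x)
print(simulated_annealing(f, x0=8.0))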
Let’s take a normal thing that a human does, which is: You show a human training data of the Maltese Falcon, the little statue from the movie, and then you show him a bunch of photos. And a human can instantly say, “There’s the falcon under water, and there it’s half-hidden by a tree, and there it’s upside down…” A human does that naturally. So it’s some kind of transferred learning. How do we do that?
Transfer learning is the way that that happens. You’ve seen trees before. You’ve seen water. You’ve seen how objects look inside and outside of water before. And then you’re able to apply that knowledge to this new context.
It might be difficult for a human who grew up in a sensory deprivation chamber to look at this object… and then you start to show them things that they’ve never seen before: “Here’s this object and a tree,” and they might not ‘see the forest for the trees’ as it were.
In addition to that, without any context whatsoever, you take someone who was raised in a sensory deprivation chamber, and you start showing them pictures and ask them to do classification type tasks. They may be completely unaware of what’s the reward function here. Who is this thing telling me to do things for the first time I’ve never seen before?
What does it mean to even classify things or describe an object? Because you’ve never seen an object before.
And when you start training these systems from scratch, with no previous knowledge, that’s how they work. They need to slowly learn what’s good, what’s bad. There’s a reward function associated with that.
But with no context, with no previous information, it’s actually very surprising how well they are able to perform these tasks; considering [that when] a child is born, four hours later it isn’t able to do this. A machine algorithm that’s trained from scratch over the course of four hours on a couple of GPUs is able to do this.
You mentioned the sensory deprivation chamber a couple of times. Do you have a sense that we’re going to need to embody these AIs to allow them to—and I use the word very loosely—‘experience’ the world? Are they locked in a sensory deprivation chamber right now, and that’s limiting them?
I think with transfer learning, and pre-training of data, and some reinforcement algorithm work, there’s definitely this idea of trying to make that better, and bootstrapping based off of previous knowledge in the same way that a human would attack this problem. I think it is a limitation. It would be very difficult to go from zero to artificial general intelligence without providing more of this context.
There’s been many papers recently, and OpenAI had this great blog post recently where, if you teach the machine language first, if you show it a bunch of contextual information—this idea of this unsupervised learning component of it, where it’s just absorbing information about the potential inputs it can get—that allows it to perform much better on a specific task, in the same way that a baby absorbs language for a long time before it actually starts to produce it itself.
And it could be in a very unstructured way, but it’s able to learn some of the actual language structure or sounds from the particular culture in which it was raised in this unstructured way.
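The transfer learning Scott describes, reusing what a network already learned and adapting it to a new task, is commonly implemented by freezing a pretrained backbone and retraining only a small task-specific head. A hedged PyTorch sketch, assuming a recent torchvision; the class count and the statue-versus-not task are placeholders, and this is only one of several ways to do it:

import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on a large generic dataset (ImageNet),
# so it already "knows" about edges, textures, trees, water, and so on.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers: keep that general visual knowledge fixed.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final classification head for the new, narrower task
# (e.g. "statue" vs. "not the statue"). Only this layer will be trained.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient step on the new head; the frozen backbone just extracts features."""
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()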
Let’s talk a minute about human intelligence. Why do you think we understand so poorly how the brain works?
That’s a great question. It’s easier scientifically, with my background in math and physics—it seems like it’s easier to break down modular decomposable systems. Humanity has done a very good job at understanding, at least at a high level, how physical systems work, or things like chemistry.
Biology starts to get a little bit messier, because it’s less modular and less decomposable. And as you start to build larger and larger biological systems, it becomes a lot harder to understand all the different moving pieces. Then you go to the brain, and then you start to look at psychology and sociology, and all of the lines get much fuzzier.
It’s very difficult to build an axiomatic rule system. And humans aren’t even able to do that in some sort of grand unified way with physics, or understand quantum mechanics, or things like that; let alone being able to do it for these sometimes infinitely more complex systems.
Right. But the most successful animal on the planet is a nematode worm. Ten percent of all animals are nematode worms. They’re successful, they find food, and they reproduce and they move. Their brains have 302 neurons. We’ve spent twenty years trying to model that, a bunch of very smart people in the OpenWorm project…
 But twenty years trying to model 300 neurons to just reproduce this worm, make a digital version of it, and even to this day people in the project say it may not be possible.
I guess the argument is, 300 sounds like a small amount. One thing that’s very difficult for humans to internalize is the exponential function. So if intelligence grew linearly, then yeah. If we could understand one, then 300 might not be that much, whatever it is. But if the state space grows exponentially, or the complexity grows exponentially… if there’s ten different positions for every single one of those neurons, like 10^300, that’s more than the number of atoms in the universe.
Right. But we aren’t starting by just rolling 300 dice and hoping for them all to be—we know how those neurons are arranged.
At a very high level we do.
I’m getting to a point, that we maybe don’t even understand how a neuron works. A neuron may be doing stuff down at the quantum level. It may be this gigantic supercomputer we don’t even have a hope of understanding, a single neuron.
From a chemical way, we can have an understanding of, “Okay, so we have neurotransmitters that carry a positive charge, that then cause a reaction based off of some threshold of charge, and there’s this catalyst that happens.” I think from a physics and chemical understanding, we can understand the base components of it, but as you start to build these complex systems that have this combinatorial set of states, it does become much more difficult.
And I think that’s that abstraction, where we can understand how simple chemical reactions work. But then it becomes much more difficult once you start adding more and more. Or even in physics… like if you have two bodies, and you’re trying to calculate the gravity, that’s relatively easy. Three? Harder. Four? Maybe impossible. It becomes much harder to solve these higher-order, higher-body problems. And even with 302 neurons, that starts to get pretty complex.
Oddly, two of them aren’t connected to anything, just like floating out there…
Do you think human intelligence is emergent?
In what respect?
I will clarify that. There are two sorts of emergence: one is weak, and one is strong. Weak emergence is where a system takes on characteristics which don’t appear at first glance to be derivable from them. So the intelligence displayed by an ant colony, or a beehive—the way that some bees can shimmer in unison to scare off predators. No bee is saying, “We need to do this.”  
The anthill behaves intelligently, even though… The queen isn’t, like, in charge; the queen is just another ant, but somehow it all adds intelligence. So that would be something where it takes on these attributes.
Can you really intuitively derive intelligence from neurons?
And then, to push that a step further, there are some who believe in something called ‘strong emergence’, where they literally are not derivable. You cannot look at a bunch of matter and explain how it can become conscious, for instance. It is what the minority of people believe about emergence, that there is some additional property of the universe we do not understand that makes these things happen.
The question I’m asking you is: Is reductionism the way to go to figure out intelligence? Is that how we’re going to kind of make advances towards an AGI? Just break it down into enough small pieces.
I think that is an approach, whether or not that’s ‘the’ ultimate approach that works is to be seen. As I was mentioning before, there are ways to take biological or physical systems, and then try to work them back into something that then can be used and applied in a different context. There’s other ways, where you start from the more theoretical or axiomatic way, and try to move forward into something that then can be applied to a specific problem.
I think there’s wide swaths of the universe that we don’t understand at many levels. Mathematics isn’t solved. Physics isn’t solved. Chemistry isn’t solved. All of these build on each other to get to these large, complex, biological systems. It may be a very long time, or we might need an AGI to help us solve some of these systems.
I don’t think it’s required to understand everything to be able to observe intelligence—like, proof by example. I can’t tell you why my brain thinks, but my brain is thinking, if you can assume that humans are thinking. So you don’t necessarily need to understand all of it to put it all together.
Let me ask you one more far-out question, and then we’ll go to a little more immediate future. Do you have an opinion on how consciousness comes about? And if you do or don’t, do you believe we’re going to build conscious machines?
Even to throw a little more into that one, do you think consciousness—that ability to change focus and all of that—is a requisite for general intelligence?
So, I would like to hear your definition of consciousness.
I would define it by example, to say that it’s subjective experience. It’s how you experience things. We’ve all had that experience when you’re driving, that you kind of space out, and then, all of a sudden, you kind of snap to. “Whoa! I don’t even remember getting here.”
And so that time when you were driving, your brain was elsewhere, you were clearly intelligent, because you were merging in and out of traffic. But in the sense I’m using the word, you were not ‘conscious’, you were not experiencing the world. If your foot caught on fire, you would feel it; but you weren’t experiencing the world. And then instantly, it all came on and you were an entity that experienced something.
Or, put another way… this is often illustrated with the problem of Mary by Frank Jackson:
He offers somebody named Mary, who knows everything about color, like, at a god-like level—knows every single thing about color. But the catch is, you might guess, she’s never seen it. She’s lived in a room, black-and-white, never seen it [color]. And one day, she opens the door, she looks outside and she sees red.  
The question becomes: Does she learn anything? Did she learn something new?  
In other words, is experiencing something different than knowing something? Those two things taken together, defining consciousness, is having an experience of the world…
I’ll give one final one. You can hook a sensor up to a computer, and you can program the computer to play an mp3 of somebody screaming if the sensor hits 500 degrees. But nobody would say, at this day and age, the computer feels the pain. Could a computer feel anything?
Okay. I think there’s a lot to unpack there. I think computers can perceive the environment. Your webcam is able to record the environment in the same way that your optical nerves are able to record the environment. When you’re driving a car, and daydreaming, and kind of going on autopilot, as it were, there still are processes running in the background.
If you were to close your eyes, you would be much worse at doing lane merging and things like that. And that’s because you’re still getting the sensory input, even if you’re not actively, consciously aware of the fact that you’re observing that input.
Maybe that’s where you’re getting at with consciousness here, is: Not only the actual task that’s being performed, which I think computers are very good at—and we have self-driving cars out on the street in the Bay Area every day—but that awareness of the fact that you are performing this task, is kind of meta-level of: “I’m assembling together all of these different subcomponents.”
Whether that’s driving a car, thinking about the meeting that I’m running late to, some fight that I had with my significant other the night before, or whatever it is. There’s all these individual processes running, and there could be this kind of global awareness of all of these different tasks.
I think today, where artificial intelligence sits is, performing each one of these individual tasks extremely well, toward some kind of objective function of, “I need to not crash this car. I need to figure out how to resolve this conflict,” or whatever it may be; or, “Play this game in an artificial intelligence setting.” But we don’t yet have that kind of governing overall strategy that’s aware of making these tradeoffs, and then making those tradeoffs in an intelligent way. But that overall strategy itself is just going to be going toward some specific reward function.
Probably when you’re out driving your car, and you’re spacing out, your overall reward function is, “I want to be happy and healthy. I want to live a meaningful life,” or something like that. It can be something nebulous, but you’re also just this collection of subroutines that are driving towards this specific end result.
But the direct question of what would it mean for a computer to feel pain? Will a computer feel pain? Now they can sense things, but nobody argues they have a self that experiences the pain. It matters, doesn’t it?
It depends on what you mean by pain. If you mean there’s a response of your nervous system to some outside stimuli that you perceive as pain, a negative response, and—
—It involves emotional distress. People know what pain is. It hurts. Can a computer ever hurt?
It’s a fundamentally negative response to what you’re trying to achieve. So pain and suffering is the opposite of happiness. And your objective function as a human is happiness, let’s say. So, by failing to achieve that objective, you feel something like pain. Evolutionarily, we might have evolved this in order to avoid specific things. Like, you get pain when you touch flame, so don’t touch flame.
And the reason behind that is biological systems degrade in high-temperature environments, and you’re not going to be able to reproduce or something like that.
You could argue that when a classification system fails to classify something, and it gets penalized in its reward function, that’s the equivalent of it finding something where, in its state of the world, it has failed to achieve its goal, and it’s getting the opposite of what its purpose is. And that’s similar to pain and suffering in some way.
But is it? Let’s be candid. You can’t take a person and torture them, because that’s a terrible thing to do… because they experience pain. [Whereas if] you write a program that has an infinite loop that causes your computer to crash, nobody’s going to suggest you should go to jail for that. Because people know that those are two very different things.
It is a negative neurological response based off of outside stimuli. A computer can have a negative response, and perform based off of outside stimuli poorly, relative to what it’s trying to achieve… Although I would definitely agree with you that that’s not a computer experiencing pain.
But from a pure chemical level, down to the algorithmic component of it, they’re not as fundamentally different… that because it’s a human, there’s something magic about it being a human. A dog can also experience pain.
These worms—I’m not as familiar with the literature on that, but [they] could potentially experience pain. And as you derive that further and further back, you might have to bend your definition of pain. Maybe they’re not feeling something in a central nervous system, like a human or a dog would, but they’re perceiving something that’s negative to what they’re trying to achieve with this utility function.
But we do draw a line. And I don’t know that I would use the word ‘magic’ the way you’re doing it. We draw this line by saying that dogs feel pain, so we outlaw animal cruelty. Bacteria don’t, so we don’t outlaw antibiotics. There is a material difference between those two things.
So if the difference is a central nervous system, and pain is being defined as a nervous response to some outside stimuli… then unless we explicitly design machines to have central nervous systems, then I don’t think they will ever experience pain.
Thanks for indulging me in all of that, because I think it matters… Because up until thirty years ago, veterinarians typically didn’t use anesthetic. They were told that animals couldn’t feel pain. Babies were operated on in the ‘90s—open heart surgery—under the theory they couldn’t feel pain.  
What really intrigues me is the idea of how would we know if a machine did? That’s what I’m trying to deconstruct. But enough of that. We’ll talk about jobs here in a minute, and those concerns…
There’s groups of people that are legitimately afraid of AI. You know all the names. You’ve got Elon Musk, you get Stephen Hawking. Bill Gates has thrown in his hat with that, Wozniak has. Nick Bostrom wrote a book that addressed existential threat and all of that. Then you have Mark Zuckerberg, who says no, no, no. You get Oren Etzioni over at the Allen Institute, just working on some very basic problem. You get Andrew Ng with his “overpopulation on Mars. This is not helpful to even have this conversation.”
What is different about those two groups in your mind? What is the difference in how they view the world that gives them these incredibly different viewpoints?
I think it goes down to a definition problem. As you mentioned at the beginning of this podcast, when you ask people, “What is artificial intelligence?” everybody gives you a different answer. I think each one of these experts would also give you a different answer.
If you define artificial intelligence as matrix multiplication and gradient descent in a deep learning system, trying to achieve a very specific classification output given some pixel input—or something like that—it’s very difficult to conceive that as some sort of existential threat for humanity.
But if you define artificial intelligence as this general intelligence, this kind of emergent singularity where the machines don’t hit the plateau, that they continue to advance well beyond humans… maybe to the point where they don’t need humans, or we become the ants in that system… that becomes very rapidly a very existential threat.
As I said before, I don’t think there’s an incremental improvement from algorithms—as they exist in the academic literature today—to that singularity, but I think it can be a slippery slope. And I think that’s what a lot of these experts are talking about… Where if it does become this dynamic system that feeds on itself, by the time we realize it’s happening, it’ll be too late.
Whether or not that’s because of the algorithms that we have today, or algorithms down the line, it does make sense to start having conversations about that, just because of the time scales over which governments and policies tend to work. But I don’t think someone is going to design a TensorFlow or MXNet algorithm tomorrow that’s going to take over the world.
There’s legislation in Europe to basically say, if an AI makes a decision about whether you should get an auto loan or something, you deserve to know why it turned you down. Is that a legitimate request, or is it like you go to somebody at Google and say, “Why is this site ranked number one and this site ranked number two?” There’s no way to know at this point.  
Or is that something that, with the auto loan thing, you’re like, “Nope, here are the big bullet points of what went into it.” And if that becomes the norm, does that slow down AI in any way?
I think it’s important to make sure, just from a societal standpoint, that we continue to strive towards not being discriminatory towards specific groups and people. It can be very difficult, when you have something that looks like a black box from the outside, to be able to say, “Okay, was this being fair?” based off of the fairness that we as a society have agreed upon.
The machine doesn’t have that context. The machine doesn’t have the policy, necessarily, inside to make sure that it’s being as fair as possible. We need to make sure that we do put these constraints on these systems, so that it meets what we’ve agreed upon as a society, in laws, etc., to adhere to. And that it should be held to the same standard as if there was a human making that same decision.
There is, of course, a lot of legitimate fear wrapped up about the effect of automation and artificial intelligence on employment. And just to set the problem up for the listeners, there’s broadly three camps, everybody intuitively knows this.
 There’s one group that says, “We’re going to advance our technology to the point that there will be a group of people who do not have the educational skills needed to compete with the machines, and we’ll have a permanent underclass of people who are unemployable.” It would be like the Great Depression never goes away.
And then there are people who say, “Oh, no, no, no. You don’t understand. Everything, every job, a machine is going to be able to do.” You’ll reach a point where the machine will learn it faster than the human, and that’s it.
And then you’ve got a third group that says, “No, that’s all ridiculous. We’ve had technology come along, as transformative as it is… We’ve had electricity, and machines replacing animals… and we’ve always maintained full employment.” Because people just learn how to use these tools to increase their own productivity, maintain full employment—and we have growing wages.
So, which of those, or a fourth one, do you identify with?
This might be an unsatisfying answer, but I think we’re going to go through all three phases. I think we’re in the third camp right now, where people are learning new systems, and it’s happening at a pace where people can go to a computer science boot camp and become an engineer, and try to retrain and learn some of these systems, and adapt to this changing scenario.
I think, very rapidly—especially at the exponential pace that technology tends to evolve—it does become very difficult. Fifty years ago, if you wanted to take apart your telephone and try to figure out how it works, repair it, that was something that a kid could do at a camp kind of thing, like an entry circuits camp. That’s impossible to do with an iPhone.
I think that’s going to continue to happen with some of these more advanced systems, and you’re going to need to spend your entire life understanding some subcomponent of it. And then, in the further future, as we move towards this direction of artificial general intelligence… Like, once a machine is a thousand times, ten thousand times, one hundred thousand times smarter—by whatever definition—than a human, and that increases at an exponential pace… We won’t need a lot of different things.
Whether or not that’s a fundamentally bad thing is up for debate. I think one thing that’s different about this than the Industrial Revolution, or the agricultural revolution, or things like that, that have happened throughout human history… is that instead of this happening over the course of generations or decades… Maybe if your father, and your grandfather, and your entire family tree did a specific job, but then that job doesn’t exist anymore, you train yourself to do something different.
Once it starts to happen over the course of a decade, or a year, or a month, it becomes much harder to completely retrain. That being said, there’s lots of thoughts about whether or not humans need to be working to be happy. And whether or not there could be some other fundamental thing that would increase the net happiness and fulfillment of people in the world, besides sitting at a desk for forty hours a week.
And maybe that’s actually a good thing, if we can set up the societal constructs to allow people to do that in a healthy and happy way.
Do you have any thoughts on computers displaying emotions, emulating emotions? Is that going to be a space where people are going to want authentic human experiences in those in the future? Or are we like, “No, look at how people talk to their dog,” or something? If it’s good enough to fool you, you just go along with the conceit?
The great thing about computers, and artificial intelligence systems, and things like that is if you point them towards a specific target, they’ll get pretty good at hitting that target. So if the goal is to mimic human emotion, I think that that’s something that’s achievable. Whether or not a human cares, or is even able to distinguish between that and actual human emotion, could be very difficult.
At Cornell, where I did my PhD, they had this psychology chatbot called ELIZA—I think this was back in the ‘70s. It went through a specific school of psychological behavioral therapy thought, replied with specific ways, and people found it incredibly helpful.
Even if they knew that it was just a machine responding to them, it was a way for them to get out their emotions and work through specific problems. As these machines get more sophisticated and able, as long as it’s providing utility to the end user, does it matter who’s behind the screen?
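ELIZA-style systems are, at heart, pattern matching with canned reflections rather than any understanding, which is exactly what Weizenbaum objected to in the next exchange. A toy sketch of the idea; the patterns below are invented and far cruder than the original:

import random
import re

# Each rule: a regex to match, and reflective responses using the capture.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["What makes you say you are {0}?"]),
    (re.compile(r"because (.*)", re.I),
     ["Is that the real reason?"]),
]

def respond(utterance):
    """Return a canned reflection; no model of meaning is involved."""
    for pattern, replies in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(replies).format(*match.groups())
    return "Please, go on."

print(respond("I feel anxious about work"))  # e.g. "Why do you feel anxious about work?"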
That’s a big question. Weizenbaum shut down ELIZA because he said that when a machine says, “I understand” that it’s a lie, there’s no ‘I’, and there’s nothing [there] that understands anything. He had real issues with that.
But then when they shut it down, some of the end users were upset, because they were still getting quite a bit of utility out of it. There’s this moral question of whether or not you can take away something from someone who is deriving benefit from it as well.
So I guess the concern is that maybe we reach a day where an AI best friend is better than a real one. An AI one doesn’t stand you up. And an AI spouse is better than a human spouse, because of all of those reasons. Is that a better world, or is it not?
I think it becomes a much more dangerous world, because as you said before, someone could decide to turn off the machine. When it’s someone taking away your psychologist, that could be very dangerous. When it’s someone deciding that you didn’t pay your monthly fee, so they’re going to turn off your spouse, that could be quite a bit worse as well.
As you mentioned before, people don’t necessarily associate the feelings or pain or anything like that with the machine, but as these get more and more life-like, and as they are designed with the reward function of becoming more and more human-like, I think that distinction is going to become quite a bit harder for us to understand.
And it not only affects the machine—which you can make the argument doesn’t have a voice—but it’ll start to affect the people as well.
One more question along these lines. You were a Forbes 30 Under 30. You’re fine with computer emotions, and you have this set of views. Do you notice any generational difference between researchers who have been in it longer than you, and people of your age and training? Do you look at it, as a whole, differently than another generation might have?
I think there are always going to be generational differences. People grow up in different times and contexts, societal norms shift… I would argue usually for the better, but not always. So I think that that context in which you were raised, that initial training data that you apply your transfer learning to for the rest of your life, has a huge effect on what you’re actually going to do, and how you perceive the world moving forward.
I spent a good amount of time today at SigOpt. Can you tell me what you’re trying to do there, and why you started or co-founded it, and what the mission is? Give me that whole story.
Yeah, definitely. SigOpt is an optimization-as-a-service company, or a software-as-a-service offering. What we do is help people configure these complex systems. So when you’re building a neural network—or maybe it’s a reinforcement learning system, or an algorithmic trading strategy—there’s often many different tunable configuration parameters.
These are the settings that you need to put in place before the system itself starts to do any sort of learning: things like the depth of the neural network, the learning rates, some of these stochastic gradient descent parameters, etc.
These are often kind of nuisance parameters that are brushed under the rug. They’re typically solved via relatively simplistic methods like brute forcing it or trying random configurations. What we do is we take an ensemble of the state-of-the-art research from academia, and Bayesian and global optimization, and we ensemble all of these algorithms behind a simple API.
So when you are downloading MxNet, or TensorFlow, or Caffe2, whatever it is, you don’t have to waste a bunch of time trying different things via trial-and-error. We can guide you to the best solution quite a bit faster.
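The contrast Scott draws is between brute-force or random configuration search and model-based (Bayesian) optimization, where each new configuration to try is proposed using the results of all previous trials. The sketch below uses scikit-optimize as a generic stand-in for that class of methods; it is not SigOpt’s API, and the search space and training function are made up for illustration.

import math
from skopt import gp_minimize
from skopt.space import Integer, Real

# Hypothetical search space for a small neural network.
space = [
    Integer(1, 8, name="num_hidden_layers"),
    Real(1e-5, 1e-1, "log-uniform", name="learning_rate"),
    Real(0.0, 0.9, name="dropout"),
]

def train_and_validate(num_layers, lr, dropout):
    """Placeholder for a real training loop; returns a synthetic validation loss."""
    return ((num_layers - 3) ** 2 * 0.05
            + (math.log10(lr) + 3) ** 2 * 0.1
            + (dropout - 0.2) ** 2)

def objective(params):
    """Train with this configuration and return the validation loss to minimize."""
    num_layers, lr, dropout = params
    return train_and_validate(num_layers=num_layers, lr=lr, dropout=dropout)

# Gaussian-process-based search: each trial is chosen based on earlier results,
# instead of picking configurations at random or on a grid.
result = gp_minimize(objective, space, n_calls=30, random_state=0)
print("best config:", result.x, "best validation loss:", result.fun)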
Do you have any success stories that you like to talk about?
Yeah, definitely. One of our customers is Hotwire. They’re using us to do things like ranking systems. We work with a variety of different algorithmic trading firms to make their strategies more efficient. We also have this great academic program where SigOpt is free for any academic at any university or national lab anywhere in the world.
So we’re helping accelerate the flywheel of science by allowing people to spend less time doing trial-and-error. I wasted way too much of my PhD on this, to be completely honest—fine-tuning different configuration settings and bioinformatics algorithms.
So our goal is… If we can have humans do what they’re really good at, which is creativity—understanding the context in the domain of a problem—and then we can make the trial-and-error component as little as possible, hopefully, everything happens a little bit faster and a little bit better and more efficiently.
What are the big challenges you’re facing?
Where this system makes the biggest difference is in large complex systems, where it’s very difficult to manually tune, or brute force this problem. Humans tend to be pretty bad at doing 20-dimensional optimization in their head. But a surprising number of people still take that approach, because they’re unable to access some of this incredible research that’s been going on in academia for the last several decades.
Our goal is to make that as easy as possible. One of our challenges is finding people with these interesting complex problems. I think the recent surge of interest in deep learning and reinforcement learning, and the complexity that’s being imbued in a lot of these systems, is extremely good for us, and we’re able to ride that wave and help these people realize the potential of these systems quite a bit faster than they would otherwise.
But having the market come to us is something that we’re really excited about, but it’s not instant.
Do you find that people come to you and say, “Hey, we have this dataset, and we think somewhere in here we can figure out whatever”? Or do they just say, “We have this data, what can we do with it?” Or do they come to you and say, “We’ve heard about this AI thing, and want to know what we can do”?
There are companies that help solve that particular problem, where they’re given raw data and they help you build a model and apply it to some business context. Where SigOpt sits, which is slightly different than that, is when people come to us, they have something in place. They already have data scientists or machine learning engineers.
They’ve already applied their domain expertise to really understand their customers, the business problem they’re trying to solve, everything like that. And what they’re looking for is to get the most out of these systems that they’ve built. Or they want to build a more advanced system as rapidly as possible.
And so SigOpt bolts on top of these pre-existing systems, and gives them that boost by fine-tuning all of these different configuration parameters to get to their maximal performance. So, sometimes we do meet people like that, and we pass them on to some of our great partners. When someone has a problem and they just want to get the most out of it, that’s where we can come in and provide this black box optimization on top of it.
Final question-and-a-half. Do you speak a lot? Do you tweet? If people want to follow you and keep up with what you’re doing, what’s the best way to do that?
They can follow @SigOpt on Twitter. We have a blog where we post technical and high-level blog posts about optimization and some of the different advancements, and deep learning and reinforcement learning. We publish papers, but blog.sigopt.com and on Twitter @SigOpt is the best way to follow us along.
Alright. It has been an incredibly fascinating hour, and I want to thank you for taking the time.
Excellent. Thank you for having me. I’m really honored to be on the show.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here. 
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS
Voices in AI – Episode 12: A Conversation with Scott Clark syndicated from http://ift.tt/2wBRU5Z
0 notes
The artificial component comes into the fact that they were constructed, non-biologically. But other than that, I think they meet the definition of observing stimuli, reasoning about an environment, and achieving some outcome.
You used the phrase ‘they draw conclusions’. Are you using that colloquially, or does the machine actually conclude? Or does it merely calculate?
It calculates, but then it comes to, I guess, a decision at the end of the day. If it’s a classification system, for example… going back to “Is there a cat in this picture?” It draws the conclusion that “Yes, there was a cat. No, that wasn’t a cat.” It can do that with various levels of certainty in the same way that, potentially, a human would solve the exact same problem. If I showed you a blurry Polaroid picture you might be able to say, “I’m pretty sure there’s a cat in there, but I’m not 100 percent certain.”
And if I show you a very crisp picture of a kitten, you could be like, “Yes, there’s a cat there.” And I think a convolutional neural network is doing the exact same thing: taking in that outside stimuli. Not through an optical nerve, but through the raw encoding of pixels, and then coming to the exact same conclusion.
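To put those “various levels of certainty” in concrete terms: a classifier’s final layer typically produces scores that a softmax turns into probabilities, and those probabilities are what get read as confidence. A minimal sketch, where the numbers are made-up logits standing in for a blurry versus a crisp photo, not output from any real model:

import numpy as np

def softmax(logits):
    # Turn raw scores into probabilities that sum to 1.
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

# Hypothetical final-layer scores for the classes ["cat", "dog", "background"].
blurry_polaroid = np.array([1.2, 0.9, 0.8])
crisp_kitten = np.array([6.0, 1.0, 0.5])

print("blurry P(cat):", round(float(softmax(blurry_polaroid)[0]), 2))  # ~0.41, "pretty sure"
print("crisp P(cat):", round(float(softmax(crisp_kitten)[0]), 2))      # ~0.99, near-certain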
You make the really useful distinction between an AGI, which is a general intelligence—something as versatile as a human—and then the kinds of stuff we’re building now, which we call AI—which is doing this reasoning or drawing conclusions.
Is an AGI a linear development from what we have now? In other words, do we have all the pieces, and we just need faster computers, better algorithms, more data, a few nips and tucks, and we’re eventually going to get an AGI? Or is an AGI something very different, that is a whole different ball of wax?
I’m not convinced that, with the current tooling we have today, it’s just like… if we add one more hidden layer to a neural network, all of a sudden it’ll be AGI. That being said, I think this is how science and computer science and progress in general work: techniques are built upon each other, and we make advancements.
It might be a completely new type of algorithm. It might not be a neural network. It might be reinforcement learning. It might not be reinforcement learning. It might be the next thing. It might not be on a CPU or a GPU. Maybe it’s on a quantum computer. If you think of scientific and technological process as this linear evolution of different techniques and ideas, then I definitely think we are marching towards that as an eventual outcome.
That being said, I don’t think that there’s some magic combinatorial setting of what we have today that will turn into this. I don’t think it’s one more hidden layer. I don’t think it’s a GPU that can do one more teraflop—or something like that—that’s going to push us over the edge. I think it’s going to be things built from the foundation that we have today, but it will continue to be new and novel techniques.
There was an interesting talk at the International Conference on Machine Learning in Sydney last week about AlphaGo, and how they got this massive speed-up when they put in deep learning. They were able to break through this plateau that they had found in terms of playing ability, where they could play at the amateur level.
And then once they started applying deep learning networks, that got them to the professional, and now best-in-the-world level. I think we’re going to continue to see plateaus for some of these current techniques, but then we’ll come up with some new strategy that will blast us through and get to the next plateau. But I think that’s an ever-stratifying process.
To continue on that vein… When in 1955, they convened in Dartmouth and said, “We can solve a big part of AI in the summer, with five people,” the assumption was that general intelligence, like all the other sciences, had a few simple laws.
You had Newton, Maxwell; you had electricity and magnetism, and all these things, and they were just a few simple laws. The idea was that all we need to do is figure out those for intelligence. And Pedro Domingos argues in The Master Algorithm, from a biological perspective, that in a sense that may be true.
That if you look at the DNA difference between us and an animal that isn’t generally intelligent… the amount of code is just a few megabytes that’s different, which teaches how to make my brain and your brain. It sounded like you were saying, “No, there’s not going to be some silver bullet, it’s going to be a bunch of silver buckshot and we’ll eventually get there.”
But do you hold any hope that maybe it is a simple and elegant thing?
Going back to my original statement about what is AI, I think when Marvin Minsky and everybody sat down in Dartmouth, the goalposts for AI were somewhat different. Because they were attacking it for the first time, some of the things were definitely overambitious. But certain things that they set out to do that summer, they actually accomplished reasonably well.
Things like the Lisp programming language, and things like that, came out of that and were extremely successful. But then, once these goals are accomplished, the next thing comes up. Obviously, in hindsight, it was overambitious to think that they could maybe match a human, but I think if you were to go back to Dartmouth and show them what we have today, and say: “Look, this computer can describe the scene in this picture completely accurately.”
I think that could be indistinguishable from the artificial intelligence that they were seeking, even if today what we want is someone we can have a conversation with. And then once we can have a conversation, the next thing is we want them to be able to plan our lives for us, or whatever it may be, solve world peace.
While I think some of the fundamental building blocks will continue to be used—linear algebra and calculus, and things like that, will definitely be a core component of the algorithms that make up whatever does become AGI—I think there is a pretty big jump from that. Even if there’s only a few megabytes of difference between us and a starfish or something like that, every base of DNA is two bits.
If you have millions of differences, four-to-the-several million—like the state space for DNA—even though you can store it in a small amount of megabytes, there are so many different combinatorial combinations that it’s not like we’re just going to stumble upon it by editing something that we currently have.
It could be something very different in that configuration space. And I think those are the algorithmic advancements that will continue to push us to the next plateau, and the next plateau, until eventually we meet and/or surpass the human plateau.
You invoked quantum computers in passing, but putting that aside for a moment… Would you believe, just at a gut level—because nobody knows—that we have enough computing power to build an AGI, we just don’t know how?
Well, in the sense that if the human brain is general intelligence, the computing power in the human brain, while impressive… All of the computers in the world are probably better at performing some simple calculations than the biological gray matter mess that exists in all of our skulls. I think the raw amount of transistors and things like that might be there, if we had the right way to apply them, if they were all applied in the same direction.
That being said… Whether or not that’s enough to make it ubiquitous, or whether or not having all the computers in the world mimic a single human child will be considered artificial general intelligence, or if we’re going to need to apply it to many different situations before we claim victory, I think that’s up for semantic debate.
Do you think about how the brain works, even if [the context] is not biological? Is that how you start a problem: “Well, how do humans do this?” Does that even guide you? Does that even begin the conversation? And I know none of this is a map: Birds fly with wings, and airplanes, all of that. Is there anything to learn from human intelligence that you, in a practical, day-to-day sense, use?
Yeah, definitely. I think it often helps to try to approach a problem from fundamentally different ways. One way to approach that problem is from the purely mathematical, axiomatic way; where we’re trying to build up from first principles, and trying to get to something that has a nice proof or something associated with it.
Another way to try to attack the problem is from a more biological setting. If I had to solve this problem, and I couldn’t assume any of those axioms, then how would I begin to try to build heuristics around it? Sometimes you can go from that back to the proof, but there are many different ways to attack that problem. Obviously, there are a lot of things in computer science, and optimization in general, that are motivated by physical phenomena.
So a neural network, if you squint, looks kind of like a biological brain’s neural network. There are things like simulated annealing, which is a global optimization strategy that mimics the way steel is annealed… where it tries to find some local lattice structure that has low energy, and then you pound the steel with the hammer, and that increases the energy to find a better global-optimum lattice structure that makes harder steel.
But that’s also an extremely popular algorithm in the scientific literature. So it was arrived at from this auxiliary direction. Or take a genetic algorithm, where you’re slowly evolving a population to try to get to a good result. I think there is definitely room for a lot of these algorithms to be inspired by biological or physical phenomena, whether or not they need that origin to be proficient. I would have trouble, off the top of my head, coming up with the biological equivalent for a support vector machine or something like that. So there are two different ways to attack it, but both can produce really interesting results.
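For readers who haven’t met simulated annealing before, the whole trick fits in a few lines: accept any improvement, occasionally accept a worse move while the “temperature” is high, and cool down over time. This is a toy sketch on a made-up one-dimensional function; the constants are arbitrary choices for illustration, not tuned values:

import math
import random

def objective(x):
    # Toy function with several local minima.
    return x * x + 3.0 * math.sin(5.0 * x)

def simulated_annealing(start, temp=5.0, cooling=0.99, steps=5000):
    x, best = start, start
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.5)          # propose a nearby point
        delta = objective(candidate) - objective(x)
        # Accept improvements outright; accept worse moves with probability
        # exp(-delta / temp), which shrinks as the system "cools".
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        if objective(x) < objective(best):
            best = x
        temp *= cooling
    return best

print(simulated_annealing(start=8.0))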
Let’s take a normal thing that a human does, which is: You show a human training data of the Maltese Falcon, the little statue from the movie, and then you show him a bunch of photos. And a human can instantly say, “There’s the falcon under water, and there it’s half-hidden by a tree, and there it’s upside down…” A human does that naturally. So it’s some kind of transferred learning. How do we do that?
Transfer learning is the way that that happens. You’ve seen trees before. You’ve seen water. You’ve seen how objects look inside and outside of water before. And then you’re able to apply that knowledge to this new context.
It might be difficult for a human who grew up in a sensory deprivation chamber to look at this object… and then you start to show them things that they’ve never seen before: “Here’s this object and a tree,” and they might not ‘see the forest for the trees’ as it were.
In addition to that, without any context whatsoever, you take someone who was raised in a sensory deprivation chamber, and you start showing them pictures and ask them to do classification type tasks. They may be completely unaware of what’s the reward function here. Who is this thing telling me to do things for the first time I’ve never seen before?
What does it mean to even classify things or describe an object? Because you’ve never seen an object before.
And when you start training these systems from scratch, with no previous knowledge, that’s how they work. They need to slowly learn what’s good, what’s bad. There’s a reward function associated with that.
But with no context, with no previous information, it’s actually very surprising how well they are able to perform these tasks; considering [that when] a child is born, four hours later it isn’t able to do this. A machine algorithm that’s trained from scratch over the course of four hours on a couple of GPUs is able to do this.
You mentioned the sensory deprivation chamber a couple of times. Do you have a sense that we’re going to need to embody these AIs to allow them to—and I use the word very loosely—‘experience’ the world? Are they locked in a sensory deprivation chamber right now, and that’s limiting them?
I think with transfer learning, and pre-training of data, and some reinforcement algorithm work, there’s definitely this idea of trying to make that better, and bootstrapping based off of previous knowledge in the same way that a human would attack this problem. I think it is a limitation. It would be very difficult to go from zero to artificial general intelligence without providing more of this context.
There’s been many papers recently, and OpenAI had this great blog post recently where, if you teach the machine language first, if you show it a bunch of contextual information—this idea of this unsupervised learning component of it, where it’s just absorbing information about the potential inputs it can get—that allows it to perform much better on a specific task, in the same way that a baby absorbs language for a long time before it actually starts to produce it itself.
And it could be in a very unstructured way, but it’s able to learn some of the actual language structure or sounds from the particular culture in which it was raised in this unstructured way.
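That pre-train-then-specialize pattern is what transfer learning looks like in code. As one concrete sketch, using PyTorch and torchvision (the two-class “falcon or not” task is hypothetical, and freezing everything but the final layer is just one common recipe, not the only one):

import torch
import torch.nn as nn
from torchvision import models

# Start from weights already trained on a large, generic image dataset:
# this is the "absorbed context" a from-scratch model would lack.
# (Newer torchvision releases prefer the weights= argument over pretrained=True.)
backbone = models.resnet18(pretrained=True)

# Freeze the pretrained layers so only the new head learns.
for param in backbone.parameters():
    param.requires_grad = False

# Swap in a fresh final layer for the narrow new task (e.g., falcon vs. not-falcon).
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
# ...then train only the head on the small, task-specific dataset.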
Let’s talk a minute about human intelligence. Why do you think we understand so poorly how the brain works?
That’s a great question. It’s easier scientifically, with my background in math and physics—it seems like it’s easier to break down modular decomposable systems. Humanity has done a very good job at understanding, at least at a high level, how physical systems work, or things like chemistry.
Biology starts to get a little bit messier, because it’s less modular and less decomposable. And as you start to build larger and larger biological systems, it becomes a lot harder to understand all the different moving pieces. Then you go to the brain, and then you start to look at psychology and sociology, and all of the lines get much fuzzier.
It’s very difficult to build an axiomatic rule system. And humans aren’t even able to do that in some sort of grand unified way with physics, or understand quantum mechanics, or things like that; let alone being able to do it for these sometimes infinitely more complex systems.
Right. But the most successful animal on the planet is a nematode worm. Ten percent of all animals are nematode worms. They’re successful, they find food, and they reproduce and they move. Their brains have 302 neurons. We’ve spent twenty years trying to model that, a bunch of very smart people in the OpenWorm project…
 But twenty years trying to model 300 neurons to just reproduce this worm, make a digital version of it, and even to this day people in the project say it may not be possible.
I guess the argument is, 300 sounds like a small amount. One thing that’s very difficult for humans to internalize is the exponential function. So if intelligence grew linearly, then yeah. If we could understand one, then 300 might not be that much, whatever it is. But if the state space grows exponentially, or the complexity grows exponentially… if there’s ten different positions for every single one of those neurons, like 10^300, that’s more than the number of atoms in the universe.
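The arithmetic behind that claim is easy to check; the “ten positions per neuron” figure is Scott’s illustrative assumption, and the observable universe is usually estimated at roughly 10^80 atoms:

state_space = 10 ** 300          # ten hypothetical states for each of ~300 neurons
atoms_in_universe = 10 ** 80     # rough standard estimate
print(state_space > atoms_in_universe)    # True
print(state_space // atoms_in_universe)   # still an astronomically large number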
Right. But we aren’t starting by just rolling 300 dice and hoping for them all to be—we know how those neurons are arranged.
At a very high level we do.
I’m getting to a point, that we maybe don’t even understand how a neuron works. A neuron may be doing stuff down at the quantum level. It may be this gigantic supercomputer we don’t even have a hope of understanding, a single neuron.
From a chemical way, we can have an understanding of, “Okay, so we have neurotransmitters that carry a positive charge, that then cause a reaction based off of some threshold of charge, and there’s this catalyst that happens.” I think from a physics and chemical understanding, we can understand the base components of it, but as you start to build these complex systems that have this combinatorial set of states, it does become much more difficult.
And I think that’s that abstraction, where we can understand how simple chemical reactions work. But then it becomes much more difficult once you start adding more and more. Or even in physics… like if you have two bodies, and you’re trying to calculate the gravity, that’s relatively easy. Three? Harder. Four? Maybe impossible. It becomes much harder to solve these higher-order, higher-body problems. And even with 302 neurons, that starts to get pretty complex.
Oddly, two of them aren’t connected to anything, just like floating out there…
Do you think human intelligence is emergent?
In what respect?
I will clarify that. There are two sorts of emergence: one is weak, and one is strong. Weak emergence is where a system takes on characteristics which don’t appear at first glance to be derivable from them. So the intelligence displayed by an ant colony, or a beehive—the way that some bees can shimmer in unison to scare off predators. No bee is saying, “We need to do this.”  
The anthill behaves intelligently, even though… The queen isn’t, like, in charge; the queen is just another ant, but somehow it all adds intelligence. So that would be something where it takes on these attributes.
Can you really intuitively derive intelligence from neurons?
And then, to push that a step further, there are some who believe in something called ‘strong emergence’, where they literally are not derivable. You cannot look at a bunch of matter and explain how it can become conscious, for instance. It is what the minority of people believe about emergence, that there is some additional property of the universe we do not understand that makes these things happen.
The question I’m asking you is: Is reductionism the way to go to figure out intelligence? Is that how we’re going to kind of make advances towards an AGI? Just break it down into enough small pieces.
I think that is an approach, whether or not that’s ‘the’ ultimate approach that works is to be seen. As I was mentioning before, there are ways to take biological or physical systems, and then try to work them back into something that then can be used and applied in a different context. There’s other ways, where you start from the more theoretical or axiomatic way, and try to move forward into something that then can be applied to a specific problem.
I think there’s wide swaths of the universe that we don’t understand at many levels. Mathematics isn’t solved. Physics isn’t solved. Chemistry isn’t solved. All of these build on each other to get to these large, complex, biological systems. It may be a very long time, or we might need an AGI to help us solve some of these systems.
I don’t think it’s required to understand everything to be able to observe intelligence—like, proof by example. I can’t tell you why my brain thinks, but my brain is thinking, if you can assume that humans are thinking. So you don’t necessarily need to understand all of it to put it all together.
Let me ask you one more far-out question, and then we’ll go to a little more immediate future. Do you have an opinion on how consciousness comes about? And if you do or don’t, do you believe we’re going to build conscious machines?
Even to throw a little more into that one, do you think consciousness—that ability to change focus and all of that—is a requisite for general intelligence?
So, I would like to hear your definition of consciousness.
I would define it by example, to say that it’s subjective experience. It’s how you experience things. We’ve all had that experience when you’re driving, that you kind of space out, and then, all of a sudden, you kind of snap to. “Whoa! I don’t even remember getting here.”
And so that time when you were driving, your brain was elsewhere, you were clearly intelligent, because you were merging in and out of traffic. But in the sense I’m using the word, you were not ‘conscious’, you were not experiencing the world. If your foot caught on fire, you would feel it; but you weren’t experiencing the world. And then instantly, it all came on and you were an entity that experienced something.
Or, put another way… this is often illustrated with the problem of Mary by Frank Jackson:
He offers somebody named Mary, who knows everything about color, like, at a god-like level—knows every single thing about color. But the catch is, you might guess, she’s never seen it. She’s lived in a room, black-and-white, never seen it [color]. And one day, she opens the door, she looks outside and she sees red.  
The question becomes: Does she learn anything? Did she learn something new?  
In other words, is experiencing something different than knowing something? Those two things taken together, defining consciousness, is having an experience of the world…
I’ll give one final one. You can hook a sensor up to a computer, and you can program the computer to play an mp3 of somebody screaming if the sensor hits 500 degrees. But nobody would say, at this day and age, the computer feels the pain. Could a computer feel anything?
Okay. I think there’s a lot to unpack there. I think computers can perceive the environment. Your webcam is able to record the environment in the same way that your optical nerves are able to record the environment. When you’re driving a car, and daydreaming, and kind of going on autopilot, as it were, there still are processes running in the background.
If you were to close your eyes, you would be much worse at doing lane merging and things like that. And that’s because you’re still getting the sensory input, even if you’re not actively, consciously aware of the fact that you’re observing that input.
Maybe that’s where you’re getting at with consciousness here, is: Not only the actual task that’s being performed, which I think computers are very good at—and we have self-driving cars out on the street in the Bay Area every day—but that awareness of the fact that you are performing this task, is kind of meta-level of: “I’m assembling together all of these different subcomponents.”
Whether that’s driving a car, thinking about the meeting that I’m running late to, some fight that I had with my significant other the night before, or whatever it is. There’s all these individual processes running, and there could be this kind of global awareness of all of these different tasks.
I think today, where artificial intelligence sits is, performing each one of these individual tasks extremely well, toward some kind of objective function of, “I need to not crash this car. I need to figure out how to resolve this conflict,” or whatever it may be; or, “Play this game in an artificial intelligence setting.” But we don’t yet have that kind of governing overall strategy that’s aware of making these tradeoffs, and then making those tradeoffs in an intelligent way. But that overall strategy itself is just going to be going toward some specific reward function.
Probably when you’re out driving your car, and you’re spacing out, your overall reward function is, “I want to be happy and healthy. I want to live a meaningful life,” or something like that. It can be something nebulous, but you’re also just this collection of subroutines that are driving towards this specific end result.
But the direct question of what would it mean for a computer to feel pain? Will a computer feel pain? Now they can sense things, but nobody argues they have a self that experiences the pain. It matters, doesn’t it?
It depends on what you mean by pain. If you mean there’s a response of your nervous system to some outside stimuli that you perceive as pain, a negative response, and—
—It involves emotional distress. People know what pain is. It hurts. Can a computer ever hurt?
It’s a fundamentally negative response to what you’re trying to achieve. So pain and suffering is the opposite of happiness. And your objective function as a human is happiness, let’s say. So, by failing to achieve that objective, you feel something like pain. Evolutionarily, we might have evolved this in order to avoid specific things. Like, you get pain when you touch flame, so don’t touch flame.
And the reason behind that is biological systems degrade in high-temperature environments, and you’re not going to be able to reproduce or something like that.
You could argue that when a classification system fails to classify something, and it gets penalized in its reward function, that’s the equivalent of it finding something where, in its state of the world, it has failed to achieve its goal, and it’s getting the opposite of what its purpose is. And that’s similar to pain and suffering in some way.
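In loss-function terms, that “getting the opposite of what its purpose is” is literally a number the system is pushed away from. For example, with a cross-entropy loss, a standard choice used here purely as illustration:

import math

# Cross-entropy penalty for a single prediction; the true label is "cat".
def penalty(predicted_prob_cat):
    return -math.log(predicted_prob_cat)

print(penalty(0.99))   # confident and right: tiny penalty (~0.01)
print(penalty(0.01))   # confident and wrong: large penalty (~4.6)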
But is it? Let’s be candid. You can’t take a person and torture them, because that’s a terrible thing to do… because they experience pain. [Whereas if] you write a program that has an infinite loop that causes your computer to crash, nobody’s going to suggest you should go to jail for that. Because people know that those are two very different things.
It is a negative neurological response based off of outside stimuli. A computer can have a negative response, and perform based off of outside stimuli poorly, relative to what it’s trying to achieve… Although I would definitely agree with you that that’s not a computer experiencing pain.
But from a pure chemical level, down to the algorithmic component of it, they’re not as fundamentally different… that because it’s a human, there’s something magic about it being a human. A dog can also experience pain.
These worms—I’m not as familiar with the literature on that, but [they] could potentially experience pain. And as you derive that further and further back, you might have to bend your definition of pain. Maybe they’re not feeling something in a central nervous system, like a human or a dog would, but they’re perceiving something that’s negative to what they’re trying to achieve with this utility function.
But we do draw a line. And I don’t know that I would use the word ‘magic’ the way you’re doing it. We draw this line by saying that dogs feel pain, so we outlaw animal cruelty. Bacteria don’t, so we don’t outlaw antibiotics. There is a material difference between those two things.
So if the difference is a central nervous system, and pain is being defined as a nervous response to some outside stimuli… then unless we explicitly design machines to have central nervous systems, then I don’t think they will ever experience pain.
Thanks for indulging me in all of that, because I think it matters… Because up until thirty years ago, veterinarians typically didn’t use anesthetic. They were told that animals couldn’t feel pain. Babies were operated on in the ‘90s—open heart surgery—under the theory they couldn’t feel pain.  
What really intrigues me is the idea of how would we know if a machine did? That’s what I’m trying to deconstruct. But enough of that. We’ll talk about jobs here in a minute, and those concerns…
There’s groups of people that are legitimately afraid of AI. You know all the names. You’ve got Elon Musk, you get Stephen Hawking. Bill Gates has thrown in his hat with that, Wozniak has. Nick Bostrom wrote a book that addressed existential threat and all of that. Then you have Mark Zuckerberg, who says no, no, no. You get Oren Etzioni over at the Allen Institute, just working on some very basic problem. You get Andrew Ng with his “overpopulation on Mars. This is not helpful to even have this conversation.”
What is different about those two groups in your mind? What is the difference in how they view the world that gives them these incredibly different viewpoints?
I think it goes down to a definition problem. As you mentioned at the beginning of this podcast, when you ask people, “What is artificial intelligence?” everybody gives you a different answer. I think each one of these experts would also give you a different answer.
If you define artificial intelligence as matrix multiplication and gradient descent in a deep learning system, trying to achieve a very specific classification output given some pixel input—or something like that—it’s very difficult to conceive that as some sort of existential threat for humanity.
But if you define artificial intelligence as this general intelligence, this kind of emergent singularity where the machines don’t hit the plateau, that they continue to advance well beyond humans… maybe to the point where they don’t need humans, or we become the ants in that system… that becomes very rapidly a very existential threat.
As I said before, I don’t think there’s an incremental improvement from algorithms—as they exist in the academic literature today—to that singularity, but I think it can be a slippery slope. And I think that’s what a lot of these experts are talking about… Where if it does become this dynamic system that feeds on itself, by the time we realize it’s happening, it’ll be too late.
Whether or not that’s because of the algorithms that we have today, or algorithms down the line, it does make sense to start having conversations about that, just because of the time scales over which governments and policies tend to work. But I don’t think someone is going to design a TensorFlow or MXNet algorithm tomorrow that’s going to take over the world.
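For contrast with the speculative end of that range, the narrow sense Scott gives above, “matrix multiplication and gradient descent… trying to achieve a very specific classification output,” really is just a loop like the following toy logistic-regression sketch (made-up data, nothing from any production system):

import numpy as np

# Toy dataset: four 2-feature points and binary labels.
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])

w, b, lr = np.zeros(2), 0.0, 0.1

for _ in range(1000):
    z = X @ w + b                       # matrix multiplication
    p = 1.0 / (1.0 + np.exp(-z))        # predicted probability of class 1
    grad_w = X.T @ (p - y) / len(y)     # gradient of the cross-entropy loss
    grad_b = float(np.mean(p - y))
    w -= lr * grad_w                    # gradient descent step
    b -= lr * grad_b

print(w, b)  # parameters pushed toward one narrow objective, nothing more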
There’s legislation in Europe to basically say, if an AI makes a decision about whether you should get an auto loan or something, you deserve to know why it turned you down. Is that a legitimate request, or is it like you go to somebody at Google and say, “Why is this site ranked number one and this site ranked number two?” There’s no way to know at this point.  
Or is that something that, with the auto loan thing, you’re like, “Nope, here are the big bullet points of what went into it.” And if that becomes the norm, does that slow down AI in any way?
I think it’s important to make sure, just from a societal standpoint, that we continue to strive towards not being discriminatory towards specific groups and people. It can be very difficult, when you have something that looks like a black box from the outside, to be able to say, “Okay, was this being fair?” based off of the fairness that we as a society have agreed upon.
The machine doesn’t have that context. The machine doesn’t have the policy, necessarily, inside to make sure that it’s being as fair as possible. We need to make sure that we do put these constraints on these systems, so that it meets what we’ve agreed upon as a society, in laws, etc., to adhere to. And that it should be held to the same standard as if there was a human making that same decision.
There is, of course, a lot of legitimate fear wrapped up about the effect of automation and artificial intelligence on employment. And just to set the problem up for the listeners, there’s broadly three camps, everybody intuitively knows this.
 There’s one group that says, “We’re going to advance our technology to the point that there will be a group of people who do not have the educational skills needed to compete with the machines, and we’ll have a permanent underclass of people who are unemployable.” It would be like the Great Depression never goes away.
And then there are people who say, “Oh, no, no, no. You don’t understand. Everything, every job, a machine is going to be able to do.” You’ll reach a point where the machine will learn it faster than the human, and that’s it.
And then you’ve got a third group that says, “No, that’s all ridiculous. We’ve had technology come along, as transformative as it is… We’ve had electricity, and machines replacing animals… and we’ve always maintained full employment.” Because people just learn how to use these tools to increase their own productivity, maintain full employment—and we have growing wages.
So, which of those, or a fourth one, do you identify with?
This might be an unsatisfying answer, but I think we’re going to go through all three phases. I think we’re in the third camp right now, where people are learning new systems, and it’s happening at a pace where people can go to a computer science boot camp and become an engineer, and try to retrain and learn some of these systems, and adapt to this changing scenario.
I think, very rapidly—especially at the exponential pace that technology tends to evolve—it does become very difficult. Fifty years ago, if you wanted to take apart your telephone and try to figure out how it works, repair it, that was something that a kid could do at a camp kind of thing, like an entry circuits camp. That’s impossible to do with an iPhone.
I think that’s going to continue to happen with some of these more advanced systems, and you’re going to need to spend your entire life understanding some subcomponent of it. And then, in the further future, as we move towards this direction of artificial general intelligence… Like, once a machine is a thousand times, ten thousand times, one hundred thousand times smarter—by whatever definition—than a human, and that increases at an exponential pace… We won’t need a lot of different things.
Whether or not that’s a fundamentally bad thing is up for debate. I think one thing that’s different about this than the Industrial Revolution, or the agricultural revolution, or things like that, that have happened throughout human history… is that instead of this happening over the course of generations or decades… Maybe if your father, and your grandfather, and your entire family tree did a specific job, but then that job doesn’t exist anymore, you train yourself to do something different.
Once it starts to happen over the course of a decade, or a year, or a month, it becomes much harder to completely retrain. That being said, there’s lots of thoughts about whether or not humans need to be working to be happy. And whether or not there could be some other fundamental thing that would increase the net happiness and fulfillment of people in the world, besides sitting at a desk for forty hours a week.
And maybe that’s actually a good thing, if we can set up the societal constructs to allow people to do that in a healthy and happy way.
Do you have any thoughts on computers displaying emotions, emulating emotions? Is that going to be a space where people are going to want authentic human experiences in those in the future? Or are we like, “No, look at how people talk to their dog,” or something? If it’s good enough to fool you, you just go along with the conceit?
The great thing about computers, and artificial intelligence systems, and things like that is if you point them towards a specific target, they’ll get pretty good at hitting that target. So if the goal is to mimic human emotion, I think that that’s something that’s achievable. Whether or not a human cares, or is even able to distinguish between that and actual human emotion, could be very difficult.
At Cornell, where I did my PhD, they had this psychology chatbot called ELIZA—I think this was back in the ‘70s. It went through a specific school of psychological behavioral therapy thought, replied with specific ways, and people found it incredibly helpful.
Even if they knew that it was just a machine responding to them, it was a way for them to get out their emotions and work through specific problems. As these machines get more sophisticated and able, as long as it’s providing utility to the end user, does it matter who’s behind the screen?
That’s a big question. Weizenbaum shut down ELIZA because he said that when a machine says, “I understand” that it’s a lie, there’s no ‘I’, and there’s nothing [there] that understands anything. He had real issues with that.
But then when they shut it down, some of the end users were upset, because they were still getting quite a bit of utility out of it. There’s this moral question of whether or not you can take away something from someone who is deriving benefit from it as well.
So I guess the concern is that maybe we reach a day where an AI best friend is better than a real one. An AI one doesn’t stand you up. And an AI spouse is better than a human spouse, because of all of those reasons. Is that a better world, or is it not?
I think it becomes a much more dangerous world, because as you said before, someone could decide to turn off the machine. When it’s someone taking away your psychologist, that could be very dangerous. When it’s someone deciding that you didn’t pay your monthly fee, so they’re going to turn off your spouse, that could be quite a bit worse as well.
As you mentioned before, people don’t necessarily associate the feelings or pain or anything like that with the machine, but as these get more and more life-like, and as they are designed with the reward function of becoming more and more human-like, I think that distinction is going to become quite a bit harder for us to understand.
And it not only affects the machine—which you can make the argument doesn’t have a voice—but it’ll start to affect the people as well.
One more question along these lines. You were a Forbes 30 Under 30. You’re fine with computer emotions, and you have this set of views. Do you notice any generational difference between researchers who have been in it longer than you, and people of your age and training? Do you look at it, as a whole, differently than another generation might have?
I think there are always going to be generational differences. People grow up in different times and contexts, societal norms shift… I would argue usually for the better, but not always. So I think that that context in which you were raised, that initial training data that you apply your transfer learning to for the rest of your life, has a huge effect on what you’re actually going to do, and how you perceive the world moving forward.
I spent a good amount of time today at SigOpt. Can you tell me what you’re trying to do there, and why you started or co-founded it, and what the mission is? Give me that whole story.
Yeah, definitely. SigOpt is an optimization-as-a-service company, or a software-as-a-service offering. What we do is help people configure these complex systems. So when you’re building a neural network—or maybe it’s a reinforcement learning system, or an algorithmic trading strategy—there’s often many different tunable configuration parameters.
These are the settings that you need to put in place before the system itself starts to do any sort of learning: things like the depth of the neural network, the learning rates, some of these stochastic gradient descent parameters, etc.
These are often kind of nuisance parameters that are brushed under the rug. They’re typically solved via relatively simplistic methods like brute forcing it or trying random configurations. What we do is we take an ensemble of the state-of-the-art research from academia, and Bayesian and global optimization, and we ensemble all of these algorithms behind a simple API.
So when you are downloading MxNet, or TensorFlow, or Caffe2, whatever it is, you don’t have to waste a bunch of time trying different things via trial-and-error. We can guide you to the best solution quite a bit faster.
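The workflow being described is essentially a suggest-and-observe loop wrapped around your existing training code. The sketch below is not SigOpt’s actual API; it uses plain random search as a stand-in for the Bayesian/global-optimization ensemble, with made-up parameter names, just to show the shape of the loop:

import random

def suggest():
    # Stand-in for a real optimizer: sample a configuration at random.
    return {
        "learning_rate": 10 ** random.uniform(-4, -1),
        "depth": random.randint(2, 8),
        "batch_size": random.choice([16, 32, 64, 128, 256]),
    }

def train_and_evaluate(config):
    # Placeholder: train your model with this configuration and
    # return a validation metric you want to maximize.
    return random.random()

best_config, best_score = None, float("-inf")
for _ in range(20):                      # 20 automated trials instead of hand-tuning
    config = suggest()
    score = train_and_evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print(best_config, best_score)

A Bayesian optimizer replaces the random suggest() with suggestions informed by all previous observations, which is where the speed-up over trial-and-error comes from.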
Do you have any success stories that you like to talk about?
Yeah, definitely. One of our customers is Hotwire. They’re using us to do things like ranking systems. We work with a variety of different algorithmic trading firms to make their strategies more efficient. We also have this great academic program where SigOpt is free for any academic at any university or national lab anywhere in the world.
So we’re helping accelerate the flywheel of science by allowing people to spend less time doing trial-and-error. I wasted way too much of my PhD on this, to be completely honest—fine-tuning different configuration settings and bioinformatics algorithms.
So our goal is… If we can have humans do what they’re really good at, which is creativity—understanding the context in the domain of a problem—and then we can make the trial-and-error component as little as possible, hopefully, everything happens a little bit faster and a little bit better and more efficiently.
What are the big challenges you’re facing?
Where this system makes the biggest difference is in large complex systems, where it’s very difficult to manually tune, or brute force this problem. Humans tend to be pretty bad at doing 20-dimensional optimization in their head. But a surprising number of people still take that approach, because they’re unable to access some of this incredible research that’s been going on in academia for the last several decades.
Our goal is to make that as easy as possible. One of our challenges is finding people with these interesting complex problems. I think the recent surge of interest in deep learning and reinforcement learning, and the complexity that’s being imbued in a lot of these systems, is extremely good for us, and we’re able to ride that wave and help these people realize the potential of these systems quite a bit faster than they would otherwise.
But having the market come to us is something that we’re really excited about, but it’s not instant.
Do you find that people come to you and say, “Hey, we have this dataset, and we think somewhere in here we can figure out whatever”? Or do they just say, “We have this data, what can we do with it?” Or do they come to you and say, “We’ve heard about this AI thing, and want to know what we can do”?
There are companies that help solve that particular problem, where they’re given raw data and they help you build a model and apply it to some business context. Where SigOpt sits, which is slightly different than that, is when people come to us, they have something in place. They already have data scientists or machine learning engineers.
They’ve already applied their domain expertise to really understand their customers, the business problem they’re trying to solve, everything like that. And what they’re looking for is to get the most out of these systems that they’ve built. Or they want to build a more advanced system as rapidly as possible.
And so SigOpt bolts on top of these pre-existing systems, and gives them that boost by fine-tuning all of these different configuration parameters to get to their maximal performance. So, sometimes we do meet people like that, and we pass them on to some of our great partners. When someone has a problem and they just want to get the most out of it, that’s where we can come in and provide this black box optimization on top of it.
Final question-and-a-half. Do you speak a lot? Do you tweet? If people want to follow you and keep up with what you’re doing, what’s the best way to do that?
They can follow @SigOpt on Twitter. We have a blog where we post technical and high-level blog posts about optimization and some of the different advancements, and deep learning and reinforcement learning. We publish papers, but blog.sigopt.com and on Twitter @SigOpt is the best way to follow us along.
Alright. It has been an incredibly fascinating hour, and I want to thank you for taking the time.
Excellent. Thank you for having me. I’m really honored to be on the show.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here. 
from Gigaom https://gigaom.com/2017/10/17/voices-in-ai-episode-12-a-conversation-with-scott-clark/
0 notes
thementalattic · 7 years
Text
The countryside is full of demons and only a deadpan Monster hunter can save the day, one stage at a time. Is it The Witcher or Van Helsing? No, it’s the new boy, Victor Vran.
Genre(s): Action RPG
Developer: Haemimont Games
Publisher: EuroVideo Medien
Release Date: June 2017
Played Main Story, Fractured Worlds, Motörhead: Through the Ages
Platforms: PC
Purchase At: Steam
Good:
Jumping.
Level challenges.
Motörhead.
Bad:
Subpar plot.
Simple bosses.
Review
Victor Vran begins with our eponymous hunter arriving in a demon-infested kingdom looking for his friend and fellow hunter, Adrian. After a few battles with demonic spiders and skeletons, he reaches the royal palace, where he learns the kingdom has been under siege and that he’s not the first hunter to come to its aid; he’s just one of two who have survived this long, the rest having been lured to different locales and killed off by powerful enemies.
There’s something amiss, of course, and Victor doesn’t want to be involved, but the strange voice in his head keeps taunting him, convincing him there is more to the tale, and so he decides to stay and clear out the demonic infestation.
I gotta say, I loved the mysterious voice talking to you. It’s of course the voice of a villain, but the mocking tone in his voice and how he plays with your in-game choices is just brilliant. It’s the kind of villain you love to hate or just, you know, love.
But the first thing that I loved about Victor Vran was the ability to jump. Have you ever played a Diablo-esque game and wished you could just jump over the barrier or down a platform? Well, Victor Vran answers that question for you and it’s just amazing! You can essentially make your own way across the different levels, and while you can’t jump across every hole or platform, you quickly learn to recognise where you can take advantage of platforming and put it to good use. The best thing about it is that Victor can Wall-Jump, so you can easily reach high ledges or different levels of the stage by jumping over its borders. It’s phenomenal.
Hell no on the hardcore, game’s tough enough as it is!
  Speaking of Diablo, while other games in the genre give you classes with different skill sets, Victor Vran goes in another direction. When you start the game you choose an outfit, which determines character focus and how they build Overdrive, the power source for your demonic powers, which come from equipped items. You can equip two of these, which means you can come up with some interesting combinations, mine were a berserking state and a shield, to offset the extra damage I receive during the berserker rage. When it comes to skills though, your weapons determine those, as each weapon type has its own built-in abilities. What it lacks in variety it gives in familiarity, as you’ll know just how to play every long sword or rapier you pick up.
Speaking of levels, Victor Vran is not a game where you seamlessly go from one area to the next, but instead you pick the next level from a central hub. Best thing about it though is that each level has a set of challenges, granting you extra experience, money and gear if you complete them. Some are about killing a given monster or group without taking damage, or without using some restorative items or special powers, and some are just about taking out secret enemies with specific weaponry.
Nothing like a guitar to kill demons!
  Though the challenges add to the gameplay considerably, as they force you to adapt to new situations, they can lead to some frustration of course, particularly those that place heavy restrictions on your skills and items against the difficulty of the level.
The main campaign is carried mainly by its charismatic villain, because the plot is monumentally uninteresting and stretches itself out by forcing you to visit irrelevant locales before letting you go to the location with the next plot point, the one you wanted to enter from the first moment you reached the map.
The first time I let it go, but when it became a trend, it annoyed me greatly. Later on, Victor Vran forces you to go after the most generic and bland demon generals, the worst being the two giant spiders which look exactly like the other giant spiders. These boss fights are also too simple, the boss AI too easy to manipulate and only the last boss being anything remotely close to lethal. Their major trick is to summon normal mooks, which is a sin of boss design in my book.
Before playing the other game modes included in Victor Vran: Overkill Edition, I would’ve complained of the lack of variety in enemies, as the ones you fight in the main campaign can be divided into palette swaps of four groups: spiders, skeletons, vampires and wraiths. But thankfully, the two addons: Fractured Worlds and Motörhead: Through the Ages add such a refreshing number of enemies that I can’t complain about it anymore. Fractured Worlds sends you on a long quest across so many varying maps and enemies you’ll never grow tired of the variety.
I like the visual design for Victor Vran, as even in its most shadowy or derelict, or even haunted level, there is plenty of colour, either from the environmental design itself or from the many abilities used by Victor or his enemies.
In terms of sound, there are two noteworthy parts. The first is Victor’s voice, which is identical to Geralt of Rivia’s, as they share the same actor (and it seems as if he can’t do another voice these days); the second is the music. Whilst the main campaign music is pretty good, particularly during the boss fights and mysterious scenes, the best music is of course the Motörhead tracks in the band’s addon, where you fight across a broken World War II landscape trying to rescue that hellish world from Hitler while listening to Motörhead. It’s awesome.
One of the many gear peddlers in the game
Hey Queenie
Your snazzy central hub
Ghost Hunters, gather!!
Hexes really add difficulty to the game.
Just another spider
Blandest lich I’ve ever seen
Apocalypse, just another spectre
Love to jump!
Too many damn spiders in the main campaign
The power or rock compels you!
Wall-e!!
Tell me a story, Lloyd
Thanks for the axe, Lemmy!
Conclusion
Victor Vran was already a pretty fun game, but the Overkill edition’s addons, Fractured Worlds and Motörhead: Through the Ages really bring the game to a new level, with excellent music, a large variety of enemies and heavy metal weaponry!
TMA SCORE:
5/5 – Hell Yes!
0 notes
techscopic · 7 years
Text
Voices in AI – Episode 12: A Conversation with Scott Clark
Today’s leading minds talk AI with host Byron Reese
In this episode, Byron and Scott talk about algorithms, transfer learning, human intelligence, and pain and suffering.
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now on iTunes, Play, Stitcher, or RSS.
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Scott Clark. He is the CEO and co-founder of SigOpt. They’re a SaaS startup for tuning complex systems and machine learning models. Before that, Scott worked on the ad targeting team at Yelp, leading the charge on academic research and outreach. He holds a PhD in Applied Mathematics and an MS in Computer Science from Cornell, and a BS in Mathematics, Physics, and Computational Physics from Oregon State University. He was chosen as one of Forbes 30 under 30 in 2016. Welcome to the show, Scott.
Scott Clark: Thanks for having me.
I’d like to start with the question, because I know two people never answer it the same: What is artificial intelligence?
I like to go back to an old quote… I don’t remember the attribution for it, but I think it actually fits the definition pretty well. Artificial intelligence is what machines can’t currently do. It’s the idea that there’s this moving goalpost for what artificial intelligence actually means. Ten years ago, artificial intelligence meant being able to classify images; like, can a machine look at a picture and tell you what’s in the picture?
Now we can do that pretty well. Maybe twenty, thirty years ago, if you told somebody that there would be a browser where you can type in words, and it would automatically correct your spelling and grammar and understand language, he would think that’s artificial intelligence. And I think there’s been a slight shift, somewhat recently, where people are calling deep learning artificial intelligence and things like that.
It’s got a little bit conflated with specific tools. So now people talk about artificial general intelligence as this impossible next thing. But I think a lot of people, in their minds, think of artificial intelligence as whatever it is that’s next that computers haven’t figured out how to do yet, that humans can do. But, as computers continually make progress on those fronts, the goalposts continually change.
I’d say today, people think of it as conversational systems, basic tasks that humans can do in five seconds or less, and then artificial general intelligence is everything after that. And things like spell check, or being able to do anomaly detection, are just taken for granted and that’s just machine learning now.
I’ll accept all of that, but that’s more of a sociological observation about how we think of it, and then actually… I’ll change the question. What is intelligence?
That’s a much more difficult question. Maybe the ability to reason about your environment and draw conclusions from it.
Do you think that what we’re building, our systems, are they artificial in the sense that we just built them, but they can do that? Or are they artificial in the sense that they can’t really do that, but they sure can think it well?
I think they’re artificial in the sense that they’re not biological systems. They seem to be able to perceive input in the same way that a human can perceive input, and draw conclusions based off of that input. Usually, the reward system in place in an artificial intelligence framework is designed to do a very specific thing, very well.
So is there a cat in this picture or not? As opposed to a human: It’s, “Try to live a fulfilling life.” The objective functions are slightly different, but they are interpreting outside stimuli via some input mechanism, and then trying to apply that towards a specific goal. The goals for artificial intelligence today are extremely short-term, but I think that they are performing them on the same level—or better sometimes—than a human presented with the exact same short-term goal.
The artificial component comes into the fact that they were constructed, non-biologically. But other than that, I think they meet the definition of observing stimuli, reasoning about an environment, and achieving some outcome.
You used the phrase ‘they draw conclusions’. Are you using that colloquially, or does the machine actually conclude? Or does it merely calculate?
It calculates, but then it comes to, I guess, a decision at the end of the day. If it’s a classification system, for example… going back to “Is there a cat in this picture?” It draws the conclusion that “Yes, there was a cat. No, that wasn’t a cat.” It can do that with various levels of certainty in the same way that, potentially, a human would solve the exact same problem. If I showed you a blurry Polaroid picture you might be able to say, “I’m pretty sure there’s a cat in there, but I’m not 100 percent certain.”
And if I show you a very crisp picture of a kitten, you could be like, “Yes, there’s a cat there.” And I think convolutional neural network is doing the exact same thing: taking in that outside stimuli. Not through an optical nerve, but through the raw encoding of pixels, and then coming to the exact same conclusion.
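To make the "levels of certainty" point concrete, here is a minimal, illustrative Python sketch; the logit values are invented for illustration and are not from any real model. It shows how a classifier's raw scores become probabilities, so a crisp photo gives a confident "cat" and a blurry one only a tentative one.

```python
import numpy as np

def softmax(logits):
    # Turn raw classifier scores into probabilities that sum to one.
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

# Hypothetical raw scores (logits) for [cat, not-cat].
crisp_photo = softmax(np.array([4.0, 0.5]))   # clear photo of a kitten
blurry_photo = softmax(np.array([1.0, 0.6]))  # blurry Polaroid

print(f"crisp:  P(cat) = {crisp_photo[0]:.2f}")   # about 0.97: "yes, there's a cat"
print(f"blurry: P(cat) = {blurry_photo[0]:.2f}")  # about 0.60: "pretty sure, not certain"
```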
You make the really useful distinction between an AGI, which is a general intelligence—something as versatile as a human—and then the kinds of stuff we’re building now, which we call AI—which is doing this reasoning or drawing conclusions.
Is an AGI a linear development from what we have now? In other words, do we have all the pieces, and we just need faster computers, better algorithms, more data, a few nips and tucks, and we’re eventually going to get an AGI? Or is an AGI something very different, that is a whole different ball of wax?
I’m not convinced that, with the current tooling we have today, that it’s just like… if we add one more hidden layer to a neural network, all of a sudden it’ll be AGI. That being said, I think this is how science and computer science and progress in general works. Is that techniques are built upon each other, we make advancements.
It might be a completely new type of algorithm. It might not be a neural network. It might be reinforcement learning. It might not be reinforcement learning. It might be the next thing. It might not be on a CPU or a GPU. Maybe it’s on a quantum computer. If you think of scientific and technological process as this linear evolution of different techniques and ideas, then I definitely think we are marching towards that as an eventual outcome.
That being said, I don’t think that there’s some magic combinatorial setting of what we have today that will turn into this. I don’t think it’s one more hidden layer. I don’t think it’s a GPU that can do one more teraflop—or something like that—that’s going to push us over the edge. I think it’s going to be things built from the foundation that we have today, but it will continue to be new and novel techniques.
There was an interesting talk at the International Conference on Machine Learning in Sydney last week about AlphaGo, and how they got this massive speed-up when they put in deep learning. They were able to break through this plateau that they had found in terms of playing ability, where they could play at the amateur level.
And then once they started applying deep learning networks, that got them to the professional, and now best-in-the-world level. I think we’re going to continue to see plateaus for some of these current techniques, but then we’ll come up with some new strategy that will blast us through and get to the next plateau. But I think that’s an ever-stratifying process.
To continue on that vein… When in 1955, they convened in Dartmouth and said, “We can solve a big part of AI in the summer, with five people,” the assumption was that general intelligence, like all the other sciences, had a few simple laws.
You had Newton, Maxwell; you had electricity and magnetism, and all these things, and they were just a few simple laws. The idea was that all we need to do is figure out those for intelligence. And Pedro Domingos argues in The Master Algorithm, from a biological perspective that, in a sense, that may be true.  
That if you look at the DNA difference between us and an animal that isn’t generally intelligent… the amount of code is just a few megabytes that’s different, which teaches how to make my brain and your brain. It sounded like you were saying, “No, there’s not going to be some silver bullet, it’s going to be a bunch of silver buckshot and we’ll eventually get there.”
But do you hold any hope that maybe it is a simple and elegant thing?
Going back to my original statement about what is AI, I think when Marvin Minsky and everybody sat down in Dartmouth, the goalposts for AI were somewhat different. Because they were attacking it for the first time, some of the things were definitely overambitious. But certain things that they set out to do that summer, they actually accomplished reasonably well.
Things like the Lisp programming language, and things like that, came out of that and were extremely successful. But then, once these goals are accomplished, the next thing comes up. Obviously, in hindsight, it was overambitious to think that they could maybe match a human, but I think if you were to go back to Dartmouth and show them what we have today, and say: “Look, this computer can describe the scene in this picture completely accurately.”
I think that could be indistinguishable from the artificial intelligence that they were seeking, even if today what we want is someone we can have a conversation with. And then once we can have a conversation, the next thing is we want them to be able to plan our lives for us, or whatever it may be, solve world peace.
While I think there are some of the fundamental building blocks that will continue to be used—like, linear algebra and calculus, and things like that, will definitely be a core component of the algorithms that make up whatever does become AGI—I think there is a pretty big jump between that. Even if there’s only a few megabytes difference between us and a starfish or something like that, every piece of DNA is two bits.
If you have millions of differences, four-to-the-several million—like the state space for DNA—even though you can store it in a small amount of megabytes, there are so many different combinatorial combinations that it’s not like we’re just going to stumble upon it by editing something that we currently have.
It could be something very different in that configuration space. And I think those are the algorithmic advancements that will continue to push us to the next plateau, and the next plateau, until eventually we meet and/or surpass the human plateau.
You invoked quantum computers in passing, but putting that aside for a moment… Would you believe, just at a gut level—because nobody knows—that we have enough computing power to build an AGI, we just don’t know how?
Well, in the sense that if the human brain is general intelligence, the computing power in the human brain, while impressive… All of the computers in the world are probably better at performing some simple calculations than the biological gray matter mess that exists in all of our skulls. I think the raw amount of transistors and things like that might be there, if we had the right way to apply them, if they were all applied in the same direction.
That being said… Whether or not that’s enough to make it ubiquitous, or whether or not having all the computers in the world mimic a single human child will be considered artificial general intelligence, or if we’re going to need to apply it to many different situations before we claim victory, I think that’s up for semantic debate.
Do you think about how the brain works, even if [the context] is not biological? Is that how you start a problem: “Well, how do humans do this?” Does that even guide you? Does that even begin the conversation? And I know none of this is a map: Birds fly with wings, and airplanes, all of that. Is there anything to learn from human intelligence that you, in a practical, day-to-day sense, use?
Yeah, definitely. I think it often helps to try to approach a problem from fundamentally different ways. One way to approach that problem is from the purely mathematical, axiomatic way; where we’re trying to build up from first principles, and trying to get to something that has a nice proof or something associated with it.
Another way to try to attack the problem is from a more biological setting. If I had to solve this problem, and I couldn’t assume any of those axioms, then how would I begin to try to build heuristics around it? Sometimes you can go from that back to the proof, but there are many different ways to attack that problem. Obviously, there are a lot of things in computer science, and optimization in general, that are motivated by physical phenomena.
So a neural network, if you squint, looks kind of like a biological brain neural network. There’s things like simulated annealing, which is a global optimization strategy that mimics the way that like steel is annealed… where it tries to find some local lattice structure that has low energy, and then you pound the steel with the hammer, and that increases the energy to find a better global optima lattice structure that is harder steel.
But that’s also an extremely popular algorithm in the scientific literature. So it was come to from this auxiliary way, or a genetic algorithm where you’re slowly evolving a population to try to get to a good result. I think there is definitely room for a lot of these algorithms to be inspired by biological or physical phenomenon, whether or not they are required to be from that to be proficient. I would have trouble, off the top of my head, coming up with the biological equivalent for a support vector machine or something like that. So there’s two different ways to attack it, but both can produce really interesting results.
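As a concrete reference point for the annealing analogy above, here is a minimal simulated annealing sketch in Python. The cooling schedule, step count and toy objective are assumptions chosen purely for illustration, not anything from the conversation.

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=1.0, cooling=0.995, steps=10_000):
    """Minimise `energy`, occasionally accepting worse moves, like annealing steel."""
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        candidate = neighbor(x)
        delta = energy(candidate) - e
        # Always accept improvements; accept worse moves with probability exp(-delta / t).
        if delta < 0 or random.random() < math.exp(-delta / t):
            x, e = candidate, e + delta
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling  # "cool" the system so it gradually settles into a low-energy state
    return best_x, best_e

# Toy usage: find the minimum of (x - 3)^2 starting from x = 0.
best, _ = simulated_annealing(
    energy=lambda x: (x - 3) ** 2,
    neighbor=lambda x: x + random.uniform(-0.5, 0.5),
    x0=0.0,
)
print(best)  # ends up close to 3
```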
Let’s take a normal thing that a human does, which is: You show a human training data of the Maltese Falcon, the little statue from the movie, and then you show him a bunch of photos. And a human can instantly say, “There’s the falcon under water, and there it’s half-hidden by a tree, and there it’s upside down…” A human does that naturally. So it’s some kind of transferred learning. How do we do that?
Transfer learning is the way that that happens. You’ve seen trees before. You’ve seen water. You’ve seen how objects look inside and outside of water before. And then you’re able to apply that knowledge to this new context.
It might be difficult for a human who grew up in a sensory deprivation chamber to look at this object… and then you start to show them things that they’ve never seen before: “Here’s this object and a tree,” and they might not ‘see the forest for the trees’ as it were.
In addition to that, without any context whatsoever, you take someone who was raised in a sensory deprivation chamber, and you start showing them pictures and ask them to do classification type tasks. They may be completely unaware of what’s the reward function here. Who is this thing telling me to do things for the first time I’ve never seen before?
What does it mean to even classify things or describe an object? Because you’ve never seen an object before.
And when you start training these systems from scratch, with no previous knowledge, that’s how they work. They need to slowly learn what’s good, what’s bad. There’s a reward function associated with that.
But with no context, with no previous information, it’s actually very surprising how well they are able to perform these tasks; considering [that when] a child is born, four hours later it isn’t able to do this. A machine algorithm that’s trained from scratch over the course of four hours on a couple of GPUs is able to do this.
You mentioned the sensory deprivation chamber a couple of times. Do you have a sense that we’re going to need to embody these AIs to allow them to—and I use the word very loosely—‘experience’ the world? Are they locked in a sensory deprivation chamber right now, and that’s limiting them?
I think with transfer learning, and pre-training of data, and some reinforcement algorithm work, there’s definitely this idea of trying to make that better, and bootstrapping based off of previous knowledge in the same way that a human would attack this problem. I think it is a limitation. It would be very difficult to go from zero to artificial general intelligence without providing more of this context.
There’s been many papers recently, and OpenAI had this great blog post recently where, if you teach the machine language first, if you show it a bunch of contextual information—this idea of this unsupervised learning component of it, where it’s just absorbing information about the potential inputs it can get—that allows it to perform much better on a specific task, in the same way that a baby absorbs language for a long time before it actually starts to produce it itself.
And it could be in a very unstructured way, but it’s able to learn some of the actual language structure or sounds from the particular culture in which it was raised in this unstructured way.
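As an aside, here is a minimal PyTorch-style sketch of the transfer learning idea described here, assuming a torchvision ResNet-18 pretrained on ImageNet stands in for the "previous knowledge"; the two-class task and the fake batch are invented for illustration only.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse a network pretrained on ImageNet as the prior context.
# (Newer torchvision versions use weights="IMAGENET1K_V1" instead of pretrained=True.)
backbone = models.resnet18(pretrained=True)
for param in backbone.parameters():
    param.requires_grad = False  # freeze the pretrained features

# Swap in a new final layer for the new two-class task.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a fake batch of four 224x224 RGB images.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])
loss = loss_fn(backbone(images), labels)
loss.backward()
optimizer.step()
```

Only the new final layer learns; everything the network already "knows" about trees, water and objects carries over to the new task.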
Let’s talk a minute about human intelligence. Why do you think we understand so poorly how the brain works?
That’s a great question. It’s easier scientifically, with my background in math and physics—it seems like it’s easier to break down modular decomposable systems. Humanity has done a very good job at understanding, at least at a high level, how physical systems work, or things like chemistry.
Biology starts to get a little bit messier, because it’s less modular and less decomposable. And as you start to build larger and larger biological systems, it becomes a lot harder to understand all the different moving pieces. Then you go to the brain, and then you start to look at psychology and sociology, and all of the lines get much fuzzier.
It’s very difficult to build an axiomatic rule system. And humans aren’t even able to do that in some sort of grand unified way with physics, or understand quantum mechanics, or things like that; let alone being able to do it for these sometimes infinitely more complex systems.
Right. But the most successful animal on the planet is a nematode worm. Ten percent of all animals are nematode worms. They’re successful, they find food, and they reproduce and they move. Their brains have 302 neurons. We’ve spent twenty years trying to model that, a bunch of very smart people in the OpenWorm project…
 But twenty years trying to model 300 neurons to just reproduce this worm, make a digital version of it, and even to this day people in the project say it may not be possible.
I guess the argument is, 300 sounds like a small amount. One thing that’s very difficult for humans to internalize is the exponential function. So if intelligence grew linearly, then yeah. If we could understand one, then 300 might not be that much, whatever it is. But if the state space grows exponentially, or the complexity grows exponentially… if there’s ten different positions for every single one of those neurons, like 10^300, that’s more than the number of atoms in the universe.
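A quick scale check of that claim, using the speaker's simplifying assumption of ten possible states for each of roughly 300 neurons:

```python
import math

neurons, states_per_neuron = 300, 10
log10_configs = neurons * math.log10(states_per_neuron)  # exactly 300

print(f"configurations = 10^{log10_configs:.0f}")  # 10^300
print("atoms in the observable universe: roughly 10^80")
```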
Right. But we aren’t starting by just rolling 300 dice and hoping for them all to be—we know how those neurons are arranged.
At a very high level we do.
I’m getting to a point, that we maybe don’t even understand how a neuron works. A neuron may be doing stuff down at the quantum level. It may be this gigantic supercomputer we don’t even have a hope of understanding, a single neuron.
From a chemical way, we can have an understanding of, “Okay, so we have neurotransmitters that carry a positive charge, that then cause a reaction based off of some threshold of charge, and there’s this catalyst that happens.” I think from a physics and chemical understanding, we can understand the base components of it, but as you start to build these complex systems that have this combinatorial set of states, it does become much more difficult.
And I think that’s that abstraction, where we can understand how simple chemical reactions work. But then it becomes much more difficult once you start adding more and more. Or even in physics… like if you have two bodies, and you’re trying to calculate the gravity, that’s relatively easy. Three? Harder. Four? Maybe impossible. It becomes much harder to solve these higher-order, higher-body problems. And even with 302 neurons, that starts to get pretty complex.
Oddly, two of them aren’t connected to anything, just like floating out there…
Do you think human intelligence is emergent?
In what respect?
I will clarify that. There are two sorts of emergence: one is weak, and one is strong. Weak emergence is where a system takes on characteristics which don’t appear at first glance to be derivable from them. So the intelligence displayed by an ant colony, or a beehive—the way that some bees can shimmer in unison to scare off predators. No bee is saying, “We need to do this.”  
The anthill behaves intelligently, even though… The queen isn’t, like, in charge; the queen is just another ant, but somehow it all adds intelligence. So that would be something where it takes on these attributes.
Can you really intuitively derive intelligence from neurons?
And then, to push that a step further, there are some who believe in something called ‘strong emergence’, where they literally are not derivable. You cannot look at a bunch of matter and explain how it can become conscious, for instance. It is what the minority of people believe about emergence, that there is some additional property of the universe we do not understand that makes these things happen.
The question I’m asking you is: Is reductionism the way to go to figure out intelligence? Is that how we’re going to kind of make advances towards an AGI? Just break it down into enough small pieces.
I think that is an approach, whether or not that’s ‘the’ ultimate approach that works is to be seen. As I was mentioning before, there are ways to take biological or physical systems, and then try to work them back into something that then can be used and applied in a different context. There’s other ways, where you start from the more theoretical or axiomatic way, and try to move forward into something that then can be applied to a specific problem.
I think there’s wide swaths of the universe that we don’t understand at many levels. Mathematics isn’t solved. Physics isn’t solved. Chemistry isn’t solved. All of these build on each other to get to these large, complex, biological systems. It may be a very long time, or we might need an AGI to help us solve some of these systems.
I don’t think it’s required to understand everything to be able to observe intelligence—like, proof by example. I can’t tell you why my brain thinks, but my brain is thinking, if you can assume that humans are thinking. So you don’t necessarily need to understand all of it to put it all together.
Let me ask you one more far-out question, and then we’ll go to a little more immediate future. Do you have an opinion on how consciousness comes about? And if you do or don’t, do you believe we’re going to build conscious machines?
Even to throw a little more into that one, do you think consciousness—that ability to change focus and all of that—is a requisite for general intelligence?
So, I would like to hear your definition of consciousness.
I would define it by example, to say that it’s subjective experience. It’s how you experience things. We’ve all had that experience when you’re driving, that you kind of space out, and then, all of a sudden, you kind of snap to. “Whoa! I don’t even remember getting here.”
And so that time when you were driving, your brain was elsewhere, you were clearly intelligent, because you were merging in and out of traffic. But in the sense I’m using the word, you were not ‘conscious’, you were not experiencing the world. If your foot caught on fire, you would feel it; but you weren’t experiencing the world. And then instantly, it all came on and you were an entity that experienced something.
Or, put another way… this is often illustrated with the problem of Mary by Frank Jackson:
He offers somebody named Mary, who knows everything about color, like, at a god-like level—knows every single thing about color. But the catch is, you might guess, she’s never seen it. She’s lived in a room, black-and-white, never seen it [color]. And one day, she opens the door, she looks outside and she sees red.  
The question becomes: Does she learn anything? Did she learn something new?  
In other words, is experiencing something different than knowing something? Those two things taken together, defining consciousness, is having an experience of the world…
I’ll give one final one. You can hook a sensor up to a computer, and you can program the computer to play an mp3 of somebody screaming if the sensor hits 500 degrees. But nobody would say, at this day and age, the computer feels the pain. Could a computer feel anything?
Okay. I think there’s a lot to unpack there. I think computers can perceive the environment. Your webcam is able to record the environment in the same way that your optical nerves are able to record the environment. When you’re driving a car, and daydreaming, and kind of going on autopilot, as it were, there still are processes running in the background.
If you were to close your eyes, you would be much worse at doing lane merging and things like that. And that’s because you’re still getting the sensory input, even if you’re not actively, consciously aware of the fact that you’re observing that input.
Maybe that’s where you’re getting at with consciousness here, is: Not only the actual task that’s being performed, which I think computers are very good at—and we have self-driving cars out on the street in the Bay Area every day—but that awareness of the fact that you are performing this task, is kind of meta-level of: “I’m assembling together all of these different subcomponents.”
Whether that’s driving a car, thinking about the meeting that I’m running late to, some fight that I had with my significant other the night before, or whatever it is. There’s all these individual processes running, and there could be this kind of global awareness of all of these different tasks.
I think today, where artificial intelligence sits is, performing each one of these individual tasks extremely well, toward some kind of objective function of, “I need to not crash this car. I need to figure out how to resolve this conflict,” or whatever it may be; or, “Play this game in an artificial intelligence setting.” But we don’t yet have that kind of governing overall strategy that’s aware of making these tradeoffs, and then making those tradeoffs in an intelligent way. But that overall strategy itself is just going to be going toward some specific reward function.
Probably when you’re out driving your car, and you’re spacing out, your overall reward function is, “I want to be happy and healthy. I want to live a meaningful life,” or something like that. It can be something nebulous, but you’re also just this collection of subroutines that are driving towards this specific end result.
But the direct question of what would it mean for a computer to feel pain? Will a computer feel pain? Now they can sense things, but nobody argues they have a self that experiences the pain. It matters, doesn’t it?
It depends on what you mean by pain. If you mean there’s a response of your nervous system to some outside stimuli that you perceive as pain, a negative response, and—
—It involves emotional distress. People know what pain is. It hurts. Can a computer ever hurt?
It’s a fundamentally negative response to what you’re trying to achieve. So pain and suffering is the opposite of happiness. And your objective function as a human is happiness, let’s say. So, by failing to achieve that objective, you feel something like pain. Evolutionarily, we might have evolved this in order to avoid specific things. Like, you get pain when you touch flame, so don’t touch flame.
And the reason behind that is biological systems degrade in high-temperature environments, and you’re not going to be able to reproduce or something like that.
You could argue that when a classification system fails to classify something, and it gets penalized in its reward function, that’s the equivalent of it finding something where, in its state of the world, it has failed to achieve its goal, and it’s getting the opposite of what its purpose is. And that’s similar to pain and suffering in some way.
But is it? Let’s be candid. You can’t take a person and torture them, because that’s a terrible thing to do… because they experience pain. [Whereas if] you write a program that has an infinite loop that causes your computer to crash, nobody’s going to suggest you should go to jail for that. Because people know that those are two very different things.
It is a negative neurological response based off of outside stimuli. A computer can have a negative response, and perform based off of outside stimuli poorly, relative to what it’s trying to achieve… Although I would definitely agree with you that that’s not a computer experiencing pain.
But from a pure chemical level, down to the algorithmic component of it, they’re not as fundamentally different… that because it’s a human, there’s something magic about it being a human. A dog can also experience pain.
These worms—I’m not as familiar with the literature on that, but [they] could potentially experience pain. And as you derive that further and further back, you might have to bend your definition of pain. Maybe they’re not feeling something in a central nervous system, like a human or a dog would, but they’re perceiving something that’s negative to what they’re trying to achieve with this utility function.
But we do draw a line. And I don’t know that I would use the word ‘magic’ the way you’re doing it. We draw this line by saying that dogs feel pain, so we outlaw animal cruelty. Bacteria don’t, so we don’t outlaw antibiotics. There is a material difference between those two things.
So if the difference is a central nervous system, and pain is being defined as a nervous response to some outside stimuli… then unless we explicitly design machines to have central nervous systems, then I don’t think they will ever experience pain.
Thanks for indulging me in all of that, because I think it matters… Because up until thirty years ago, veterinarians typically didn’t use anesthetic. They were told that animals couldn’t feel pain. Babies were operated on in the ‘90s—open heart surgery—under the theory they couldn’t feel pain.  
What really intrigues me is the idea of how would we know if a machine did? That’s what I’m trying to deconstruct. But enough of that. We’ll talk about jobs here in a minute, and those concerns…
There’s groups of people that are legitimately afraid of AI. You know all the names. You’ve got Elon Musk, you get Stephen Hawking. Bill Gates has thrown in his hat with that, Wozniak has. Nick Bostrom wrote a book that addressed existential threat and all of that. Then you have Mark Zuckerberg, who says no, no, no. You get Oren Etzioni over at the Allen Institute, just working on some very basic problem. You get Andrew Ng with his “overpopulation on Mars. This is not helpful to even have this conversation.”
What is different about those two groups in your mind? What is the difference in how they view the world that gives them these incredibly different viewpoints?
I think it goes down to a definition problem. As you mentioned at the beginning of this podcast, when you ask people, “What is artificial intelligence?” everybody gives you a different answer. I think each one of these experts would also give you a different answer.
If you define artificial intelligence as matrix multiplication and gradient descent in a deep learning system, trying to achieve a very specific classification output given some pixel input—or something like that—it’s very difficult to conceive that as some sort of existential threat for humanity.
But if you define artificial intelligence as this general intelligence, this kind of emergent singularity where the machines don’t hit the plateau, that they continue to advance well beyond humans… maybe to the point where they don’t need humans, or we become the ants in that system… that becomes very rapidly a very existential threat.
As I said before, I don’t think there’s an incremental improvement from algorithms—as they exist in the academic literature today—to that singularity, but I think it can be a slippery slope. And I think that’s what a lot of these experts are talking about… Where if it does become this dynamic system that feeds on itself, by the time we realize it’s happening, it’ll be too late.
Whether or not that’s because of the algorithms that we have today, or algorithms down the line, it does make sense to start having conversations about that, just because of the time scales over which governments and policies tend to work. But I don’t think someone is going to design a TensorFlow or MXNet algorithm tomorrow that’s going to take over the world.
There’s legislation in Europe to basically say, if an AI makes a decision about whether you should get an auto loan or something, you deserve to know why it turned you down. Is that a legitimate request, or is it like you go to somebody at Google and say, “Why is this site ranked number one and this site ranked number two?” There’s no way to know at this point.  
Or is that something that, with the auto loan thing, you’re like, “Nope, here are the big bullet points of what went into it.” And if that becomes the norm, does that slow down AI in any way?
I think it’s important to make sure, just from a societal standpoint, that we continue to strive towards not being discriminatory towards specific groups and people. It can be very difficult, when you have something that looks like a black box from the outside, to be able to say, “Okay, was this being fair?” based off of the fairness that we as a society have agreed upon.
The machine doesn’t have that context. The machine doesn’t have the policy, necessarily, inside to make sure that it’s being as fair as possible. We need to make sure that we do put these constraints on these systems, so that it meets what we’ve agreed upon as a society, in laws, etc., to adhere to. And that it should be held to the same standard as if there was a human making that same decision.
There is, of course, a lot of legitimate fear wrapped up about the effect of automation and artificial intelligence on employment. And just to set the problem up for the listeners, there’s broadly three camps, everybody intuitively knows this.
 There’s one group that says, “We’re going to advance our technology to the point that there will be a group of people who do not have the educational skills needed to compete with the machines, and we’ll have a permanent underclass of people who are unemployable.” It would be like the Great Depression never goes away.
And then there are people who say, “Oh, no, no, no. You don’t understand. Everything, every job, a machine is going to be able to do.” You’ll reach a point where the machine will learn it faster than the human, and that’s it.
And then you’ve got a third group that says, “No, that’s all ridiculous. We’ve had technology come along, as transformative as it is… We’ve had electricity, and machines replacing animals… and we’ve always maintained full employment.” Because people just learn how to use these tools to increase their own productivity, maintain full employment—and we have growing wages.
So, which of those, or a fourth one, do you identify with?
This might be an unsatisfying answer, but I think we’re going to go through all three phases. I think we’re in the third camp right now, where people are learning new systems, and it’s happening at a pace where people can go to a computer science boot camp and become an engineer, and try to retrain and learn some of these systems, and adapt to this changing scenario.
I think, very rapidly—especially at the exponential pace that technology tends to evolve—it does become very difficult. Fifty years ago, if you wanted to take apart your telephone and try to figure out how it works, repair it, that was something that a kid could do at a camp kind of thing, like an entry circuits camp. That’s impossible to do with an iPhone.
I think that’s going to continue to happen with some of these more advanced systems, and you’re going to need to spend your entire life understanding some subcomponent of it. And then, in the further future, as we move towards this direction of artificial general intelligence… Like, once a machine is a thousand times, ten thousand times, one hundred thousand times smarter—by whatever definition—than a human, and that increases at an exponential pace… We won’t need a lot of different things.
Whether or not that’s a fundamentally bad thing is up for debate. I think one thing that’s different about this than the Industrial Revolution, or the agricultural revolution, or things like that, that have happened throughout human history… is that instead of this happening over the course of generations or decades… Maybe if your father, and your grandfather, and your entire family tree did a specific job, but then that job doesn’t exist anymore, you train yourself to do something different.
Once it starts to happen over the course of a decade, or a year, or a month, it becomes much harder to completely retrain. That being said, there’s lots of thoughts about whether or not humans need to be working to be happy. And whether or not there could be some other fundamental thing that would increase the net happiness and fulfillment of people in the world, besides sitting at a desk for forty hours a week.
And maybe that’s actually a good thing, if we can set up the societal constructs to allow people to do that in a healthy and happy way.
Do you have any thoughts on computers displaying emotions, emulating emotions? Is that going to be a space where people are going to want authentic human experiences in those in the future? Or are we like, “No, look at how people talk to their dog,” or something? If it’s good enough to fool you, you just go along with the conceit?
The great thing about computers, and artificial intelligence systems, and things like that is if you point them towards a specific target, they’ll get pretty good at hitting that target. So if the goal is to mimic human emotion, I think that that’s something that’s achievable. Whether or not a human cares, or is even able to distinguish between that and actual human emotion, could be very difficult.
At Cornell, where I did my PhD, they had this psychology chatbot called ELIZA—I think this was back in the ‘70s. It went through a specific school of psychological behavioral therapy thought, replied with specific ways, and people found it incredibly helpful.
Even if they knew that it was just a machine responding to them, it was a way for them to get out their emotions and work through specific problems. As these machines get more sophisticated and able, as long as it’s providing utility to the end user, does it matter who’s behind the screen?
That’s a big question. Weizenbaum shut down ELIZA because he said that when a machine says, “I understand” that it’s a lie, there’s no ‘I’, and there’s nothing [there] that understands anything. He had real issues with that.
But then when they shut it down, some of the end users were upset, because they were still getting quite a bit of utility out of it. There’s this moral question of whether or not you can take away something from someone who is deriving benefit from it as well.
So I guess the concern is that maybe we reach a day where an AI best friend is better than a real one. An AI one doesn’t stand you up. And an AI spouse is better than a human spouse, because of all of those reasons. Is that a better world, or is it not?
I think it becomes a much more dangerous world, because as you said before, someone could decide to turn off the machine. When it’s someone taking away your psychologist, that could be very dangerous. When it’s someone deciding that you didn’t pay your monthly fee, so they’re going to turn off your spouse, that could be quite a bit worse as well.
As you mentioned before, people don’t necessarily associate the feelings or pain or anything like that with the machine, but as these get more and more life-like, and as they are designed with the reward function of becoming more and more human-like, I think that distinction is going to become quite a bit harder for us to understand.
And it not only affects the machine—which you can make the argument doesn’t have a voice—but it’ll start to affect the people as well.
One more question along these lines. You were a Forbes 30 Under 30. You’re fine with computer emotions, and you have this set of views. Do you notice any generational difference between researchers who have been in it longer than you, and people of your age and training? Do you look at it, as a whole, differently than another generation might have?
I think there are always going to be generational differences. People grow up in different times and contexts, societal norms shift… I would argue usually for the better, but not always. So I think that that context in which you were raised, that initial training data that you apply your transfer learning to for the rest of your life, has a huge effect on what you’re actually going to do, and how you perceive the world moving forward.
I spent a good amount of time today at SigOpt. Can you tell me what you’re trying to do there, and why you started or co-founded it, and what the mission is? Give me that whole story.
Yeah, definitely. SigOpt is an optimization-as-a-service company, or a software-as-a-service offering. What we do is help people configure these complex systems. So when you’re building a neural network—or maybe it’s a reinforcement learning system, or an algorithmic trading strategy—there’s often many different tunable configuration parameters.
These are the settings that you need to put in place before the system itself starts to do any sort of learning: things like the depth of the neural network, the learning rates, some of these stochastic gradient descent parameters, etc.
These are often kind of nuisance parameters that are brushed under the rug. They’re typically solved via relatively simplistic methods like brute forcing it or trying random configurations. What we do is we take an ensemble of the state-of-the-art research from academia, and Bayesian and global optimization, and we ensemble all of these algorithms behind a simple API.
So when you are downloading MxNet, or TensorFlow, or Caffe2, whatever it is, you don’t have to waste a bunch of time trying different things via trial-and-error. We can guide you to the best solution quite a bit faster.
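For context, here is a hedged sketch of the kind of tuning loop being described; it is not SigOpt's actual API. The search space, the toy scoring function and the random-search baseline are all invented to illustrate what "trying random configurations" by brute force looks like.

```python
import random

# Hypothetical search space for two of the settings mentioned above.
SPACE = {"depth": [2, 4, 8, 16], "learning_rate": (1e-4, 1e-1)}

def train_and_score(config):
    # Toy stand-in for a real training run; pretend depth 8 and lr near 0.01 work best.
    return -abs(config["depth"] - 8) - abs(config["learning_rate"] - 0.01) * 100

def random_search(n_trials=20):
    """The brute-force baseline that a guided optimizer tries to beat."""
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = {
            "depth": random.choice(SPACE["depth"]),
            "learning_rate": random.uniform(*SPACE["learning_rate"]),
        }
        score = train_and_score(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

print(random_search())
```

A guided optimizer replaces the random draw with suggestions informed by every configuration already evaluated, which is why it typically needs far fewer trials.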
Do you have any success stories that you like to talk about?
Yeah, definitely. One of our customers is Hotwire. They’re using us to do things like ranking systems. We work with a variety of different algorithmic trading firms to make their strategies more efficient. We also have this great academic program where SigOpt is free for any academic at any university or national lab anywhere in the world.
So we’re helping accelerate the flywheel of science by allowing people to spend less time doing trial-and-error. I wasted way too much of my PhD on this, to be completely honest—fine-tuning different configuration settings and bioinformatics algorithms.
So our goal is… If we can have humans do what they’re really good at, which is creativity—understanding the context in the domain of a problem—and then we can make the trial-and-error component as little as possible, hopefully, everything happens a little bit faster and a little bit better and more efficiently.
What are the big challenges you’re facing?
Where this system makes the biggest difference is in large complex systems, where it’s very difficult to manually tune, or brute force this problem. Humans tend to be pretty bad at doing 20-dimensional optimization in their head. But a surprising number of people still take that approach, because they’re unable to access some of this incredible research that’s been going on in academia for the last several decades.
Our goal is to make that as easy as possible. One of our challenges is finding people with these interesting complex problems. I think the recent surge of interest in deep learning and reinforcement learning, and the complexity that’s being imbued in a lot of these systems, is extremely good for us, and we’re able to ride that wave and help these people realize the potential of these systems quite a bit faster than they would otherwise.
But having the market come to us is something that we’re really excited about, but it’s not instant.
Do you find that people come to you and say, “Hey, we have this dataset, and we think somewhere in here we can figure out whatever”? Or do they just say, “We have this data, what can we do with it?” Or do they come to you and say, “We’ve heard about this AI thing, and want to know what we can do”?
There are companies that help solve that particular problem, where they’re given raw data and they help you build a model and apply it to some business context. Where SigOpt sits, which is slightly different than that, is when people come to us, they have something in place. They already have data scientists or machine learning engineers.
They’ve already applied their domain expertise to really understand their customers, the business problem they’re trying to solve, everything like that. And what they’re looking for is to get the most out of these systems that they’ve built. Or they want to build a more advanced system as rapidly as possible.
And so SigOpt bolts on top of these pre-existing systems, and gives them that boost by fine-tuning all of these different configuration parameters to get to their maximal performance. So, sometimes we do meet people like that, and we pass them on to some of our great partners. When someone has a problem and they just want to get the most out of it, that’s where we can come in and provide this black box optimization on top of it.
Final question-and-a-half. Do you speak a lot? Do you tweet? If people want to follow you and keep up with what you’re doing, what’s the best way to do that?
They can follow @SigOpt on Twitter. We have a blog where we post technical and high-level blog posts about optimization and some of the different advancements, and deep learning and reinforcement learning. We publish papers, but blog.sigopt.com and on Twitter @SigOpt is the best way to follow us along.
Alright. It has been an incredibly fascinating hour, and I want to thank you for taking the time.
Excellent. Thank you for having me. I’m really honored to be on the show.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here. 
Voices in AI – Episode 12: A Conversation with Scott Clark syndicated from http://ift.tt/2wBRU5Z
0 notes
clarenceomoore · 7 years
Text
Voices in AI – Episode 12: A Conversation with Scott Clark
Today's leading minds talk AI with host Byron Reese
.voice-in-ai-byline-embed { font-size: 1.4rem; background: url(https://voicesinai.com/wp-content/uploads/2017/06/cropped-voices-background.jpg) black; background-position: center; background-size: cover; color: white; padding: 1rem 1.5rem; font-weight: 200; text-transform: uppercase; margin-bottom: 1.5rem; } .voice-in-ai-byline-embed span { color: #FF6B00; }
In this episode, Byron and Scott talk about algorithms, transfer learning, human intelligence, and pain and suffering.
-
-
0:00
0:00
0:00
var go_alex_briefing = { expanded: true, get_vars: {}, twitter_player: false, auto_play: false }; (function( $ ) { 'use strict'; go_alex_briefing.init = function() { this.build_get_vars(); if ( 'undefined' != typeof go_alex_briefing.get_vars['action'] ) { this.twitter_player = 'true'; } if ( 'undefined' != typeof go_alex_briefing.get_vars['auto_play'] ) { this.auto_play = go_alex_briefing.get_vars['auto_play']; } if ( 'true' == this.twitter_player ) { $( '#top-header' ).remove(); } var $amplitude_args = { 'songs': [{"name":"Episode 12: A Conversation with Scott Clark","artist":"Byron Reese","album":"Voices in AI","url":"https:\/\/voicesinai.s3.amazonaws.com\/2017-10-16-(00-56-02)-scott-clark.mp3","live":false,"cover_art_url":"https:\/\/voicesinai.com\/wp-content\/uploads\/2017\/10\/voices-headshot-card-4.jpg"}], 'default_album_art': 'https://gigaom.com/wp-content/plugins/go-alexa-briefing/components/external/amplify/images/no-cover-large.png' }; if ( 'true' == this.auto_play ) { $amplitude_args.autoplay = true; } Amplitude.init( $amplitude_args ); this.watch_controls(); }; go_alex_briefing.watch_controls = function() { $( '#small-player' ).hover( function() { $( '#small-player-middle-controls' ).show(); $( '#small-player-middle-meta' ).hide(); }, function() { $( '#small-player-middle-controls' ).hide(); $( '#small-player-middle-meta' ).show(); }); $( '#top-header' ).hover(function(){ $( '#top-header' ).show(); $( '#small-player' ).show(); }, function(){ }); $( '#small-player-toggle' ).click(function(){ $( '.hidden-on-collapse' ).show(); $( '.hidden-on-expanded' ).hide(); /* Is expanded */ go_alex_briefing.expanded = true; }); $('#top-header-toggle').click(function(){ $( '.hidden-on-collapse' ).hide(); $( '.hidden-on-expanded' ).show(); /* Is collapsed */ go_alex_briefing.expanded = false; }); // We're hacking it a bit so it works the way we want $( '#small-player-toggle' ).click(); $( '#top-header-toggle' ).hide(); }; go_alex_briefing.build_get_vars = function() { if( document.location.toString().indexOf( '?' ) !== -1 ) { var query = document.location .toString() // get the query string .replace(/^.*?\?/, '') // and remove any existing hash string (thanks, @vrijdenker) .replace(/#.*$/, '') .split('&'); for( var i=0, l=query.length; i<l; i++ ) { var aux = decodeURIComponent( query[i] ).split( '=' ); this.get_vars[ aux[0] ] = aux[1]; } } }; $( function() { go_alex_briefing.init(); }); })( jQuery ); .go-alexa-briefing-player { margin-bottom: 3rem; margin-right: 0; float: none; } .go-alexa-briefing-player div#top-header { width: 100%; max-width: 1000px; min-height: 50px; } .go-alexa-briefing-player div#top-large-album { width: 100%; max-width: 1000px; height: auto; margin-right: auto; margin-left: auto; z-index: 0; margin-top: 50px; } .go-alexa-briefing-player div#top-large-album img#large-album-art { width: 100%; height: auto; border-radius: 0; } .go-alexa-briefing-player div#small-player { margin-top: 38px; width: 100%; max-width: 1000px; } .go-alexa-briefing-player div#small-player div#small-player-full-bottom-info { width: 90%; text-align: center; } .go-alexa-briefing-player div#small-player div#small-player-full-bottom-info div#song-time-visualization-large { width: 75%; } .go-alexa-briefing-player div#small-player-full-bottom { background-color: #f2f2f2; border-bottom-left-radius: 5px; border-bottom-right-radius: 5px; height: 57px; }
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now via iTunes, Play, Stitcher, or RSS.
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Scott Clark. He is the CEO and co-founder of SigOpt. They’re a SaaS startup for tuning complex systems and machine learning models. Before that, Scott worked on the ad targeting team at Yelp, leading the charge on academic research and outreach. He holds a PhD in Applied Mathematics and an MS in Computer Science from Cornell, and a BS in Mathematics, Physics, and Computational Physics from Oregon State University. He was chosen as one of Forbes 30 under 30 in 2016. Welcome to the show, Scott.
Scott Clark: Thanks for having me.
I’d like to start with the question, because I know two people never answer it the same: What is artificial intelligence?
I like to go back to an old quote… I don’t remember the attribution for it, but I think it actually fits the definition pretty well. Artificial intelligence is what machines can’t currently do. It’s the idea that there’s this moving goalpost for what artificial intelligence actually means. Ten years ago, artificial intelligence meant being able to classify images; like, can a machine look at a picture and tell you what’s in the picture?
Now we can do that pretty well. Maybe twenty, thirty years ago, if you told somebody that there would be a browser where you can type in words, and it would automatically correct your spelling and grammar and understand language, he would think that’s artificial intelligence. And I think there’s been a slight shift, somewhat recently, where people are calling deep learning artificial intelligence and things like that.
It’s got a little bit conflated with specific tools. So now people talk about artificial general intelligence as this impossible next thing. But I think a lot of people, in their minds, think of artificial intelligence as whatever it is that’s next that computers haven’t figured out how to do yet, that humans can do. But, as computers continually make progress on those fronts, the goalposts continually change.
I’d say today, people think of it as conversational systems, basic tasks that humans can do in five seconds or less, and then artificial general intelligence is everything after that. And things like spell check, or being able to do anomaly detection, are just taken for granted and that’s just machine learning now.
I’ll accept all of that, but that’s more of a sociological observation about how we think of it, and then actually… I’ll change the question. What is intelligence?
That’s a much more difficult question. Maybe the ability to reason about your environment and draw conclusions from it.
Do you think that what we’re building, our systems, are they artificial in the sense that we just built them, but they can do that? Or are they artificial in the sense that they can’t really do that, but they sure can think it well?
I think they’re artificial in the sense that they’re not biological systems. They seem to be able to perceive input in the same way that a human can perceive input, and draw conclusions based off of that input. Usually, the reward system in place in an artificial intelligence framework is designed to do a very specific thing, very well.
So is there a cat in this picture or not? As opposed to a human: It’s, “Try to live a fulfilling life.” The objective functions are slightly different, but they are interpreting outside stimuli via some input mechanism, and then trying to apply that towards a specific goal. The goals for artificial intelligence today are extremely short-term, but I think that they are performing them on the same level—or better sometimes—than a human presented with the exact same short-term goal.
The artificial component comes into the fact that they were constructed, non-biologically. But other than that, I think they meet the definition of observing stimuli, reasoning about an environment, and achieving some outcome.
You used the phrase ‘they draw conclusions’. Are you using that colloquially, or does the machine actually conclude? Or does it merely calculate?
It calculates, but then it comes to, I guess, a decision at the end of the day. If it’s a classification system, for example… going back to “Is there a cat in this picture?” It draws the conclusion that “Yes, there was a cat. No, that wasn’t a cat.” It can do that with various levels of certainty in the same way that, potentially, a human would solve the exact same problem. If I showed you a blurry Polaroid picture you might be able to say, “I’m pretty sure there’s a cat in there, but I’m not 100 percent certain.”
And if I show you a very crisp picture of a kitten, you could be like, “Yes, there’s a cat there.” And I think convolutional neural network is doing the exact same thing: taking in that outside stimuli. Not through an optical nerve, but through the raw encoding of pixels, and then coming to the exact same conclusion.
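The "various levels of certainty" described here usually show up as a probability distribution over classes, for example from a softmax layer. Below is a minimal sketch; the scores are invented for illustration and are not tied to any particular model.

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability, then normalize to probabilities.
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

# Hypothetical raw classifier scores for the classes ["cat", "dog", "other"].
blurry_polaroid = np.array([1.2, 0.9, 0.8])   # ambiguous input
crisp_kitten = np.array([6.0, 1.0, 0.5])      # clear input

print(softmax(blurry_polaroid))  # roughly [0.41, 0.31, 0.28]: "pretty sure there's a cat"
print(softmax(crisp_kitten))     # roughly [0.99, 0.01, 0.00]: "yes, there's a cat"
```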
You make the really useful distinction between an AGI, which is a general intelligence—something as versatile as a human—and then the kinds of stuff we’re building now, which we call AI—which is doing this reasoning or drawing conclusions.
Is an AGI a linear development from what we have now? In other words, do we have all the pieces, and we just need faster computers, better algorithms, more data, a few nips and tucks, and we’re eventually going to get an AGI? Or is an AGI something very different, that is a whole different ball of wax?
I’m not convinced that, with the current tooling we have today, it’s just like… if we add one more hidden layer to a neural network, all of a sudden it’ll be AGI. That being said, I think this is how science, computer science, and progress in general work: techniques are built upon each other, and we make advancements.
It might be a completely new type of algorithm. It might not be a neural network. It might be reinforcement learning. It might not be reinforcement learning. It might be the next thing. It might not be on a CPU or a GPU. Maybe it’s on a quantum computer. If you think of scientific and technological process as this linear evolution of different techniques and ideas, then I definitely think we are marching towards that as an eventual outcome.
That being said, I don’t think that there’s some magic combinatorial setting of what we have today that will turn into this. I don’t think it’s one more hidden layer. I don’t think it’s a GPU that can do one more teraflop—or something like that—that’s going to push us over the edge. I think it’s going to be things built from the foundation that we have today, but it will continue to be new and novel techniques.
There was an interesting talk at the International Conference on Machine Learning in Sydney last week about AlphaGo, and how they got this massive speed-up when they put in deep learning. They were able to break through this plateau that they had found in terms of playing ability, where they could play at the amateur level.
And then once they started applying deep learning networks, that got them to the professional, and now best-in-the-world level. I think we’re going to continue to see plateaus for some of these current techniques, but then we’ll come up with some new strategy that will blast us through and get to the next plateau. But I think that’s an ever-stratifying process.
To continue in that vein… When they convened at Dartmouth in 1956 and said, “We can solve a big part of AI in the summer, with five people,” the assumption was that general intelligence, like all the other sciences, had a few simple laws.
You had Newton, Maxwell; you had electricity and magnetism, and all these things, and they were just a few simple laws. The idea was that all we need to do is figure out those for intelligence. And Pedro Domingos argues in The Master Algorithm, from a biological perspective that, in a sense, that may be true.  
That if you look at the DNA difference between us and an animal that isn’t generally intelligent… the amount of code is just a few megabytes that’s different, which teaches how to make my brain and your brain. It sounded like you were saying, “No, there’s not going to be some silver bullet, it’s going to be a bunch of silver buckshot and we’ll eventually get there.”
But do you hold any hope that maybe it is a simple and elegant thing?
Going back to my original statement about what is AI, I think when Marvin Minsky and everybody sat down in Dartmouth, the goalposts for AI were somewhat different. Because they were attacking it for the first time, some of the things were definitely overambitious. But certain things that they set out to do that summer, they actually accomplished reasonably well.
Things like the Lisp programming language, and things like that, came out of that and were extremely successful. But then, once these goals are accomplished, the next thing comes up. Obviously, in hindsight, it was overambitious to think that they could maybe match a human, but I think if you were to go back to Dartmouth and show them what we have today, and say: “Look, this computer can describe the scene in this picture completely accurately.”
I think that could be indistinguishable from the artificial intelligence that they were seeking, even if today what we want is someone we can have a conversation with. And then once we can have a conversation, the next thing is we want them to be able to plan our lives for us, or whatever it may be, solve world peace.
While I think there are some of the fundamental building blocks that will continue to be used—like, linear algebra and calculus, and things like that, will definitely be a core component of the algorithms that make up whatever does become AGI—I think there is a pretty big jump between that. Even if there’s only a few megabytes difference between us and a starfish or something like that, every piece of DNA is two bits.
If you have millions of differences, four-to-the-several million—like the state space for DNA—even though you can store it in a small amount of megabytes, there are so many different combinatorial combinations that it’s not like we’re just going to stumble upon it by editing something that we currently have.
It could be something very different in that configuration space. And I think those are the algorithmic advancements that will continue to push us to the next plateau, and the next plateau, until eventually we meet and/or surpass the human plateau.
You invoked quantum computers in passing, but putting that aside for a moment… Would you believe, just at a gut level—because nobody knows—that we have enough computing power to build an AGI, we just don’t know how?
Well, in the sense that if the human brain is general intelligence, the computing power in the human brain, while impressive… All of the computers in the world are probably better at performing some simple calculations than the biological gray matter mess that exists in all of our skulls. I think the raw amount of transistors and things like that might be there, if we had the right way to apply them, if they were all applied in the same direction.
That being said… Whether or not that’s enough to make it ubiquitous, or whether or not having all the computers in the world mimic a single human child will be considered artificial general intelligence, or if we’re going to need to apply it to many different situations before we claim victory, I think that’s up for semantic debate.
Do you think about how the brain works, even if [the context] is not biological? Is that how you start a problem: “Well, how do humans do this?” Does that even guide you? Does that even begin the conversation? And I know none of this is a map: Birds fly with wings, and airplanes, all of that. Is there anything to learn from human intelligence that you, in a practical, day-to-day sense, use?
Yeah, definitely. I think it often helps to try to approach a problem from fundamentally different ways. One way to approach that problem is from the purely mathematical, axiomatic way; where we’re trying to build up from first principles, and trying to get to something that has a nice proof or something associated with it.
Another way to try to attack the problem is from a more biological setting. If I had to solve this problem, and I couldn’t assume any of those axioms, then how would I begin to try to build heuristics around it? Sometimes you can go from that back to the proof, but there are many different ways to attack that problem. Obviously, there are a lot of things in computer science, and optimization in general, that are motivated by physical phenomena.
So a neural network, if you squint, looks kind of like a biological neural network. There are things like simulated annealing, which is a global optimization strategy that mimics the way steel is annealed… where the material tries to find some local lattice structure that has low energy, and then you pound the steel with the hammer, and that increases the energy so it can find a better global-optimum lattice structure that makes harder steel.
But that’s also an extremely popular algorithm in the scientific literature, so it was arrived at from this auxiliary direction. Or take a genetic algorithm, where you’re slowly evolving a population to try to get to a good result. I think there is definitely room for a lot of these algorithms to be inspired by biological or physical phenomena, whether or not they need to come from that to be effective. I would have trouble, off the top of my head, coming up with the biological equivalent of a support vector machine or something like that. So there are two different ways to attack it, but both can produce really interesting results.
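For readers who want the annealing analogy made concrete, here is a minimal simulated-annealing sketch on a toy one-dimensional problem. The objective function, temperature schedule, and step size are all illustrative assumptions, not anything taken from the conversation.

```python
import math
import random

def objective(x):
    # A toy function with several local minima.
    return x * x + 10 * math.sin(x)

def simulated_annealing(start, temp=10.0, cooling=0.995, steps=10_000):
    current, current_val = start, objective(start)
    for _ in range(steps):
        candidate = current + random.uniform(-1.0, 1.0)   # small random perturbation
        candidate_val = objective(candidate)
        delta = candidate_val - current_val
        # Always accept improvements; accept worse moves with probability exp(-delta / temp).
        # That occasional uphill move is the "hammer blow" that can knock the search
        # out of a local minimum while the temperature is still high.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current, current_val = candidate, candidate_val
        temp *= cooling   # cool down, so late-stage moves become mostly greedy
    return current, current_val

best_x, best_val = simulated_annealing(start=5.0)
print(best_x, best_val)
```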
Let’s take a normal thing that a human does, which is: You show a human training data of the Maltese Falcon, the little statue from the movie, and then you show him a bunch of photos. And a human can instantly say, “There’s the falcon under water, and there it’s half-hidden by a tree, and there it’s upside down…” A human does that naturally. So it’s some kind of transferred learning. How do we do that?
Transfer learning is the way that that happens. You’ve seen trees before. You’ve seen water. You’ve seen how objects look inside and outside of water before. And then you’re able to apply that knowledge to this new context.
It might be difficult for a human who grew up in a sensory deprivation chamber to look at this object… and then you start to show them things that they’ve never seen before: “Here’s this object and a tree,” and they might not ‘see the forest for the trees’ as it were.
In addition to that, without any context whatsoever, you take someone who was raised in a sensory deprivation chamber, and you start showing them pictures and asking them to do classification-type tasks. They may be completely unaware of what the reward function even is here. Who is this thing, telling me for the first time to do things I’ve never seen before?
What does it mean to even classify things or describe an object? Because you’ve never seen an object before.
And when you start training these systems from scratch, with no previous knowledge, that’s how they work. They need to slowly learn what’s good, what’s bad. There’s a reward function associated with that.
But with no context, with no previous information, it’s actually very surprising how well they are able to perform these tasks; considering [that when] a child is born, four hours later it isn’t able to do this. A machine algorithm that’s trained from scratch over the course of four hours on a couple of GPUs is able to do this.
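To make the transfer-learning pattern concrete: the usual recipe is to start from a network pretrained on a large dataset, freeze most of it, and retrain only the final layer for the new task. The sketch below assumes PyTorch and torchvision are available; the tiny random batch stands in for a real dataset, and it illustrates the general technique rather than any specific system discussed here.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network that has already "seen" a lot of the visual world: ImageNet weights.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so its general visual knowledge is preserved.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh head for the new, much smaller task,
# e.g. "statue" vs. "not statue".
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A stand-in batch; in practice a DataLoader over the new task's images goes here.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

for _ in range(10):   # a few fine-tuning steps on the new head only
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```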
You mentioned the sensory deprivation chamber a couple of times. Do you have a sense that we’re going to need to embody these AIs to allow them to—and I use the word very loosely—‘experience’ the world? Are they locked in a sensory deprivation chamber right now, and that’s limiting them?
I think with transfer learning, and pre-training of data, and some reinforcement algorithm work, there’s definitely this idea of trying to make that better, and bootstrapping based off of previous knowledge in the same way that a human would attack this problem. I think it is a limitation. It would be very difficult to go from zero to artificial general intelligence without providing more of this context.
There’s been many papers recently, and OpenAI had this great blog post recently where, if you teach the machine language first, if you show it a bunch of contextual information—this idea of this unsupervised learning component of it, where it’s just absorbing information about the potential inputs it can get—that allows it to perform much better on a specific task, in the same way that a baby absorbs language for a long time before it actually starts to produce it itself.
And it could be in a very unstructured way, but it’s able to learn some of the actual language structure or sounds from the particular culture in which it was raised in this unstructured way.
Let’s talk a minute about human intelligence. Why do you think we understand so poorly how the brain works?
That’s a great question. It’s easier scientifically, with my background in math and physics—it seems like it’s easier to break down modular decomposable systems. Humanity has done a very good job at understanding, at least at a high level, how physical systems work, or things like chemistry.
Biology starts to get a little bit messier, because it’s less modular and less decomposable. And as you start to build larger and larger biological systems, it becomes a lot harder to understand all the different moving pieces. Then you go to the brain, and then you start to look at psychology and sociology, and all of the lines get much fuzzier.
It’s very difficult to build an axiomatic rule system. And humans aren’t even able to do that in some sort of grand unified way with physics, or understand quantum mechanics, or things like that; let alone being able to do it for these sometimes infinitely more complex systems.
Right. But the most successful animal on the planet is a nematode worm. Ten percent of all animals are nematode worms. They’re successful, they find food, and they reproduce and they move. Their brains have 302 neurons. We’ve spent twenty years trying to model that, a bunch of very smart people in the OpenWorm project…
 But twenty years trying to model 300 neurons to just reproduce this worm, make a digital version of it, and even to this day people in the project say it may not be possible.
I guess the argument is, 300 sounds like a small amount. One thing that’s very difficult for humans to internalize is the exponential function. So if intelligence grew linearly, then yeah. If we could understand one, then 300 might not be that much, whatever it is. But if the state space grows exponentially, or the complexity grows exponentially… if there’s ten different positions for every single one of those neurons, like 10^300, that’s more than the number of atoms in the universe.
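A quick back-of-the-envelope check on that point, with purely illustrative numbers: a few hundred components with a handful of states each already dwarfs the roughly 10^80 atoms in the observable universe.

```python
# 302 neurons, each with (say) 10 possible states, gives 10**302 configurations.
configurations = 10 ** 302
atoms_in_universe = 10 ** 80   # rough standard estimate

print(configurations > atoms_in_universe)                 # True
print(len(str(configurations // atoms_in_universe)) - 1)  # ~222 orders of magnitude larger
```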
Right. But we aren’t starting by just rolling 300 dice and hoping for them all to be—we know how those neurons are arranged.
At a very high level we do.
I’m getting to a point, that we maybe don’t even understand how a neuron works. A neuron may be doing stuff down at the quantum level. It may be this gigantic supercomputer we don’t even have a hope of understanding, a single neuron.
From a chemical way, we can have an understanding of, “Okay, so we have neurotransmitters that carry a positive charge, that then cause a reaction based off of some threshold of charge, and there’s this catalyst that happens.” I think from a physics and chemical understanding, we can understand the base components of it, but as you start to build these complex systems that have this combinatorial set of states, it does become much more difficult.
And I think that’s that abstraction, where we can understand how simple chemical reactions work. But then it becomes much more difficult once you start adding more and more. Or even in physics… like if you have two bodies, and you’re trying to calculate the gravity, that’s relatively easy. Three? Harder. Four? Maybe impossible. It becomes much harder to solve these higher-order, higher-body problems. And even with 302 neurons, that starts to get pretty complex.
Oddly, two of them aren’t connected to anything, just like floating out there…
Do you think human intelligence is emergent?
In what respect?
I will clarify that. There are two sorts of emergence: one is weak, and one is strong. Weak emergence is where a system takes on characteristics which don’t appear at first glance to be derivable from them. So the intelligence displayed by an ant colony, or a beehive—the way that some bees can shimmer in unison to scare off predators. No bee is saying, “We need to do this.”  
The anthill behaves intelligently, even though… the queen isn’t, like, in charge; the queen is just another ant, but somehow it all adds up to intelligence. So that would be something where it takes on these attributes.
Can you really intuitively derive intelligence from neurons?
And then, to push that a step further, there are some who believe in something called ‘strong emergence’, where they literally are not derivable. You cannot look at a bunch of matter and explain how it can become conscious, for instance. It is what the minority of people believe about emergence, that there is some additional property of the universe we do not understand that makes these things happen.
The question I’m asking you is: Is reductionism the way to go to figure out intelligence? Is that how we’re going to kind of make advances towards an AGI? Just break it down into enough small pieces.
I think that is an approach, whether or not that’s ‘the’ ultimate approach that works is to be seen. As I was mentioning before, there are ways to take biological or physical systems, and then try to work them back into something that then can be used and applied in a different context. There’s other ways, where you start from the more theoretical or axiomatic way, and try to move forward into something that then can be applied to a specific problem.
I think there’s wide swaths of the universe that we don’t understand at many levels. Mathematics isn’t solved. Physics isn’t solved. Chemistry isn’t solved. All of these build on each other to get to these large, complex, biological systems. It may be a very long time, or we might need an AGI to help us solve some of these systems.
I don’t think it’s required to understand everything to be able to observe intelligence—like, proof by example. I can’t tell you why my brain thinks, but my brain is thinking, if you can assume that humans are thinking. So you don’t necessarily need to understand all of it to put it all together.
Let me ask you one more far-out question, and then we’ll go to a little more immediate future. Do you have an opinion on how consciousness comes about? And if you do or don’t, do you believe we’re going to build conscious machines?
Even to throw a little more into that one, do you think consciousness—that ability to change focus and all of that—is a requisite for general intelligence?
So, I would like to hear your definition of consciousness.
I would define it by example, to say that it’s subjective experience. It’s how you experience things. We’ve all had that experience when you’re driving, that you kind of space out, and then, all of a sudden, you kind of snap to. “Whoa! I don’t even remember getting here.”
And so that time when you were driving, your brain was elsewhere, you were clearly intelligent, because you were merging in and out of traffic. But in the sense I’m using the word, you were not ‘conscious’, you were not experiencing the world. If your foot caught on fire, you would feel it; but you weren’t experiencing the world. And then instantly, it all came on and you were an entity that experienced something.
Or, put another way… this is often illustrated with the problem of Mary by Frank Jackson:
He posits somebody named Mary, who knows everything about color, like, at a god-like level—knows every single thing about color. But the catch is, as you might guess, she’s never seen it. She’s lived in a black-and-white room and never seen color. And one day, she opens the door, she looks outside and she sees red.  
The question becomes: Does she learn anything? Did she learn something new?  
In other words, is experiencing something different than knowing something? Those two things taken together, defining consciousness, is having an experience of the world…
I’ll give one final one. You can hook a sensor up to a computer, and you can program the computer to play an mp3 of somebody screaming if the sensor hits 500 degrees. But nobody would say, at this day and age, the computer feels the pain. Could a computer feel anything?
Okay. I think there’s a lot to unpack there. I think computers can perceive the environment. Your webcam is able to record the environment in the same way that your optical nerves are able to record the environment. When you’re driving a car, and daydreaming, and kind of going on autopilot, as it were, there still are processes running in the background.
If you were to close your eyes, you would be much worse at doing lane merging and things like that. And that’s because you’re still getting the sensory input, even if you’re not actively, consciously aware of the fact that you’re observing that input.
Maybe that’s where you’re getting at with consciousness here, is: Not only the actual task that’s being performed, which I think computers are very good at—and we have self-driving cars out on the street in the Bay Area every day—but that awareness of the fact that you are performing this task, is kind of meta-level of: “I’m assembling together all of these different subcomponents.”
Whether that’s driving a car, thinking about the meeting that I’m running late to, some fight that I had with my significant other the night before, or whatever it is. There’s all these individual processes running, and there could be this kind of global awareness of all of these different tasks.
I think today, where artificial intelligence sits is, performing each one of these individual tasks extremely well, toward some kind of objective function of, “I need to not crash this car. I need to figure out how to resolve this conflict,” or whatever it may be; or, “Play this game in an artificial intelligence setting.” But we don’t yet have that kind of governing overall strategy that’s aware of making these tradeoffs, and then making those tradeoffs in an intelligent way. But that overall strategy itself is just going to be going toward some specific reward function.
Probably when you’re out driving your car, and you’re spacing out, your overall reward function is, “I want to be happy and healthy. I want to live a meaningful life,” or something like that. It can be something nebulous, but you’re also just this collection of subroutines that are driving towards this specific end result.
But the direct question of what would it mean for a computer to feel pain? Will a computer feel pain? Now they can sense things, but nobody argues they have a self that experiences the pain. It matters, doesn’t it?
It depends on what you mean by pain. If you mean there’s a response of your nervous system to some outside stimuli that you perceive as pain, a negative response, and—
—It involves emotional distress. People know what pain is. It hurts. Can a computer ever hurt?
It’s a fundamentally negative response to what you’re trying to achieve. So pain and suffering is the opposite of happiness. And your objective function as a human is happiness, let’s say. So, by failing to achieve that objective, you feel something like pain. Evolutionarily, we might have evolved this in order to avoid specific things. Like, you get pain when you touch flame, so don’t touch flame.
And the reason behind that is biological systems degrade in high-temperature environments, and you’re not going to be able to reproduce or something like that.
You could argue that when a classification system fails to classify something, and it gets penalized in its reward function, that’s the equivalent of it finding something where, in its state of the world, it has failed to achieve its goal, and it’s getting the opposite of what its purpose is. And that’s similar to pain and suffering in some way.
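Mechanically, that penalty in the reward function is just a number, for example a cross-entropy loss that grows the more confidently wrong the classifier is. A minimal illustration with made-up probabilities; nothing here implies the system experiences anything.

```python
import math

def cross_entropy(prob_assigned_to_true_class):
    # The training penalty for one example: small when the model is right and
    # confident, large when it is confidently wrong.
    return -math.log(prob_assigned_to_true_class)

print(cross_entropy(0.95))  # correct and confident: penalty ~0.05
print(cross_entropy(0.05))  # wrong and confident:   penalty ~3.0
```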
But is it? Let’s be candid. You can’t take a person and torture them, because that’s a terrible thing to do… because they experience pain. [Whereas if] you write a program that has an infinite loop that causes your computer to crash, nobody’s going to suggest you should go to jail for that. Because people know that those are two very different things.
It is a negative neurological response based off of outside stimuli. A computer can have a negative response, and perform based off of outside stimuli poorly, relative to what it’s trying to achieve… Although I would definitely agree with you that that’s not a computer experiencing pain.
But from a pure chemical level, down to the algorithmic component of it, they’re not as fundamentally different… that because it’s a human, there’s something magic about it being a human. A dog can also experience pain.
These worms—I’m not as familiar with the literature on that, but [they] could potentially experience pain. And as you derive that further and further back, you might have to bend your definition of pain. Maybe they’re not feeling something in a central nervous system, like a human or a dog would, but they’re perceiving something that’s negative to what they’re trying to achieve with this utility function.
But we do draw a line. And I don’t know that I would use the word ‘magic’ the way you’re doing it. We draw this line by saying that dogs feel pain, so we outlaw animal cruelty. Bacteria don’t, so we don’t outlaw antibiotics. There is a material difference between those two things.
So if the difference is a central nervous system, and pain is being defined as a nervous response to some outside stimuli… then unless we explicitly design machines to have central nervous systems, then I don’t think they will ever experience pain.
Thanks for indulging me in all of that, because I think it matters… Because up until thirty years ago, veterinarians typically didn’t use anesthetic. They were told that animals couldn’t feel pain. Babies were operated on in the ‘90s—open heart surgery—under the theory they couldn’t feel pain.  
What really intrigues me is the idea of how would we know if a machine did? That’s what I’m trying to deconstruct. But enough of that. We’ll talk about jobs here in a minute, and those concerns…
There are groups of people that are legitimately afraid of AI. You know all the names. You’ve got Elon Musk, you’ve got Stephen Hawking. Bill Gates has thrown in his hat with that group, Wozniak has. Nick Bostrom wrote a book that addressed the existential threat and all of that. Then you have Mark Zuckerberg, who says no, no, no. You’ve got Oren Etzioni over at the Allen Institute, saying they’re just working on some very basic problems. You’ve got Andrew Ng with his “overpopulation on Mars” line, saying it’s not helpful to even have this conversation.
What is different about those two groups in your mind? What is the difference in how they view the world that gives them these incredibly different viewpoints?
I think it goes down to a definition problem. As you mentioned at the beginning of this podcast, when you ask people, “What is artificial intelligence?” everybody gives you a different answer. I think each one of these experts would also give you a different answer.
If you define artificial intelligence as matrix multiplication and gradient descent in a deep learning system, trying to achieve a very specific classification output given some pixel input—or something like that—it’s very difficult to conceive that as some sort of existential threat for humanity.
But if you define artificial intelligence as this general intelligence, this kind of emergent singularity where the machines don’t hit the plateau, that they continue to advance well beyond humans… maybe to the point where they don’t need humans, or we become the ants in that system… that becomes very rapidly a very existential threat.
As I said before, I don’t think there’s an incremental improvement from algorithms—as they exist in the academic literature today—to that singularity, but I think it can be a slippery slope. And I think that’s what a lot of these experts are talking about… Where if it does become this dynamic system that feeds on itself, by the time we realize it’s happening, it’ll be too late.
Whether or not that’s because of the algorithms that we have today, or algorithms down the line, it does make sense to start having conversations about that, just because of the time scales over which governments and policies tend to work. But I don’t think someone is going to design a TensorFlow or MXNet algorithm tomorrow that’s going to take over the world.
There’s legislation in Europe to basically say, if an AI makes a decision about whether you should get an auto loan or something, you deserve to know why it turned you down. Is that a legitimate request, or is it like you go to somebody at Google and say, “Why is this site ranked number one and this site ranked number two?” There’s no way to know at this point.  
Or is that something that, with the auto loan thing, you’re like, “Nope, here are the big bullet points of what went into it.” And if that becomes the norm, does that slow down AI in any way?
I think it’s important to make sure, just from a societal standpoint, that we continue to strive towards not being discriminatory towards specific groups and people. It can be very difficult, when you have something that looks like a black box from the outside, to be able to say, “Okay, was this being fair?” based off of the fairness that we as a society have agreed upon.
The machine doesn’t have that context. The machine doesn’t have the policy, necessarily, inside to make sure that it’s being as fair as possible. We need to make sure that we do put these constraints on these systems, so that it meets what we’ve agreed upon as a society, in laws, etc., to adhere to. And that it should be held to the same standard as if there was a human making that same decision.
There is, of course, a lot of legitimate fear wrapped up about the effect of automation and artificial intelligence on employment. And just to set the problem up for the listeners, there’s broadly three camps, everybody intuitively knows this.
 There’s one group that says, “We’re going to advance our technology to the point that there will be a group of people who do not have the educational skills needed to compete with the machines, and we’ll have a permanent underclass of people who are unemployable.” It would be like the Great Depression never goes away.
And then there are people who say, “Oh, no, no, no. You don’t understand. Everything, every job, a machine is going to be able to do.” You’ll reach a point where the machine will learn it faster than the human, and that’s it.
And then you’ve got a third group that says, “No, that’s all ridiculous. We’ve had technology come along, as transformative as it is… We’ve had electricity, and machines replacing animals… and we’ve always maintained full employment.” Because people just learn how to use these tools to increase their own productivity, maintain full employment—and we have growing wages.
So, which of those, or a fourth one, do you identify with?
This might be an unsatisfying answer, but I think we’re going to go through all three phases. I think we’re in the third camp right now, where people are learning new systems, and it’s happening at a pace where people can go to a computer science boot camp and become an engineer, and try to retrain and learn some of these systems, and adapt to this changing scenario.
I think, very rapidly—especially at the exponential pace that technology tends to evolve—it does become very difficult. Fifty years ago, if you wanted to take apart your telephone and try to figure out how it works and repair it, that was something a kid could do at camp, like an entry-level circuits camp. That’s impossible to do with an iPhone.
I think that’s going to continue to happen with some of these more advanced systems, and you’re going to need to spend your entire life understanding some subcomponent of it. And then, in the further future, as we move towards this direction of artificial general intelligence… Like, once a machine is a thousand times, ten thousand times, one hundred thousand times smarter—by whatever definition—than a human, and that increases at an exponential pace… We won’t need a lot of different things.
Whether or not that’s a fundamentally bad thing is up for debate. I think one thing that’s different about this than the Industrial Revolution, or the agricultural revolution, or things like that, that have happened throughout human history… is that instead of this happening over the course of generations or decades… Maybe if your father, and your grandfather, and your entire family tree did a specific job, but then that job doesn’t exist anymore, you train yourself to do something different.
Once it starts to happen over the course of a decade, or a year, or a month, it becomes much harder to completely retrain. That being said, there’s lots of thoughts about whether or not humans need to be working to be happy. And whether or not there could be some other fundamental thing that would increase the net happiness and fulfillment of people in the world, besides sitting at a desk for forty hours a week.
And maybe that’s actually a good thing, if we can set up the societal constructs to allow people to do that in a healthy and happy way.
Do you have any thoughts on computers displaying emotions, emulating emotions? Is that going to be a space where people are going to want authentic human experiences in those in the future? Or are we like, “No, look at how people talk to their dog,” or something? If it’s good enough to fool you, you just go along with the conceit?
The great thing about computers, and artificial intelligence systems, and things like that is if you point them towards a specific target, they’ll get pretty good at hitting that target. So if the goal is to mimic human emotion, I think that that’s something that’s achievable. Whether or not a human cares, or is even able to distinguish between that and actual human emotion, could be very difficult.
At Cornell, where I did my PhD, they had this psychology chatbot called ELIZA—I think this was back in the ‘70s. It went through a specific school of psychological behavioral therapy thought, replied with specific ways, and people found it incredibly helpful.
Even if they knew that it was just a machine responding to them, it was a way for them to get out their emotions and work through specific problems. As these machines get more sophisticated and able, as long as it’s providing utility to the end user, does it matter who’s behind the screen?
That’s a big question. Weizenbaum shut down ELIZA because he said that when a machine says, “I understand��� that it’s a lie, there’s no ‘I’, and there’s nothing [there] that understands anything. He had real issues with that.
But then when they shut it down, some of the end users were upset, because they were still getting quite a bit of utility out of it. There’s this moral question of whether or not you can take away something from someone who is deriving benefit from it as well.
So I guess the concern is that maybe we reach a day where an AI best friend is better than a real one. An AI one doesn’t stand you up. And an AI spouse is better than a human spouse, because of all of those reasons. Is that a better world, or is it not?
I think it becomes a much more dangerous world, because as you said before, someone could decide to turn off the machine. When it’s someone taking away your psychologist, that could be very dangerous. When it’s someone deciding that you didn’t pay your monthly fee, so they’re going to turn off your spouse, that could be quite a bit worse as well.
As you mentioned before, people don’t necessarily associate the feelings or pain or anything like that with the machine, but as these get more and more life-like, and as they are designed with the reward function of becoming more and more human-like, I think that distinction is going to become quite a bit harder for us to understand.
And it not only affects the machine—which you can make the argument doesn’t have a voice—but it’ll start to affect the people as well.
One more question along these lines. You were a Forbes 30 Under 30. You’re fine with computer emotions, and you have this set of views. Do you notice any generational difference between researchers who have been in it longer than you, and people of your age and training? Do you look at it, as a whole, differently than another generation might have?
I think there are always going to be generational differences. People grow up in different times and contexts, societal norms shift… I would argue usually for the better, but not always. So I think that that context in which you were raised, that initial training data that you apply your transfer learning to for the rest of your life, has a huge effect on what you’re actually going to do, and how you perceive the world moving forward.
I spent a good amount of time today at SigOpt. Can you tell me what you’re trying to do there, and why you started or co-founded it, and what the mission is? Give me that whole story.
Yeah, definitely. SigOpt is an optimization-as-a-service company, or a software-as-a-service offering. What we do is help people configure these complex systems. So when you’re building a neural network—or maybe it’s a reinforcement learning system, or an algorithmic trading strategy—there’s often many different tunable configuration parameters.
These are the settings that you need to put in place before the system itself starts to do any sort of learning: things like the depth of the neural network, the learning rates, some of these stochastic gradient descent parameters, etc.
These are often kind of nuisance parameters that are brushed under the rug. They’re typically solved via relatively simplistic methods like brute forcing it or trying random configurations. What we do is we take an ensemble of the state-of-the-art research from academia, and Bayesian and global optimization, and we ensemble all of these algorithms behind a simple API.
So when you are downloading MxNet, or TensorFlow, or Caffe2, whatever it is, you don’t have to waste a bunch of time trying different things via trial-and-error. We can guide you to the best solution quite a bit faster.
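For readers unfamiliar with this kind of tooling, the workflow described here (propose a configuration, train, report a score, repeat) looks roughly like the loop below. The sketch uses the open-source scikit-optimize library as a generic stand-in for Bayesian optimization; it is not SigOpt’s actual API, and the search space and objective are invented for illustration.

```python
import math
from skopt import gp_minimize
from skopt.space import Integer, Real

def train_and_score(params):
    depth, learning_rate = params
    # Stand-in for "train the model with these settings and return validation error".
    # A synthetic bowl-shaped objective keeps the example self-contained.
    return 0.01 * (depth - 6) ** 2 + (math.log10(learning_rate) + 3) ** 2

search_space = [
    Integer(1, 10, name="depth"),                                  # e.g. network depth
    Real(1e-5, 1e-1, prior="log-uniform", name="learning_rate"),   # e.g. SGD learning rate
]

# Bayesian optimization: each new suggestion is informed by every previous result,
# so far fewer trials are needed than with grid search or random configurations.
result = gp_minimize(train_and_score, search_space, n_calls=30, random_state=0)
print(result.x, result.fun)   # best configuration found and its score
```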
Do you have any success stories that you like to talk about?
Yeah, definitely. One of our customers is Hotwire. They’re using us to do things like ranking systems. We work with a variety of different algorithmic trading firms to make their strategies more efficient. We also have this great academic program where SigOpt is free for any academic at any university or national lab anywhere in the world.
So we’re helping accelerate the flywheel of science by allowing people to spend less time doing trial-and-error. I wasted way too much of my PhD on this, to be completely honest—fine-tuning different configuration settings and bioinformatics algorithms.
So our goal is… If we can have humans do what they’re really good at, which is creativity—understanding the context in the domain of a problem—and then we can make the trial-and-error component as little as possible, hopefully, everything happens a little bit faster and a little bit better and more efficiently.
What are the big challenges you’re facing?
Where this system makes the biggest difference is in large complex systems, where it’s very difficult to manually tune, or brute force this problem. Humans tend to be pretty bad at doing 20-dimensional optimization in their head. But a surprising number of people still take that approach, because they’re unable to access some of this incredible research that’s been going on in academia for the last several decades.
Our goal is to make that as easy as possible. One of our challenges is finding people with these interesting complex problems. I think the recent surge of interest in deep learning and reinforcement learning, and the complexity that’s being imbued in a lot of these systems, is extremely good for us, and we’re able to ride that wave and help these people realize the potential of these systems quite a bit faster than they would otherwise.
But having the market come to us is something that we’re really excited about, but it’s not instant.
Do you find that people come to you and say, “Hey, we have this dataset, and we think somewhere in here we can figure out whatever”? Or do they just say, “We have this data, what can we do with it?” Or do they come to you and say, “We’ve heard about this AI thing, and want to know what we can do”?
There are companies that help solve that particular problem, where they’re given raw data and they help you build a model and apply it to some business context. Where SigOpt sits, which is slightly different than that, is when people come to us, they have something in place. They already have data scientists or machine learning engineers.
They’ve already applied their domain expertise to really understand their customers, the business problem they’re trying to solve, everything like that. And what they’re looking for is to get the most out of these systems that they’ve built. Or they want to build a more advanced system as rapidly as possible.
And so SigOpt bolts on top of these pre-existing systems, and gives them that boost by fine-tuning all of these different configuration parameters to get to their maximal performance. So, sometimes we do meet people like that, and we pass them on to some of our great partners. When someone has a problem and they just want to get the most out of it, that’s where we can come in and provide this black box optimization on top of it.
Final question-and-a-half. Do you speak a lot? Do you tweet? If people want to follow you and keep up with what you’re doing, what’s the best way to do that?
They can follow @SigOpt on Twitter. We have a blog where we post technical and high-level pieces about optimization and some of the different advancements in deep learning and reinforcement learning. We also publish papers, but blog.sigopt.com and @SigOpt on Twitter are the best ways to follow along.
Alright. It has been an incredibly fascinating hour, and I want to thank you for taking the time.
Excellent. Thank you for having me. I’m really honored to be on the show.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here. 