#A.I Program Mint
crystalsandbubbletea · 2 years ago
After what felt like forever I finally finished drawing Mint as a human-
AAAAAAAAAA-
MY BBG! (/J /J /J-)
But yeah, this is Mint in her human form-
I definitely plan on drawing her again (Although maybe do the official A.I design before drawing more of her human form-)
I don't remember how long this took- :')
⚠️⚠️DO NOT REPOST MY ART TO OTHER SITES WITHOUT MY PERMISSION⚠️⚠️
⚠️⚠️DO NOT REPOST MY ART TO OTHER SITES WITHOUT CREDIT⚠️⚠️
crystalsandbubbletea · 2 years ago
I actually swapped the names for Mint, Ghost, Basil, and Logic-
Originally Logic was going to be Mint and Mint was going to be Logic. Meanwhile Ghost was going to be Basil and Basil was going to be Ghost.
At some point I also had Basil's name as Chervil.
Berat was originally going to be called 'Dare' but I decided to change their name to be something from their mother's culture.
I don't remember what Alex's original name was going to be-
Izmir was originally going to be called Rima, but I changed it to be her middle name-
Finally, Sila was originally planned to not have a name, but I thought to myself "If she's going to appear in my fanfiction series, I should probably give her a name-"
I spent a long time trying to find a name that I thought would make sense for her, and eventually I came across the name 'Sila', and I thought the name meaning suited her. :}
Have you ever changed your OC's name?
thebiblesalesman · 5 years ago
Hero of Numbani new canon brands, products, tech, concepts, etc.
*SPOILERS FOR The Hero of Numbani AHEAD*
This list does not include Yoruba or Afro-Brazilian food/clothing/cultural concepts, which probably deserve their own list.
344X-Azúcar - A wireless signal used by Sombra.
3-D Puzzle: Horizon Lunar Colony - Efi and her cousin Bisi spent 400 hours putting this together.
Alatise Parkway - Street in the Numbani Arts District where a gray market auction house can be found.
Bankolé’s Grocery - A neighborhood grocery store.
Bello tower - Tower where Efi’s cousin Dayo lives.
Bisi’s laptop - An extremely high-end laptop that Bisi tries to gift to Efi. Capable of 3D holoprojection. Can crush, in seconds, logic exercises that cause “the most advanced omnics” to struggle.
Blanchet771 - A literary icon beloved by Efi’s father.
Breaking Circuits trilogy - A Flash Brighton movie trilogy where Flash is a human pretending to be an omnic pretending to be a human.
Carnival Calabar - Real-life Nigerian festival that occurs December 1-31. Efi mentions that Unity Day rivals Carnival Calabar in size.
Chinua Achebe - Real-life Nigerian novelist (1930-2013). A literary icon beloved by Efi’s father.
Compass Point Insurance - An insurance company used by Efi’s family.
Court-appointed cybernetic surgeon - Employed by world governing organizations for removing cybernetic implants from war criminals. One such surgeon removed most of Doomfist’s implants after his trial, a procedure apparently broadcast out to the whole world.
CraftLife 5000 - Premium power tools including a hard-light screwdriver.
Cybernetic brain upgrades - According to Efi, Sojourn has these upgrades, which consist of “bionic neurons” injected into the brain.
Cyborg African wild dogs - These are things that exist. Efi encounters two dogs with spinal implants and flickers of green light in their eyes.
Dagger Sect - Enemy agents in the Breaking Circuits movie trilogy.
Declaration of Unity - A declaration read and signed by Gabrielle Adawe during the founding of Numbani.
Delivery drone - Used to deliver things. Some are powerful enough to carry an 11-year-old.
efi_was_here_v3-39x.aipm - The name of Orisa’s personality matrix.
eNaira - Electronic Nigerian currency.
FacePunch - Site where Efi finds a lot of jokes and memes, such as Marley the Dancing Coconut.
Fadeout/Fading out - An omnic condition induced by a malicious code that self-replicates and overwrites the omnic’s native functions one-by-one. The name comes from the lights on an omnic’s head fading out when it is affected by this condition. Omnics experience severe program malfunctions during fadeout. For example, an omnic forklift operator might keep driving the forklift into a wall over and over again, or an omnic artist might destroy her own work. Fadeout also causes an omnic’s private wireless port to become publicly accessible, and if the malicious code is allowed to operate for long enough the omnic can lose all original programming and memory. Fadeouts were induced in Numbani by Sombra for a time after Doomfist’s attack on the airport. Because fadeout causes omnics’ wireless ports to become publicly accessible, an “antidote” code can be administered to one omnic and will be automatically distributed to all other affected omnics in the area.
Flash Brighton - An action movie character with numerous franchise films such as...
Flash Brighton and the Omnic Crusaders: Forty-Four Hours Till Midnight - Movie where Flash Brighton abandons werewolf pups to go looking for his brother’s assassin with a time machine.
Flash Brighton and the Omnic Crusaders: The Duel to Infinity - Movie starring Kam Kalu, Thespion 4.0, and A.I. Schylus, among others.
Flexxon Pro Micro T1 reactor - Alternative to the miniature Tobelstein reactor. Cannot maintain a graviton charge for more than a few seconds.
Free Thinkers - A website where Efi gets most of the open source code for her robots.
Harmony Key - A symbolic key presented by Gabrielle Adawe to the leader of the Numbani Omnic Union during the founding of Numbani.
HollaGram (or Hollagram) - Social media service where users can make multimedia posts and other users can award likes, claps, or shares and leave comments.
International Baccalaureate - A real-life educational programme based in Geneva, Switzerland. Efi is taking senior-high-level IB instruction in calculus and physics at age 11.
Ipanema - Real-life neighborhood in Rio. Efi planned to take surf lessons here, and Lúcio’s studio is here.
Junker reactor - A graviton reactor purchased by Orisa when a miniature Tobelstein reactor cannot be found. Cobbled together from mismatched parts, with some graffiti on the side. Efi notes that it is probably irradiated and likely to blow up in Orisa’s face.
Junie (Junior Assistant) - A robot stand-in for social and professional situations, capable of broadcasting holograms of people and recording video/audio with a 360-degree camera. Hologram features can be upgraded with hardlight conversion kits. Invented by Efi Oladele. Holograms produced by Junies are referred to as [name-of-the-person-being-shown]-Junior. For example, Naade-Junior for a hologram of Efi’s friend Naade.
Lagos - The largest real-life city in Nigeria, on the coast.
Maxwell Interpreter - Part of Efi’s robot creation kit. Used for creating simulations by autosorting “eight billion permutations into virtual hash matrices.”
Modulated Biochemical Currency (MBC) - DNA-coded cryptocurrency used on the dark web. About the size of a sand dollar, they wiggle in your hand and contain synthetic blood with unique biological codes in it. Also known as bio gold, wiggle notes, glam clams, scritch scratch...
Nollywood - Real-life slang for the Nigerian film industry.
Numbani civic codes - For example, Code 34-342b - Driving a vehicle with an expired registration. Code 92-574j - Pedestrian cross-traffic. Etc.
Numbani Civic Defense Department - Numbani government security forces/police.
Numbani Credit Union - A credit union. Efi stores her grant money here.
Numbani museum exhibits - Historical artifacts, omnic art, a walking tour of Overwatch’s presence in Numbani, Numbani native plants, and a diorama of the Declaration of Unity, among others.
Numbani Omnic Union - An omnic representative organization present during the founding of Numbani. The leader was presented the Harmony Key by Gabrielle Adawe.
OmnicCon - A con in Numbani with a costuming contest.
OmniWorx - Company that produces synthetic greases in designer tins with different scents like mint and citrus. The product is used by omnics as an indulgence, such as on Unity Day in Numbani. Efi buys OmniWorx tins for her omnic friends.
OR15 auction - An auction that took place after the defeat of the OR15s by Doomfist at Numbani airport. The Numbani Civic Defense Department sold off decommissioned OR15s in lots of 10 that cost around 20 million naira. Bidders were not informed of the OR15s’ defeat at Doomfist’s hands.
Overwatch cartoons - Cartoons Efi describes as “old” that are based on the lives of real Overwatch members like Sojourn.
Overwatch coin bank - Owned by Efi. Features Reinhardt protecting the bank with his rocket hammer.
Overwatch pajamas - Efi owns a pair of these.
Paper notes - Used to avoid digital filters at Efi’s school.
Peace Park - A park in Numbani which had a statue of Gabrielle Adawe. The statue ended up being destroyed by Doomfist.
Precision Core reactor - Alternative to the miniature Tobelstein reactor. Unit is the size of a car.
Rapid-X heliotherapy - A type of therapy that heals broken bones in the span of a few days.
Sky Postal - Delivery company that uses drones. Capable of executing Lightning-Priority deliveries.
Steppe Wanderer - A model of car.
Super Fun Family Time - A “game” Efi’s parents taught her where they hide in the interior bathroom of their flat whenever the warning sirens go off indicating a Doomfist attack.
Tiawo Boulevard - Street in Numbani that intersects Heritage Avenue.
Tin Can Island Port - Port where Numbani omnics held a dockworkers’ revolt several years back.
Tonal Abyss - Omnic pop band with 38 members. Broke up at one point due to disagreements about which quantum clock standard to use as a time measure for their music. Reunited at Lúcio’s concert in Numbani.
University of Ibadan, University of Lagos - Real-life Nigerian universities that Efi’s cousin Dayo received acceptances from in the story.
Valor Matrix distributed computing project - Project that rewards loans of processing power with currency.
Version 3.44 VAvmpCompiler - Part of Efi’s robot creation kit. An alternative to a dedicated Maxwell Interpreter box.
Virtual Physician protocol - Downloadable physician protocol accessible to any robot or omnic.
Yankari National Park - Largest national park in Nigeria.
Yoku Voyager - A low-end car model with weak levrims.
Zobo Bot - Robot that makes juice drinks at Efi’s school. Known to be ornery.
sweddleria · 4 years ago
Barbary Station
Stearns, R. E. Barbary Station. New York: Saga Press, 2017.
Genre: Science-Fiction
Subgenre(s): Heist/Survival against (sci-fi) elements
Appeal Terms: action-packed, romantic, world-building
Potential Readers: the Venn diagram of fans of Star Trek and The L Word, progressive readers looking for a heist joyride, LGBT+ people who are worried about what Boston Dynamics is up to...
Diversity: Main protagonists are a lesbian couple - in space!
Capsule Review: Adda and Iridian are engineers in love, and on the run. With near-worthless degrees, facing down the barrel of student loan debt, and living in an exciting period of somewhat accessible space travel, our protagonists decide to chase after a legend. The legend? Pirates living their best lives on Barbary Station, outside of the law but successfully following through on their various heists and space-buckling adventures. Upon arrival, however, reality falls far short of what they envisioned. Instead, they discover outlaws fighting for their lives and scraping by at subsistence level, skirting around each other as well as a program determined to take them all out - one capable of creating an army and waging biowarfare. Are these freshly minted engineers ready to take on feuding pirate gangs, pull off a huge heist, and defeat a murderous A.I.?
Links to Other Reviews: The Verge ; Kirkus ; Tor
Awards: N/A (as far as I could find)
Read-Alikes: Architects of Memory by Karen Osborne, Light Chaser by Peter F. Hamilton and Gareth L. Powell, The Outside by Ada Hoffmann
Additional Notes: I was very hyped to read this book, but I don’t think I’ll be recommending this to anyone anytime soon...
starryburglar-archive · 6 years ago
Gumi Info
Full name: Gumi Megpoid
Species: Human || A.I. ( verse dependent )
Age: 19
Sexuality: Demisexual
FC(s): Midori ( Divine Gate ) || Myoui Mina
Bio: Born into a middle-class family, with both of her parents being musicians, it’s no wonder that Gumi developed an interest in the art at a young age. Her parents recall how she would sing as often as possible, going along with the melodies she heard on TV or on the radio. Her father taught her how to play the guitar around the time she was 7 or 8, and she has gotten better ever since; her mother, on the other hand, taught her how to train her voice and sing as well as she could. It paid off in the end.
Gumi was scouted by a manager working for VOCALOID, a firm well known for its rising stars and for being the one that discovered the pop sensation Hatsune Miku. And while the man who offered the deal wasn’t from the Crypton wing, the Internet Co. wing was just as well known as the first. It was scary at first – Gumi was aware of how demanding the idol industry was, even if VOCALOID was one of the few firms that treated their stars like actual human beings, and she wasn’t sure if she was up for all the training. Thankfully, she wasn’t alone during it: there was Gakupo, who became her partner and senior, and also Lily, who joined not long after her.
A girl who never asks for too much, at least nothing outlandish beyond basic common sense and mutual respect. The idol industry is a scary place, one unsuited for someone like her, but Gumi was determined to be the best she could be by playing to her strengths: a solid voice suited to the J-Pop flair, a marketable image that is neither too innocent nor too sexy, and a relatable aura for young teenage girls. There’s a reason most of her hit songs with that demographic are love songs: she’s as much of a hopeless romantic as any other girl.
That isn’t to say she’s all silly songs and happy smiles. Everyone who knows about Megpoid Gumi is aware of how polarizing her discography can be. Sure, the J-Pop flair is always there, but the content can do a full 180° from song to song – from unrequited love songs to the depths of her inner (and maybe depressed) mind, and then back to being a hopeless romantic. Whenever she’s questioned about the duality of her songs, Gumi tries to dismiss it as nothing more than expressing herself, and thankfully no one during her interviews has ever dared to ask for more – deep inside, Gumi doesn’t feel ready, and doubts she’ll ever feel ready, to be open about that topic.
Note: Heavily headcanon based.
[ MAIN || INBOX || HEADCANONS || VISAGE || MUSINGS ]
                                 -----------------------------------------------
V001: Idol Sensation
Default main verse. An idol sensation who gained more and more popularity and a bigger fanbase as the years went by. That never changed who she is, as Gumi is still a humble girl at heart. Currently working for the company that scouted her, Internet Co., under the VOCALOID label, Gumi is determined to give it her all at all times.
V002: More Than a Synthesizer
Alternate main verse || A.I. verse. A voice synthesizer developed (mainly) for computers, but with the recently added AI functions it can now work on any device. The MEGPOID program can be activated on a mobile phone, a computer, or even the TV! With the right user, Gumi can sing in Japanese, English, and even Spanish; don't pressure her, however: she appreciates good communication with her user, and if things get complicated she'll peace out for a couple of hours.
V003: An Ordinary Girl
Alternate modern verse || High School verse. Megpoid Gumi is as normal and ordinary a girl as she can be, even with the green hair, but gifted with a beautiful singing voice. She adores singing but isn’t sure if she would want to pursue a professional career in it. For now, Gumi'd rather focus on finishing high school and participating as much as possible in the Music Club and the Drama Club.
V004: Young Adult Novice
College verse. A student and part-timer, the best of both worlds. During daytime: Gumi studies long and hard in communications, on her way to working professionally at a radio station with her own segment and everything. During nighttime: she works either the night shift at the campus coffee shop or as one of three DJs at the local disco.
V005: Her Small Journey
A Pokemon verse. Just like any other ten-year-old, once Gumi reached the age she took off to travel around the world and begin her own Pokemon journey. Some years have passed since then and, even with her frequent visits to her hometown, the young woman enjoys the sights and the long travels. Her team isn't stellar, but she loves all of her Pokemon dearly; her team is formed of:
A Turtwig
A Sylveon
A Gardevoir
A Cacturne
A Mimikyu
An Oddish.
V006: Playful Pixie
A Monster Prom verse. Pixies aren't as common to find as one would imagine. They haven't gone extinct -- they have simply gotten better at hiding, as well as at developing other abilities to help their species evolve. Gumi must be one of the very few pixies who isn't full of mischief at all times, but that doesn't mean she is without her playful moments. Her wings aren't as stand-offish as those of other pixies: simple transparent ones with a green gleam, almost crystal-like.
V007: Mint Guitars
An Eldarya verse. Far away from the mainlands of Eldarya, there was an island across the sea. This is the land of the Muses, magical humanoid beings who can live for a long time, with strong auras and a connection to the arts. Their large island has a sub-country for each specific art, with the capital at the centre ruling them all. In the land of music lived Gumi, a young Muse with a green aura which radiated the scent of fresh grass, mint and carrots; her preferred instrument is the guitar.
V008: Loud Megaphone
A My Hero Academia verse. If someone had asked Megpoid Gumi years ago where she would be standing at age 17, the last thing she would have answered is "At U.A. High School". Everyone knows that's the best high school in Japan for learning to become a hero -- she couldn't even imagine herself in Class B. And yet she was a student, a Class A student. Now in her third year, Gumi is known for her kindness and for how deadly her quirk can be.
QUIRK: VOICE. Gumi is able to manipulate the pitch of her voice, how loudly she is heard, and how ear-screeching it sounds to other people. A pretty useful quirk most of the time, except against enemies coming from below the earth. Gumi also cannot force her voice too much, or else she'll lose it for a short period of time.
V009: Mistress Glassred
tba Gumina Glassred / Gumi’s character from Madness of Duke Venomania.
                                 -----------------------------------------------
CONNECTIONS
Hatsune Mikuo :: [ Bae ]
:: Gumi ♥ The best thing I never knew I needed [ Mikuo ( kindcstguardian ) ] ::
aces-to-apples · 8 years ago
Where Sleepy Dragons Lie
Inspired by this post as well as a general desire to write an SAO AU. This is just a ficlet about the end of a side-quest, so there’s not really a whole lot going on. I hope to eventually figure out an actual story, but for right now, this is all I got.
Wordcount: 1426
Floor 55: Western Mountain
“No,” Locus said with a flat air of finality. The word echoed around the mountain summit, bouncing off the crystal-like shards of ice that seemed to grow out of the rock. He shifted slightly to the side, probably so he could keep an eye on both the path and Tucker himself, and the snow gave way beneath his stupidly high-level boots with a satisfying crunch.
“C’mon,” Tucker whined, completely ignoring said air because c’mon. “I’m a fucking dragon. It’s like a rule or something.”
Locus glared at Tucker over his own black-clad shoulder and let out a huff. “I’m neither a woman nor royalty,” he pointed out as he brushed a few flakes of snow out of his hair to little effect. “Besides, your predecessor guarded nothing and no one besides victory over the floor and some low- to mid-level loot.”
Tucker tried to give the same pouty little huff that usually had Locus giving in to whatever he was asking for at the time, but ended up triggering his ice breath-attack instead. The ex-merc barely had time to jump out of the way before the jagged burst of ice-magic struck exactly where he had been standing.
It felt kinda… tingly, if he was being honest—a bit like drinking Sprite right after eating a mint. And, of course, like he had just nearly flash-frozen his partner because he couldn’t figure out how to freaking pout while piloting a hundred-foot-long, nocturnal frost dragon. After he finally figured out how to shut off the damn, well, dam, Tucker bobbed and weaved his head around, trying to spot Locus amid the ice and snow.
He spotted the dude’s gun—a “Silent Assassin”, because he’d decided to double-down on his melodramatic choices even after leaving his crazy-ass guild—before he spotted Locus himself. Crouched low behind an outcrop of ice-crystals, the ex-merc was using the reflective surface of a particularly large, vertical shard to see if it was safe to come back out yet.
“Are you quite finished?” Locus said dryly, when it was clear that Tucker was no longer breathing deadly ice-magic.
If it was possible to duck your head when you were a big-ass, boss-level dragon, Tucker did his best to accomplish it. “Uh, yeah,” he answered sheepishly, avoiding the other man’s eye as he rolled to his feet and strolled over to pick up his sniper rifle. “Sorry about that, man. Haven’t quite got the hang of this fuckin’ thing yet.”
“I hadn’t noticed,” Locus muttered under his breath, sounding vaguely mutinous.
Tucker hoped that Church would come back soon, just in case their resident mercenary decided to go back to his usual method of problem-solving from back when he was in his guild. Charon Industries’ usual problem-solving method, of course, being murder.
As if summoned by the thought, a blaze of indigo light shone from somewhere in the ice field, creating a luminous effect as it bounced from one shard of ice to another all around the mountain summit. Immediately following the pretty light-show was the familiar, frustrated shout of “God fucking dammit!” that typically accompanied Tucker’s dubious best friend.
“You alive over there?” Tucker called, knowing that Locus didn’t like or care about Church in any way, and thus wouldn’t check.
“Oh, fuck off, wyrm-boy!” came the much-expected reply as Church painstakingly picked himself up off the frozen ground and trudged over to them, brushing snow off his light-blue guild armor while he walked.
Tucker smirked as much as he could without lips. “You know, for an in-game A.I., you kinda suck at pretty much all of it,” he remarked once said A.I. got within regular human hearing range.
Church narrowed his eyes. “So, done anything else mind-bendingly stupid while I was gone?” he spat, obviously upset by more than just Tucker’s sass.
“Um,” Tucker hedged, trying not to shuffle any of his four feet or rustle his wings guiltily while he avoided Locus’ eye and Church’s own victorious smirk. “No?” The leader of the Ronbaru Blues didn’t look convinced, so Tucker quickly changed the subject. “Anyway. You figure out if this whole thing is a game-breaking glitch or just the usual B-G-O bullshit?”
“Ah. Ahem.” This time it was Church’s turn to avoid their eyes and he did so with admirable dedication, even going so far as to manoeuvre himself so that one of Tucker’s forelegs and a wing was between himself and Locus. “Funny story about that…”
Locus said nothing, didn’t even glance Church’s way, but he did very pointedly peer through the scope and begin checking over his gun. Amused as always by how very much his two favorite people in BGO low-key despised each other, Tucker wriggled backwards and lowered his long neck until his chin rested lightly on the snow-covered ground.
“Yes?” he prompted, wondering what fresh hell Blood Gulch Online had decided to cook up today, even when “today” became “yesterday” as the dull grey sky began to subtly lighten.
Church cleared his throat and dragged the toe of his boot through the snow. “So…” he said awkwardly. “Um. Right, yeah, so. It’s kinda. Both?” Hit by an unexpected wave of exhaustion, Tucker slowly blinked one giant red eye at the scruffy-looking A.I. and waited for him to continue. Church made an aggrieved noise and did so after a few seconds, sounding more than a little embarrassed. “According to what I could dig up in the archives, whatever happened to you is a cross between an out-of-date feature of the game and a software bug, compounded by an… earlier mistake in the programming made by yours truly…”
Locus had stopped messing around with his big-ass sniper rifle and was now outright glaring at the Blue leader, which had Tucker’s brain struggling to process what was being said purely out of an instinctual need to understand what put that expression on his not-boyfriend’s face. When it clicked, he let out a snort that nearly triggered his breath-attack again.
“I’m a motherfucking dragon because the game glitched and pulled a Highlander?” he laughed, somewhere between disbelieving and genuinely entertained. It was getting harder and harder to keep his eyes open as the light got brighter, and the laughter held more than a tinge of hysteria. “And the game glitched and pulled a Highlander because you glitched and pulled a Tron?”
“Yeah, yeah, laugh it up, jackass,” he heard Church grumble, eyes now completely closed, and his voice sounded weirdly far-away. “Apparently if you kill the dragon without a weapon made out of the proper metal, a supposed-to-be-deleted passive ability of X’rphan the White drains all your hit-points to heal itself and resets you back to Floor One. But, since whenever you die you can’t just reset, the byte-sized A.I. in charge of the floor tried to fix the problem itself and decided that your consciousness would just be transferred to the dragon as well. Is… is he fucking sleeping?”
Tucker tried to pry his eyes open and show that he was paying attention, but couldn’t manage it. With great effort, he opened his mouth to say something, but was cut off by Locus’ deep rumble, and decided to just let it go in favor of listening to the sounds instead of the words.
Church had no such impulse, however, and audibly threw up his hands. “Ugh, whatever, I’ll just go get the metal from the cave and have Lopez forge the fucking sword while he’s out. Then we can poke him when he wakes up and he’ll revert back to his original form. Not that it’s a big improvement…”
The sound of their glorious leader stomping away and muttering obscenities under his breath met Tucker’s ears and he used his dragon-sensitive hearing to track the A.I.’s progress through the mountain. A second later, he heard the crunch of snow, felt the heat of Locus leaning against his shoulder, and sighed in sleepy contentment.
“Y’sure I can’t c’nvince you?” Tucker managed to slur through the fog of sleep. He could both feel and hear Locus’ rusty chuckle. “Y’d be a great princ’ss…”
“If your blacksmith isn’t a high enough level to work with the crystallite ingot,” Locus rumbled against his scales, “I will allow you to be as stereotypical as you wish until we find a master to forge an appropriate weapon.”
Tucker gave a pleased hum, and wondered if there were any decrepit castles on Floor 55 that they could haunt together, just before he was finally pulled all the way under.
tevgreenai-blog · 8 years ago
AU Selfpara: In Another Life, Another Dream
Kailey Johnson-Johar scrubbed her palm across the bridge of her nose, trying to chase away the throb of tension that had blossomed there halfway through the salad and remained through dessert and after-dinner drinks. She loved her mothers, she did, but god could they be tiring. She’d sent the kids to bed at last, kissed Dee on the cheek and said she would take care of the dishes, go relax, and let herself sink down on the plush sofa in the living room for a few minutes of respite before she got started on the clean-up. She needed the break.
She did love her mothers, but she wished the both of them loved their work a little less. Just once she’d like to have them visit without any mention of Mama-Pri’s quantum theories or Mama-Carol’s lattice matrixes; without any interrogation of the kids about their schoolwork and grades, about what kind of science their teachers were sharing with them, about GPAs and college plans. Kailey understood that they were both a little disappointed that she had gone into vanilla mathematics instead of chasing some more exciting, cutting-edge field like they had, but that was no reason to push their grandkids so hard. They weren’t even teenagers yet, and Mama-Carol was already talking about MIT’s standards for admittance! Mama-Pri was no better; she had taken her undergrad at Brown, so she considered herself “the open-minded one,” but she still made it clear that she thought any course of study that elevated theatrics or color theory over quantum physics was a waste of time.
“Mom,” Kailey had pointed out for the seventieth time at least, “it’s fifth grade, not junior year of high school. Of course everybody is more interested in the school play than in Newton’s Aerodynamics. Cut some slack, okay? Now, do you want tickets or not? Neither of the kids have a lead role, so if you want to skip it--”
To their credit, both proud grandmamas had been outraged at the very idea of passing up a chance to see their precious darlings on stage, and Kailey had smiled with grim victory as she made the note to call Ms. Wu in the morning to buy four tickets in the fourth row, not two, but she couldn’t help but remember all the times her mothers had promised to come see her plays and concerts and field hockey matches only for one or the other to cancel at the last minute because the Ebrahim-Jackson Collider had just done something unprecedented, or because the Wagman-Savage Singularity Simulation was acting up, or any of a hundred other important but still disappointing reasons. Maybe she would also call Uncle Choi and see if he and his new protégé, that sweet boy with the impossible hair whose name Kailey could never pronounce right, wanted to come to an elementary school play if she bribed them with wine and pie afterward. The odds that all four scientists would have to rush away were slim to none, and if there were six people there instead of four, the absence of one (or maybe even two) would be a lot less noticeable from the stage.
The kettle whistled and Kailey sighed with relief; a cup of mint and valerian tea would chase away this not-quite-a-headache and help her wind-down enough so she wouldn’t be up tossing and turning half the night. She didn’t want to keep Dee awake either; the art gallery was having a showing of some new talent tomorrow night so they would need all hands on deck there first thing in the morning to get all the decorations and artist-statements arranged properly. Kailey sipped the hot, drowsy drink and let herself smile. If the worst family drama she had to complain about was overly-interested grandparents, she had it good; and at least with Dee’s parents having moved back to India three years ago there was an ocean between them and any pestering they could do. No, life was pretty good, and Kailey didn’t really have anything to fret over...but of course as she’d told her mothers, the kids weren’t teenagers yet.
The Borjigin-Lavelle device booted up with the usual blinking lights and whirl of numbers flickering across the various display screens faster than the human eye could track. That didn’t matter; all the data was being recorded, was always being recorded. Operating System 3.7 cycled through its modified start-up parameters and then, as its programming dictated, said, “Query: input?”
Choi and Yasmin both groaned. “So much for colloquialisms,” spat the younger of the two, plopping her chin in her hands. Her mentor smacked her shoulder with his plastic stylus. “Enough of that!” Choi scolded. “No defeatism so early in the morning, if you please!”
Yasmin rolled her eyes but sat up straight again. “No offense, Professor Borjigin,” she said sourly, “but if you don’t want defeatism in the morning, maybe you should wait until the afternoon to boot up the creature.”
“You know I don’t like you calling it that,” Choi said, his voice mild as he leaned in close to the screens and squinted at the scrolling lines of code. In many ways what he was doing was mind-reading; at least, he was reading, and what he was reading were the contents of a mind. It was just that the mind in question was a set of programming instructions that he and Yasmin had spent the past four weeks coding. If they were running correctly, they should have told the Borjigin-Lavelle device to request input...but in a less formal, less computerized fashion. Anyone could program a computer to react to input and stimuli; what he was trying to do was program a computer to take on the brain patterns of a person. And not just a generic approximation of a person, like most A.I., but rather a specific person whose brain patterns had been downloaded and synthesized into its digital carapace. In many ways that was the easy part; it was the upload back to the -- as his new research assistant persisted in calling it -- creature that was giving him trouble and had been doing so for over thirty years now.
Yasmin had only been working on the project for the past three, after Dr. Borjigin had selected her to be his research assistant for her post-doc work at Los Alamos National Laboratory. Like the many, many research assistants he had had before, she would leave when her contract was over and move on to do her own research, maybe at Los Alamos but more likely somewhere else so she could get new experiences at other labs and with other scientists. She had hoped to be the one who would get to put her name on the final stage of the project’s success, but so far it wasn’t working out that way. That was why she had taken to calling the device “the creature” -- a sort of gallows humor, in more ways than one. Of course, she knew that that made her Igor in this story, which was a little less funny, especially when her now-ex boyfriend had pointed that out, but when you had rolled the post-doc dice and lost, you had to take your laughs where you could.
“Query: ought I to dislike being called ‘the creature’ as well?”
They both froze and turned to stare at the computer speaker from which the voice had issued. After a long, tense moment Choi muttered, “Tell me you didn’t program that response in there because you thought it would be funny?”
Yasmin shook her head. “I almost wish I had thought to,” she confessed, “that would have been hilarious.”
“Ah.” Choi did not sound amused; instead he sounded awed. “So then what you’re telling me is that, since I did not program it to ask that, and you did not program it to ask that...?”
Yasmin raised and lowered her head in a slow, slow nod. “Right,” she said. “I think...it told itself to ask that.”
“Should I repeat the query?” the program asked. “Was my statement unclear? Or my volume miscalibrated? I can increase the output.” A shrill, electronic shriek began and the speakers popped. Both scientists jumped.
“No, no,” Choi said hurriedly, waving his hands frantically toward the speaker as though to shoo away the piercing sound; Yasmin clamped her hands tight over her ears. “That is not necessary! We heard you.” A wild idea occurred to him -- was the device making jokes? Admittedly with an astonishingly dry sense of humor, but then again, the brain patterns he had digitized and downloaded had belonged to someone known for possessing a blisteringly dry sense of humor...
“Then I await your answer.”
Choi licked his lips, flashed a glance at Yasmin who shrugged, and then turned to face the speaker again. He knew that the device’s ocular senses were located in the two cameras tacked to the top of the coding screens, but some innate human urge insisted that he direct his response to the source of the sound -- the speaker -- even though he knew he was being illogical by doing so. “The answer,” he said slowly, “is that you should mind only if you prefer to be called something else.”
“Ah.” The lines of text flashed by on the monitors even faster now. After a while they slowed to the earlier, eye-blurring pace and the device spoke again: “In that case,” it said tonelessly, “I should like to be called Tev. Yes. That is good. Tev.”
“All right...Tev,” Choi said, after a pause in which Yasmin scrambled to grab an input stylus and the tablet upon which their file of prepared questions had been loaded, “do you mind if I ask you a few questions?”
“Not at all,” said the Borjigin-Lavelle device -- said Tev. “Please, go ahead.”
“Well. Good. Question one...”
4 notes · View notes
crystalsandbubbletea · 2 years ago
Text
(Small warning that this post contains swearing and mentions of death)
I wanted to write a page from Berat's journal as a coping mechanism for losing that drabble :'}
March 1, 2549
Haven't written in this old thing for quite a while now.
Today, they gave me an A.I.: A.I Program Mint.
Why the hell did they do that?
I'm an ODST; I don't think ODSTs are supposed to be given an A.I.
They told me that I need her because my mental state will go downhill.
No, I know myself well enough.
Enaya's only in a coma, thank the divines. She didn't lose her life unlike Mohana.
Poor Caveh, he didn't handle the news well when it was revealed Mohana would never wake up.
Those damn insurrectionists.
Then again, that group of insurrectionists isn't like the other insurrectionists I fought.
Their main goal?
Turmoil, despair, destruction... Basically anything negative.
Mohana was a medic, one doesn't kill a medic.
Except for those bastards...
I was relieved that Izmir and Enaya were both fine, except Enaya is in a coma.
Yekatrina told me that the chances of Enaya waking up are pretty good, but I should be prepared for when those chances drop.
I normally trust Yekatrina because she's the best damn medic I have ever met, but I won't this time. Enaya's a fighter. Even though she's a civilian, she's got that fire in her to keep going against all odds. The chances of her dying won't go up.
I'm going to have to stop writing now, Mint's pestering me about how I have a mission in one hour.
-Berat Adil Emre Yukime
(Yeah I'm definitely gonna do more things involving Berat in the future-)
0 notes
webanalytics · 8 years ago
Text
The Artificial Intelligence Opportunity: A Camel to Cars Moment
Over the last couple years, I’ve spent an increasing amount of time diving into the possibilities Deep Learning (DL) offers in terms of what we can do with Artificial Intelligence (AI). Some of these possibilities have already been realized (more on this later in the post). And, I could not be more excited to see them out in the world.
Through it all, I’ve felt there are a handful of breath-taking realities that most people are not grasping when it comes to an AI-Powered world. Why the implications are far deeper for humanity than we imagine. Why in my areas of expertise, marketing, sales, customer service and analytics, the impact will be deep and wide. Why this is not yet another programmatic moment. Why the scale at which we can (/have to) solve the problems is already well beyond the grasp of the fundamental strategy most companies follow: We have a bigger revenue opportunity, but we don’t know how to take advantage? Let’s buy more hamster wheels, hire more hamsters and train them to spin faster!
Today I want to shed some light on these whys, and a bit more. My goal is to try to cause a shift in your thinking, to get you to take a leadership role in taking advantage of this opportunity both at a personal and professional level.
I’ve covered AI earlier: Artificial Intelligence: Implications On Marketing, Analytics, And You. You’ll learn all about the Global Maxima, definitions of AI/ML/DL, and the implications related to the work we do day to day. If you’ve not read that post, I do encourage you to do so as it will have valuable context.
In this post, I’ve organized my thoughts into these six clusters:
1: What’s the BFD? 2: Wait. So are we “doomed”? 3: AI: A conversation with a skeptic. 4: Ok, ok, ok, but what about the now? (Professional) 5: Ok, ok, ok, but what about the now? (Personal) 6: Summary.
There is a deliberate flow to this post, above. If you are going to jump around, it is ok, but please be sure to read the section below first. You won’t regret it.
Ready to have your mind stretched? Let’s go!
What’s the BFD?
I’m really excited about what’s in front of us. When I share that excitement in my keynotes or an intimate discussion with a company’s board of directors, I make sure I stress two especially powerful concepts that I have come to appreciate about the emerging AI solutions: Collective Continuous Learning + Complete Day One Knowledge.
They are crucial in being able to internalize the depth and breadth of the revolution, and why the strengths AI brings are a radical shift beyond what humans are capable of.
The first eye-opening learning for me came from the Google Research team’s post on Learning from Large-Scale Interaction.
Most robots are very robotic because they follow a sense-plan-act paradigm. This limits the types of things they are able to do, and as you might have seen their movements are deliberate. The team at Google adopted the strategy of having a robot learn on its own (rather than programming it with pre-configured models).
The one-handed robots in this case had to learn to pick up objects.
Initially the grasping mechanism was completely random – try to imagine a baby who barely knows they even have a hand at the end of their arm. Hence, as you’ll see in the video below, they rarely succeed at the task at hand. ;)
At the end of each day, the data was collected and used to train a deep convolutional neural network (CNN), to learn to predict the outcome of each grasping motion. These learnings go back to the robot and improve its chances of success.
Here’s the video…
youtube
(Play on YouTube)
It took just 3,000 robot-hours of practice to see the beginnings of intelligent behavior.
What’s intelligent behavior of a CNN powered one-handed robot?
Among other things, being able to isolate one object (a stapler) to successfully pick up a Lego piece. You’ll see that at 15 seconds in this video…
youtube
(Play on YouTube)
Or, learning how to pick up different types of objects (a soft dishwashing sponge, a blackboard eraser, a water glass, etc.).
I felt a genuine tingling sensation just imagining a thing not knowing something and it being able to simply learn. I mean pause. Just think about it. It started from scratch – like a baby – and then just figured it out. Pretty damn fast. It truly is mind-blowing.
There were two lessons here. The first related to pure deep learning and its amazingness, I was familiar with this one. The second was something new (for me). This experiment involved 14 one-handed robot arms. While not a massive number, the 14 were collectively contributing data from the start – with their many failures. The end of day learnings by the convolutional neural network were using all 14. And, the next day, all 14 started again with this new level of collective wisdom.
For a clear way for me to capture this lesson, I call this Collective Learning.
It is very powerful.
Think of 14 humans learning a new task. Peeling an apple. Or, laying down track for a railroad. Or, programming a new and even more frustrating in-flight entertainment menu for Air Canada (who have the worst one known to mankind).
Every human will do it individually as well as they can – there will be the normal bell curve of competency. It is entirely possible, if there are incentives to do so, that the humans who are better in the group will try to teach others. There will be great improvement if the task is repetitive and does not require imagination/creativity/intrinsic intelligence. There might be a smaller improvement if the task is not repetitive and requires imagination/creativity/intrinsic intelligence.
In neither case will there be anything close to Collective Learning when it comes to humans.
Humans also do not possess this continuous closed loop: Do something. Check outcome (success or failure). Actively learn from either, improve self. Do something better the next time.
Collective Continuous Learning. An incredible advantage that I had simply not thought through deeply enough.
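The collective, continuous loop described above is easy to miniaturize in code. The sketch below is a toy stand-in, not Google's actual setup: a NumPy logistic model plays the role of the deep CNN, and three made-up "grasp features" stand in for camera images. The names (`attempt_grasps`, `retrain`) and all the numbers are illustrative. What it does capture is the key mechanic: every arm writes to one shared log, and each end-of-day retrain means the next day's grasping starts from the pooled experience.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_W = np.array([1.5, -2.0, 0.8])  # hidden "physics" linking features to success

def attempt_grasps(weights, n=200):
    """One day of grasping: propose candidates, try the most promising half."""
    feats = rng.normal(size=(n, 3))              # stand-in for image/motion features
    chosen = feats[np.argsort(feats @ weights)[-n // 2:]]
    p_success = 1 / (1 + np.exp(-(chosen @ TRUE_W)))
    return chosen, (rng.random(n // 2) < p_success).astype(float)

def retrain(feats, outcomes, weights, lr=0.1, steps=500):
    """End-of-day update: fit a logistic model to the pooled grasp log."""
    for _ in range(steps):
        pred = 1 / (1 + np.exp(-(feats @ weights)))
        weights = weights - lr * feats.T @ (pred - outcomes) / len(outcomes)
    return weights

weights = np.zeros(3)                            # day one: essentially random grasping
log_x, log_y = [], []
for day in range(5):
    x, y = attempt_grasps(weights)
    log_x.append(x); log_y.append(y)             # every arm contributes to one shared log
    weights = retrain(np.vstack(log_x), np.concatenate(log_y), weights)
    print(f"day {day}: success rate {y.mean():.2f}")
```

Nothing is reset between days: the model that starts day two already contains everything learned on day one, by every arm at once, which is the part no group of humans can replicate.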
Here’s the second BFD.
Machine Learning is already changing lots of fields; the one I’m most excited about is what’s happening in healthcare. From the ability to speed up discovery of new medicines to the unbelievable speed with which Machine Learning techniques are becoming particularly adept at diagnosis (think blood reports, X-rays, cancers, etc.).
An example I love. 415 million diabetic patients worldwide are at risk of Diabetic Retinopathy (DR) – the fastest growing cause of blindness. If caught early, the disease is completely treatable. The problem? Medical specialists capable of detecting DR are rare in many parts of the world where diabetes is prevalent.
Using a dataset of 128,000 images, Google’s Accelerated Science Team trained a deep neural network to detect DR from retinal photographs. The results delivered by the algorithm (black curve) were slightly better than those of the expert ophthalmologists (colored dots)…

Specifically, the algorithm had an F-score of 0.95, while the median F-score of the eight expert ophthalmologists was 0.91.
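For context, the F-score quoted here is the harmonic mean of precision and recall, so a single number summarizes a trade-off between the two. The operating points below are made-up numbers chosen to illustrate that, not figures from the study:

```python
def f_score(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# A balanced operating point...
print(round(f_score(0.95, 0.95), 3))    # 0.95
# ...and an unbalanced one with nearly the same F1
print(round(f_score(0.976, 0.925), 3))  # 0.95
```

Two systems with the same F-score can behave quite differently (one missing more cases, the other raising more false alarms), which matters when comparing an algorithm against clinicians.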
As richer datasets become available for the neural network to learn from, as 3D imaging technology like Optical Coherence Tomography becomes available all over the world to provide more detailed view of the retina, just imagine how transformative the impact will be.
Literally millions upon millions of people at risk of blindness will have access to AI-Powered technology that can create a different outcome for their life  – and their families.
#omg
A recent incredible article on this topic is in my beloved New Yorker magazine: A.I. VERSUS M.D. You *should* read it. I’ll jump to a part of the article that altered my imagination of possibilities.
An algorithm created by Sebastian Thrun, Andre Esteva and Brett Kuprel can detect keratinocyte carcinoma (a type of skin cancer) by looking at images of the skin (acne, a rash, mole etc.). In June 2015 it got the right answer 72% of the time, two board-certified dermatologists got the right answer for the same images 66% of the time.
Since then, as they outlined in their report published in the prestigious journal Nature, the algorithm has gotten smarter across even more skin cancer types – and consistently performs better than dermatologists.
Most cancers are fatal because they are detected too late; just imagine the transformative impact of this algorithm sitting in the cloud, easily accessible to all humanity via their five billion smartphones. This is a dream come true: low-cost universal access to vital diagnostic care.
Oh, and here’s a profoundly under-appreciated facet of all this. These health algorithms (including and beyond the one above), are incredible at corner cases, the rare long-tail anomalies. They don’t forget what they have seen once or “rarely.”
This is just a little bit of context for the key point.
A dermatologist in a full-time practice will see around 200,000 cases during her/his lifetime. With every case she sees, she’ll ideally add to her knowledge and grow her diagnostic skills.
Our very human problem is that every new dermatology resident starts almost from scratch. Some textbooks might be updated (while comfortably remaining a decade or more behind). Some new techniques – machines, analytical strategies – might be accessible to the resident. But, the depth and breadth of knowledge acquired by the dermatologist at the end of her career with 200k cases is almost completely inaccessible to the new resident. Even if they do a residency at a hospital or with an old dermatologist, a newly minted dermatologist will only be a little better than when the old one left school.
Consider this instead: The algorithm above processed 130,000 cases in three months! And every day it will get smarter as it’ll have access to the latest (and more) data. Here though is the magical bit. Every single new algorithm we bring online will have total access to all knowledge from previous algorithms! Its starting point will be what I call Complete Day One Knowledge.
As it gets more data to learn from, as it has access to more compute power, it will get smarter and build upon that complete knowledge. The next version of the algorithm will start with this new high mark.
There is nothing equivalent to Complete Day One Knowledge when it comes to humans.
Combine having Complete Day One Knowledge with Collective Continuous Learning (networked hardware or software all learning at the same time) and it should take you five seconds to realize that we are in a new time and place.
Whatever form AI takes, it will always have access to complete knowledge and through the network each instance will make all others smarter every single instance/moment of its existence.
Humans simply can’t compete.
That’s the BFD.
Stop. Think. If you disagree even slightly, scroll back up and read the post again.
It is imperative that you get this not because of what will happen in 10 years, but what is happening today to the job you have. If you still disagree, scroll down and post a comment, I would love to hear your perspective and engage in a conversation.
Bonus 1: There is an additional valuable lesson related to open-loop grasp selection and blindly executing it vs. incorporating continuous feedback (50% reduction in failure rates!). The two videos are worth watching to see this in action.
Bonus 2: While we are on the subject of objects… Relational reasoning is central to human intelligence. DeepMind has had recent success in building a simple neural network module for relational reasoning. This progress is so very cool. Additionally, I was so very excited about the Visual Interaction Network they built to mimic a human’s ability to predict. (If you kick a ball against the wall, your brain predicts what will happen when the ball hits the wall.) The article is well worth reading: A neural approach to relational reasoning. Success here holds fantastic possibilities.
Wait. So are we “doomed”?
It depends on what you mean by doomed but: Yes. No. Yes, totally.
Artificial Intelligence will hold a massive advantage over humans in the coming years.
In field after field due to Collective Continuous Learning and Complete Day One Knowledge (not to mention advances in deep learning techniques and hardware :)), AI will be better at frequent high-volume tasks.
Hence, the first yes.
Neuralink at the moment is a concept (implantable brain-computer interface). But many experts (like Ray Kurzweil) believe some type of connection between our human brain and “intelligence, data, compute power in the cloud” will be accessible to humans.
I humbly believe that when that happens, over the next few decades (think 2050), humans could get to parity with AI available at that time. We might even have an advantage for some time (if only because I can’t let go of the thought that our brains are special!).
Hence, the no.
As we head towards the second half of the current century, AI will regain the lead again – and keep it for good. I don’t have the competency to judge if that will be AGI or Superintelligence or some other variation. But, with all other computing factors changing at an exponential rate, it is impossible that intelligence will not surpass the limitations of humans and human brains (including the one with a version of Neuralink).
Here’s just one data-point from Jurgen Schmidhuber: Neural networks we are using for Deep Learning at the moment have around a billion neural connections compared with around 100,000 billion in the human cortex. Computers are getting 10 times faster every 5 years, and unless that trend breaks, it will only take 25 years until we have a recurrent neural network comparable with the human brain. Just 25 years.
Hence, the yes totally.
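The arithmetic behind that 25-year figure is worth making explicit: the quoted gap is roughly five orders of magnitude (10^9 vs. 10^14 connections), and "10 times faster every 5 years" closes one order of magnitude every five years:

```python
import math

current = 1e9           # neural connections in today's deep nets (per the quote)
target = 1e14           # "100,000 billion" connections in the human cortex
growth = 10 ** (1 / 5)  # 10x every 5 years = one order of magnitude per 5 years

years = math.log(target / current, growth)
print(round(years))     # 25
```

Of course the estimate is only as good as its two inputs; if hardware growth slows, or raw connection count turns out to be the wrong measure of brain-comparable capacity, the timeline moves.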
I have a personal theory as to what happens to humans as we look out 150 – 200 years. It is not relevant to this post. But, if you are curious, please ask me next time you see me. (Or, sign up for my weekly newsletter: The Marketing < > Analytics Intersect)
AI: A conversation with a skeptic.
Surely some of you think, to put it politely, that I’m a little bit out there. Some of you’ve heard the “hype” before and are deeply skeptical (AI went through a two decade long tundra where it failed to live up to every promise, until say 2010 or so). Some of you were promised Programmatic was AI and all it did was serve crap more efficiently at scale!
I assure you, skepticism is warranted.
Mitch Joel is the Rock Star of Digital Marketing, brilliant on the topic of media, and a very sweet human being. Amongst his many platforms is a fantastic podcast called Six Pixels of Separation. Our 13th podcast together was on AI. Mitch played the role of the resident skeptic and I played the role of, well, the role you see me play here.
If you can think of a skeptical question on this topic, Mitch asked it. Give the podcast a listen…
(Play at Six Pixels of Separation)
As you’ll hear multiple times, a bunch of this is a matter of thinking differently about the worldview that we’ve brought with us thus far. I share as many examples and metaphors as I could to assist you in a journey that requires you to think very differently.
If you are still skeptical about something, please express it via comments below. Within the bounds of my competency, I’ll do my best to provide related context.
Ok, ok, ok, but what about the now? (Professional)
While I look at the future with optimism (even 150 years out for humans), what I’m most excited about is what Machine Learning and Deep Learning can do for us today. There are so many things that are hard to do, opportunities we don’t even know exist, the ability to make work that sucks the life out of you easier, better, smarter, or gone.
In a recent edition of my newsletter, TMAI, I’d shared a story and a call to arms with specific recommendations of what to do now. I’ll share it with you all here with the hope that you’ll jump-start your use of Machine Learning today…
I lived in Saudi Arabia for almost three years. Working at DHL was a deeply formative professional experience. My profound love of exceptional customer service, and outrage at awful customer experiences, can be directly sourced to what I learned there.
Saudi Arabia is a country that saw massively fast modernization. In just a few years, the country went from camels to cars. (I only half-jokingly say that Saudis still ride their cars like camels – and it was scary!).
Think about it for a moment.
From camels to cars. No bicycles. No steam engines. None of the other in-betweens other parts of the world systematically went through to get to cars. They were riding camels, then they were riding cars. Consider all the implications.
We stand at just such a moment in time in the business world. You know just how immersed and obsessed I am with Artificial Intelligence and the implications on marketing and analytics. It truly is a camels to cars type moment in my humble opinion (it might even be a camels to rockets moment, but let me be conservative).
Yet, executives will often give me examples of things they are doing, and they feel satisfied that they are with it, they are doing AI. When I probe a bit, it becomes clear very quickly that all they are doing is making the camels they are riding go a little faster.
That all by itself is not a bad thing – they are certainly moving faster. The problem is they are completely missing the opportunity to get in the car (and their competitors are already in cars).
It is important to know the difference between the two – for the sake of job preservation and company survival.
Here are a handful of examples to help you truly deeply internalize the difference between these two critical strategies…
If you are moving from last-click attribution to experimenting with first-click or time-decay, this is trying to make your camel go faster. Using ML-Powered Data-Driven Attribution and connecting it with your AdWords account so that action can be taken based on DDA recommendations automatically, you are riding a car.
(More on this: Digital Attribution's Ladder of Awesomeness)
If you are moving to experimenting with every button and dial you can touch in AdWords so that you can understand how everything works and you can prove increase in conversions while narrowly focusing on a few keywords, you are making your camel go faster. Switching to ML-powered Smart Targeting, Smart Creative and Smart Bidding with company Profit as the success criteria, for every relevant keyword identified automatically by the algorithm, you are riding a car.
Staffing up your call center to wait for calls from potential customers is making your camel go faster. Creating a neural-network that analyzes all publicly available data of companies to identify which ones are going to need to raise debt, and proactively calling them to pitch your company's wonderful debt-financing services is riding a car.
Hand picking sites to show your display ads via a x by x spreadsheet that is lovingly massaged and now has new font and one more column on Viewability, is making your camel go faster. Leveraging Machine Learning to algorithmically figure out where your ad should show by analyzing over 5,000 signals in real time for Every Single Human based on human-level understanding (die cookies die!), is riding a fast car.
(To see a delightful rant on the corrosive outcomes from a Viewability obsession, and what you might be sweeping under the carpet, see TMAI #64 with the story from P&G.)
Asking your Analysts to stop puking data, sorry I mean automate reporting, and send insights by merging various data sets is making the camel go faster. Asking your Analysts to send you just the Actions and the Business Impact from those Actions is riding a car. Asking them to shift to using ML-powered products like Analytics Intelligence in GA to identify the unknown unknowns and connecting that to automated actions is riding a rocket.
If you are explicitly programming your chatbot with 100 different use cases and fixed paths to follow for each use case to improve customer service, that is making the camel go faster. If you take the datasets in your company around your products, problems, solutions, past successful services, your competitors products, details around your users, etc. etc. and feed it to a deep learning algorithm that can learn without explicit programming how to solve your customer's service issues, you are riding a car.
I, literally, have 25 more examples… But, you catch my drift.
I do not for one moment believe that this will be easy, or that you'll get a welcome reception when you present the answer. But, one of two extremely positive outcomes will happen:
1. You'll get permission from your management team to stop wasting time with getting the camel to go faster, and they'll empower you to do something truly worth doing for your company. Or…
2. You'll realize that this company is going to suck the life out of your career, and you'll quietly look for a new place to work where your life will be filled with meaning and material impact.
Win-Win.
Hence, be brutally honest. Audit your current cluster of priorities against the bleeding edge of possible. Then answer this question: Are you trying to make your camel go faster, or jumping on to a car?
While Machine Learning has not solved world hunger yet, and AGI is still years away, there are business-altering solutions in the market today waiting for you to use them to create a sustainable competitive advantage.
Ok, ok, ok, but what about the now? (Personal)
If this post has not caused you to freak out a tiny bit about your professional path, then I will have failed completely. After all, how can the huge amount of change mentioned above be happening, and your job/career not be profoundly impacted?
You and I have a small handful of years when we can create a personal pivot through an active investment of our time, energy and re-thinking. If we miss this small window of opportunity, I feel that the choice will be made for us.
This blog is read by a diverse set of people in a diverse set of roles. It would be difficult to be personal in advice/possibilities for each individual.
Instead, here’s a slide I use to share a collection of distinct thought during my speaking engagements on this topic…
In orange is a summary of what “Machines” and humans will be optimally suited for in the near-future. (Note the for now.) Frequent high-volume tasks vs. tackling novel situations.
In green, I’m quoting Carlos Espinal. I loved how simply and beautifully he framed what I imagine when I say tackle novel situations.
Over the last 24 months, I’ve made a whole collection of conscious choices to move my professional competencies to the right of the blue line. That should give me a decade plus, maybe more if Ray is right about Cloud Accessible Intelligence. Beyond that, everything’s uncertain. :)
Summary.
I hope you noticed I ended the above paragraph with a smiley. I’m inspired by the innovation happening all around us, and how far and wide it is being applied. I am genuinely excited about the opportunities in front of us, and the problems we are going to solve for us as individuals, for our businesses, for our fellow humans and for this precious planet.
In my areas of competence, marketing, analytics, service and sales, I can say with some experience that change is already here, and much bigger change is in front of us. (I share with Mitch above how long I think Analysts, as they are today, will be around.) I hope I’ve convinced you to take advantage of it for your personal and professional glory.
(All this also has a huge implication on our children. If you have kids, or play an influencing role in the life of a child, I’d shared my thoughts here: Artificial Intelligence | Future | Kids)
The times they are a changin'.
Carpe diem!
As always, it is your turn now.
Were Collective Learning and Complete Day One Knowledge concepts you’d already considered in your analysis of AI? Are there other concepts you’ve identified? Do you think we are doomed? Is your company taking advantage of Deep Neural Networks for marketing or analytics or to draw new value from your core back-office platforms? What steps have you taken in the last year to change the trajectory of your career?
Please share your insights, action-plans, critique, and outlandish predictions for the future of humanity, :), via comments below.
Thank you.
The Artificial Intelligence Opportunity: A Camel to Cars Moment is a post from: Occam's Razor by Avinash Kaushik
0 notes
nathandgibsca · 8 years ago
Text
The Artificial Intelligence Opportunity: A Camel to Cars Moment
Over the last couple years, I’ve spent an increasing amount of time diving into the possibilities Deep Learning (DL) offers in terms of what we can do with Artificial Intelligence (AI). Some of these possibilities have already been realized (more on this later in the post). And, I could not be more excited to see them out in the world.
Through it all, I’ve felt there are a handful of breath-taking realities that most people are not grasping when it comes to an AI-Powered world. Why the implications are far deeper for humanity than we imagine. Why in my areas of expertise, marketing, sales, customer service and analytics, the impact will be deep and wide. Why this is not yet another programmatic moment. Why the scale at which we can (and have to) solve problems is already well beyond the grasp of the fundamental strategy most companies follow: We have a bigger revenue opportunity, but we don’t know how to take advantage of it? Let’s buy more hamster wheels, hire more hamsters and train them to spin faster!
Today I want to shed some light on these whys, and a bit more. My goal is to try to cause a shift in your thinking, to get you to take a leadership role in taking advantage of this opportunity at both a personal and professional level.
I’ve covered AI earlier: Artificial Intelligence: Implications On Marketing, Analytics, And You. You’ll learn all about the Global Maxima, definitions of AI/ML/DL, and the implications related to the work we do day to day. If you’ve not read that post, I do encourage you to do so as it will have valuable context.
In this post, I’ve organized my thoughts into these six clusters:
1: What’s the BFD? 2: Wait. So are we “doomed”? 3: AI: A conversation with a skeptic. 4: Ok, ok, ok, but what about the now? (Professional) 5: Ok, ok, ok, but what about the now? (Personal) 6: Summary.
There is a deliberate flow to this post, above. If you are going to jump around, it is ok, but please be sure to read the section below first. You won’t regret it.
Ready to have your mind stretched? Let’s go!
What’s the BFD?
I’m really excited about what’s in front of us. When I share that excitement in my keynotes or an intimate discussion with a company’s board of directors, I make sure I stress two especially powerful concepts that I have come to appreciate about the emerging AI solutions: Collective Continuous Learning + Complete Day One Knowledge.
They are crucial to internalizing the depth and breadth of the revolution, and why the strengths AI brings represent a radical shift beyond what humans are capable of.
The first eye-opening learning for me came from the Google Research team’s post on Learning from Large-Scale Interaction.
Most robots are very robotic because they follow a sense-plan-act paradigm. This limits the types of things they are able to do, and as you might have seen, their movements are deliberate. The team at Google adopted the strategy of having a robot learn on its own (rather than programming it with pre-configured models).
The one-handed robots in this case had to learn to pick up objects.
Initially the grasping mechanism was completely random – try to imagine a baby who barely knows they even have a hand at the end of their shoulder. Hence, you’ll see in the video below, they rarely succeed at the task at hand. ;)
At the end of each day, the data was collected and used to train a deep convolutional neural network (CNN), to learn to predict the outcome of each grasping motion. These learnings go back to the robot and improve its chances of success.
Here’s the video…
youtube
(Play on YouTube)
It took just 3,000 robot-hours of practice to see the beginnings of intelligent behavior.
What’s intelligent behavior of a CNN powered one-handed robot?
Among other things, being able to isolate one object (a stapler) in order to successfully pick up a Lego piece. You’ll see that at 15 seconds in this video…
youtube
(Play on YouTube)
Or, learning how to pick up different types of objects (a soft dish-washing sponge, a blackboard eraser, a water glass, etc.).
I felt a genuine tingling sensation just imagining a thing not knowing something and it being able to simply learn. I mean pause. Just think about it. It started from scratch – like a baby – and then just figured it out. Pretty damn fast. It truly is mind-blowing.
There were two lessons here. The first related to pure deep learning and its amazingness, I was familiar with this one. The second was something new (for me). This experiment involved 14 one-handed robot arms. While not a massive number, the 14 were collectively contributing data from the start – with their many failures. The end of day learnings by the convolutional neural network were using all 14. And, the next day, all 14 started again with this new level of collective wisdom.
For a clear way for me to capture this lesson, I call this Collective Learning.
It is very powerful.
Think of 14 humans learning a new task. Peeling an apple. Or, laying down track for a railroad. Or, programming a new and even more frustrating in-flight entertainment menu for Air Canada (who have the worst one known to mankind).
Every human will do it individually as well as they can – there will be the normal bell curve of competency. It is entirely possible, if there are incentives to do so, that the humans who are better in the group will try to teach others. There will be great improvement if the task is repetitive and does not require imagination/creativity/intrinsic intelligence. There might be a smaller improvement if the task is not repetitive and requires imagination/creativity/intrinsic intelligence.
In neither case will there be anything close to Collective Learning when it comes to humans.
Humans also do not possess this continuous closed loop: Do something. Check the outcome (success or failure). Actively learn from either, improve self. Do something better the next time.
Collective Continuous Learning. An incredible advantage that I had simply not thought through deeply enough.
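The closed loop described above can be sketched in a few lines of code. This is a deliberately toy, hypothetical simulation (nothing like Google's actual convolutional network): fourteen "robots" attempt a task whose success depends on a hidden target value, every attempt feeds one shared model, and the refreshed model goes back to all fourteen the next "day."

```python
import random

# Toy sketch of Collective Continuous Learning (hypothetical, NOT the real
# system): 14 "robots" attempt a task whose success depends on a hidden
# target. Every attempt, success or failure, feeds ONE shared model, and
# the refreshed model is redeployed to all 14 the next "day".

random.seed(0)
TARGET = 0.7        # hidden "correct" grasp parameter (normalized)
N_ROBOTS = 14

def attempt(guess):
    # An attempt succeeds only if the guess lands close to the hidden target.
    return abs(guess - TARGET) < 0.05

shared_estimate = 0.0   # the collective model's current best guess
pooled_data = []        # experience pooled across ALL robots, every day

for day in range(50):
    for robot in range(N_ROBOTS):
        # Each robot explores around the current collective estimate.
        guess = shared_estimate + random.uniform(-1.0, 1.0)
        pooled_data.append((guess, attempt(guess)))
    # "End of each day": retrain the shared model on everyone's successes.
    successes = [g for g, ok in pooled_data if ok]
    if successes:
        shared_estimate = sum(successes) / len(successes)

# Every recorded success lies within 0.05 of TARGET, so the shared
# estimate settles near 0.7 – powered by everyone's failures and wins.
print(round(shared_estimate, 1))
```

The point of the sketch is the data flow, not the model: no single robot could learn this fast alone, because each one contributes only a fourteenth of the pooled experience.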
Here’s the second BFD.
Machine Learning is already changing lots of fields; the one I’m most excited about is what’s happening in healthcare. From the ability to speed up discovery of new medicines to the unbelievable speed with which Machine Learning techniques are becoming particularly adept at diagnosis (think blood reports, X-rays, cancers, etc.).
An example I love: 415 million diabetic patients worldwide are at risk of Diabetic Retinopathy (DR) – the fastest-growing cause of blindness. If caught early, the disease is completely treatable. The problem? Medical specialists capable of detecting DR are rare in many parts of the world where diabetes is prevalent.
Using a dataset of 128,000 images, Google’s Accelerated Science Team trained a deep neural network to detect DR from retinal photographs. The results delivered by the algorithm (black curve) were slightly better than those of expert ophthalmologists (colored dots)…
Specifically, the algorithm has an F-score of 0.95, while the median F-score of the eight expert ophthalmologists was 0.91.
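For readers who haven't met the metric: the F-score is the harmonic mean of precision and recall. A quick sketch – the precision/recall inputs below are purely illustrative, chosen only to reproduce the reported scores (the study's actual confusion matrices are not given here):

```python
def f_score(precision, recall):
    # F1: the harmonic mean of precision and recall. It rewards balance –
    # a model can't buy a high F-score with recall (or precision) alone.
    return 2 * precision * recall / (precision + recall)

# Illustrative inputs only; when precision == recall, F equals that value.
algorithm_f = f_score(precision=0.95, recall=0.95)
experts_f   = f_score(precision=0.91, recall=0.91)
print(round(algorithm_f, 2), round(experts_f, 2))  # prints: 0.95 0.91
```

A 0.04 gap sounds small, but in a screening context it translates into many thousands of cases caught or missed at this patient population's scale.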
As richer datasets become available for the neural network to learn from, as 3D imaging technology like Optical Coherence Tomography becomes available all over the world to provide more detailed view of the retina, just imagine how transformative the impact will be.
Literally millions upon millions of people at risk of blindness will have access to AI-Powered technology that can create a different outcome for their lives – and their families’.
#omg
A recent incredible article on this topic is in my beloved New Yorker magazine: A.I. VERSUS M.D. You *should* read it. I’ll jump to a part of the article that altered my imagination of possibilities.
An algorithm created by Sebastian Thrun, Andre Esteva and Brett Kuprel can detect keratinocyte carcinoma (a type of skin cancer) by looking at images of the skin (acne, a rash, a mole, etc.). In June 2015 it got the right answer 72% of the time; two board-certified dermatologists got the right answer for the same images 66% of the time.
Since then, as they outlined in their report published in the prestigious journal Nature, the algorithm has gotten smarter across even more skin cancer types – and consistently performs better than dermatologists.
Most cancers are fatal because they are detected too late; just imagine the transformative impact of this algorithm sitting in the cloud, easily accessible to all humanity via their five billion smartphones. It is a dream come true: low-cost, universal access to vital diagnostic care.
Oh, and here’s a profoundly under-appreciated facet of all this. These health algorithms (including and beyond the one above), are incredible at corner cases, the rare long-tail anomalies. They don’t forget what they have seen once or “rarely.”
This is just a little bit of context for the key point.
A dermatologist in a full-time practice will see around 200,000 cases during her/his lifetime. With every case she sees, she’ll ideally add to her knowledge and grow her diagnostic skills.
Our very human problem is that every new dermatology resident starts almost from scratch. Some textbooks might be updated (while comfortably remaining a decade or more behind). Some new techniques – machines, analytical strategies – might be accessible to the resident. But the depth and breadth of knowledge acquired by the dermatologist at the end of her career, with 200k cases, is almost completely inaccessible to the new resident. Even after a residency at a hospital or under a veteran dermatologist, a newly minted dermatologist will only be a little better than the veteran was when she left school.
Consider this instead: the algorithm above processed 130,000 cases in three months! And every day it will get smarter, as it’ll have access to the latest (and more) data. Here, though, is the magical bit. Every single new algorithm we bring online will have total access to all knowledge from previous algorithms! Its starting point will be, what I call, Complete Day One Knowledge.
As it gets more data to learn from, as it has access to more compute power, it will get smarter and build upon that complete knowledge. The next version of the algorithm will start with this new high mark.
There is nothing equivalent to Complete Day One Knowledge when it comes to humans.
Combine having Complete Day One Knowledge with Collective Continuous Learning (networked hardware or software all learning at the same time) and it should take you five seconds to realize that we are in a new time and place.
Whatever form AI takes, it will always have access to complete knowledge and through the network each instance will make all others smarter every single instance/moment of its existence.
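The contrast with the human resident can be sketched in a toy, hypothetical way – a dictionary of case counts stands in for real model weights. Generation two of the "algorithm" inherits everything generation one learned, where a human resident would restart near zero:

```python
def train(weights, new_cases):
    # Toy "training": fold each newly seen case into the accumulated
    # knowledge, without disturbing anything learned before.
    updated = dict(weights)
    for diagnosis in new_cases:
        updated[diagnosis] = updated.get(diagnosis, 0) + 1
    return updated

# Generation 1 learns from scratch, like a first resident.
gen1 = train({}, ["melanoma", "carcinoma", "carcinoma"])

# Generation 2 starts from ALL of gen1's knowledge – Complete Day One
# Knowledge – and then keeps learning, including the rare long-tail case.
gen2 = train(gen1, ["rare_lesion"])

print(gen2)  # {'melanoma': 1, 'carcinoma': 2, 'rare_lesion': 1}
```

In real systems the "dictionary" is a trained network checkpoint, but the inheritance mechanic is the same: nothing the previous generation saw is lost at the handoff.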
Humans simply can’t compete.
That’s the BFD.
Stop. Think. If you disagree even slightly, scroll back up and read the post again.
It is imperative that you get this not because of what will happen in 10 years, but what is happening today to the job you have. If you still disagree, scroll down and post a comment, I would love to hear your perspective and engage in a conversation.
Bonus 1: There is an additional valuable lesson related to open-loop grasp selection and blindly executing it vs. incorporating continuous feedback (50% reduction in failure rates!). The two videos are worth watching to see this in action.
Bonus 2: While we are on the subject of objects… Relational reasoning is central to human intelligence. DeepMind has had recent success in building a simple neural network module for relational reasoning. This progress is so very cool. Additionally, I was so very excited about the Visual Interaction Network they built to mimic a human’s ability to predict. (If you kick a ball against a wall, your brain predicts what will happen when the ball hits the wall.) The article is well worth reading: A neural approach to relational reasoning. Success here holds fantastic possibilities.
Wait. So are we “doomed”?
It depends on what you mean by doomed but: Yes. No. Yes, totally.
Artificial Intelligence will hold a massive advantage over humans in the coming years.
In field after field due to Collective Continuous Learning and Complete Day One Knowledge (not to mention advances in deep learning techniques and hardware :)), AI will be better at frequent high-volume tasks.
Hence, the first yes.
Neuralink at the moment is a concept (implantable brain-computer interface). But many experts (like Ray Kurzweil) believe some type of connection between our human brain and “intelligence, data, compute power in the cloud” will be accessible to humans.
I humbly believe that when that happens, over the next few decades (think 2050), humans could get to parity with AI available at that time. We might even have an advantage for some time (if only because I can’t let go of the thought that our brains are special!).
Hence, the no.
As we head towards the second half of the current century, AI will regain the lead – and keep it for good. I don’t have the competency to judge if that will be AGI or Superintelligence or some other variation. But, with all other computing factors changing at an exponential rate, it is impossible that machine intelligence will not surpass the limitations of humans and human brains (including the one with a version of Neuralink).
Here’s just one data-point from Jurgen Schmidhuber: Neural networks we are using for Deep Learning at the moment have around a billion neural connections compared with around 100,000 billion in the human cortex. Computers are getting 10 times faster every 5 years, and unless that trend breaks, it will only take 25 years until we have a recurrent neural network comparable with the human brain. Just 25 years.
Hence, the yes totally.
I have a personal theory as to what happens to humans as we look out 150 – 200 years. It is not relevant to this post. But, if you are curious, please ask me next time you see me. (Or, sign up for my weekly newsletter: The Marketing < > Analytics Intersect)
AI: A conversation with a skeptic.
Surely some of you think, to put it politely, that I’m a little bit out there. Some of you have heard the “hype” before and are deeply skeptical (AI went through a two-decade-long winter where it failed to live up to every promise, until, say, 2010 or so). Some of you were promised that Programmatic was AI, and all it did was serve crap more efficiently at scale!
I assure you, skepticism is warranted.
Mitch Joel is the Rock Star of Digital Marketing, brilliant on the topic of media, and a very sweet human being. Amongst his many platforms is a fantastic podcast called Six Pixels of Separation. Our 13th podcast together was on AI. Mitch played the role of the resident skeptic and I played the role of, well, the role you see me play here.
If you can think of a skeptical question on this topic, Mitch asked it. Give the podcast a listen…
(Play at Six Pixels of Separation)
As you’ll hear multiple times, a bunch of this is a matter of thinking differently about the worldview that we’ve brought with us thus far. I share as many examples and metaphors I could to assist you in a journey that requires you to think very differently.
If you are still skeptical about something, please express it via comments below. Within the bounds of my competency, I’ll do my best to provide related context.
Ok, ok, ok, but what about the now? (Professional)
While I look at the future with optimism (even 150 years out for humans), what I’m most excited about is what Machine Learning and Deep Learning can do for us today. There are so many things that are hard to do, opportunities we don’t even know exist, the ability to make work that sucks the life out of you easier, better, smarter, or gone.
In a recent edition of my newsletter, TMAI, I’d shared a story and a call to arms with specific recommendations of what to do now. I’ll share it with you all here with the hope that you’ll jump-start your use of Machine Learning today…
I lived in Saudi Arabia for almost three years. Working at DHL was a deeply formative professional experience. My profound love of exceptional customer service, and outrage at awful customer experiences, can be directly sourced to what I learned there.
Saudi Arabia is a country that saw massively fast modernization. In just a few years, the country went from camels to cars. (I only half-jokingly say that Saudis still ride their cars like camels – and it is scary!)
Think about it for a moment.
From camels to cars. No bicycles. No steam engines. None of the other in-betweens other parts of the world systematically went through to get to cars. They were riding camels, then they were riding cars. Consider all the implications.
We stand at just such a moment in time in the business world. You know just how immersed and obsessed I am with Artificial Intelligence and the implications on marketing and analytics. It truly is a camels to cars type moment in my humble opinion (it might even be a camels to rockets moment, but let me be conservative).
Yet, executives will often give me examples of things they are doing, and they feel satisfied that they are with it, they are doing AI. When I probe a bit, it becomes clear very quickly that all they are doing is making the camels they are riding go a little faster.
That all by itself is not a bad thing – they are certainly moving faster. The problem is they are completely missing the opportunity to get in the car (and their competitors are already in cars).
It is important to know the difference between the two – for the sake of job preservation and company survival.
Here are a handful of examples to help you truly deeply internalize the difference between these two critical strategies…
If you are moving from last-click attribution to experimenting with first-click or time-decay, that is trying to make your camel go faster. Using ML-Powered Data-Driven Attribution and connecting it with your AdWords account so that action can be taken automatically based on DDA recommendations, you are riding a car.
(More on this: Digital Attribution's Ladder of Awesomeness)
If you are moving to experimenting with every button and dial you can touch in AdWords so that you can understand how everything works and you can prove increase in conversions while narrowly focusing on a few keywords, you are making your camel go faster. Switching to ML-powered Smart Targeting, Smart Creative and Smart Bidding with company Profit as the success criteria, for every relevant keyword identified automatically by the algorithm, you are riding a car.
Staffing up your call center to wait for calls from potential customers is making your camel go faster. Creating a neural-network that analyzes all publicly available data of companies to identify which ones are going to need to raise debt, and proactively calling them to pitch your company's wonderful debt-financing services is riding a car.
Hand picking sites to show your display ads via a x by x spreadsheet that is lovingly massaged and now has new font and one more column on Viewability, is making your camel go faster. Leveraging Machine Learning to algorithmically figure out where your ad should show by analyzing over 5,000 signals in real time for Every Single Human based on human-level understanding (die cookies die!), is riding a fast car.
(To see a delightful rant on the corrosive outcomes from a Viewability obsession, and what you might be sweeping under the carpet, see TMAI #64 with the story from P&G.)
Asking your Analysts to stop puking data – sorry, I mean automate reporting – and send insights by merging various data sets is making the camel go faster. Asking your Analysts to send you just the Actions and the Business Impact from those Actions is riding a car. Asking them to shift to using ML-powered products like Analytics Intelligence in GA to identify the unknown unknowns, and connecting that to automated actions, is riding a rocket.
If you are explicitly programming your chatbot with 100 different use cases and fixed paths to follow for each use case to improve customer service, that is making the camel go faster. If you take the datasets in your company around your products, problems, solutions, past successful services, your competitors products, details around your users, etc. etc. and feed it to a deep learning algorithm that can learn without explicit programming how to solve your customer's service issues, you are riding a car.
I, literally, have 25 more examples… But, you catch my drift.
I do not for one moment believe that this will be easy, or that you'll get a welcome reception when you present the answer. But, one of two extremely positive outcomes will happen:
1. You'll get permission from your management team to stop wasting time with getting the camel to go faster, and they'll empower you to do something truly worth doing for your company. Or…
2. You'll realize that this company is going to suck the life out of your career, and you'll quietly look for a new place to work where your life will be filled with meaning and material impact.
Win-Win.
Hence, be brutally honest. Audit your current cluster of priorities against the bleeding edge of possible. Then answer this question: Are you trying to make your camel go faster, or jumping on to a car?
While Machine Learning has not solved world hunger yet, and AGI is still years away, there are business-altering solutions in the market today waiting for you to use them to create a sustainable competitive advantage.
Ok, ok, ok, but what about the now? (Personal)
If this post has not caused you to freak out a tiny bit about your professional path, then I have failed completely. After all, how can the huge amount of change described above be happening and your job/career not be profoundly impacted?
You and I have a small handful of years when we can create a personal pivot through an active investment of our time, energy and re-thinking. If we miss this small window of opportunity, I feel that the choice will be made for us.
This blog is read by a diverse set of people in a diverse set of roles. It would be difficult to be personal in advice/possibilities for each individual.
Instead, here’s a slide I use to share a collection of distinct thought during my speaking engagements on this topic…
In orange is a summary of what “Machines” and humans will be optimally suited for in the near-future. (Note the for now.) Frequent high-volume tasks vs. tackling novel situations.
In green, I’m quoting Carlos Espinal. I loved how simply and beautifully he framed what I imagine when I say tackle novel situations.
Over the last 24 months, I’ve made a whole collection of conscious choices to move my professional competencies to the right of the blue line. That should give me a decade-plus, maybe more if Ray is right about Cloud Accessible Intelligence. Beyond that, everything’s uncertain. :)
Summary.
I hope you noticed I ended the above paragraph with a smiley. I’m inspired by the innovation happening all around us, and how far and wide it is being applied. I am genuinely excited about the opportunities in front of us, and the problems we are going to solve for us as individuals, for our businesses, for our fellow humans and for this precious planet.
In my areas of competence, marketing, analytics, service and sales, I can say with some experience that change is already here, and much bigger change is in front of us. (I share with Mitch above how long I think Analysts, as they are today, will be around.) I hope I’ve convinced you to take advantage of it for your personal and professional glory.
(All this also has a huge implication on our children. If you have kids, or play an influencing role in the life of a child, I’d shared my thoughts here: Artificial Intelligence | Future | Kids)
The times they are a changin'.
Carpe diem!
As always, it is your turn now.
Were Collective Learning and Complete Day One Knowledge concepts you’d already considered in your analysis of AI? Are there other concepts you’ve identified? Do you think we are doomed? Is your company taking advantage of Deep Neural Networks for marketing or analytics or to draw new value from your core back-office platforms? What steps have you taken in the last year to change the trajectory of your career?
Please share your insights, action-plans, critique, and outlandish predictions for the future of humanity, :), via comments below.
Thank you.
The Artificial Intelligence Opportunity: A Camel to Cars Moment is a post from: Occam's Razor by Avinash Kaushik
from SEO Tips https://www.kaushik.net/avinash/artificial-intelligence-opportunity-camel-to-cars-moment/
0 notes
crystalsandbubbletea · 2 years ago
Text
Started working on the design for my A.I OC Mint-
Don't know how long it will take, but I do know I won't be done with it today *Sob*
0 notes