#now we're talking about the difference between mass and weight
krazycat666 · 2 years
Text
The rest of my math class : * learning trigonometry*
Me who has stopped absorbing any kind of math past chapter one: *trying to figure out if my teacher actually writes her S's upside down or if the ungodly amount of caffeine I consume has actually gotten to me*
0 notes
casspurrjoybell-31 · 10 months
Text
The Consort - Chapter 13 - Part 1
Tumblr media
*Warning Adult Content*
Finn
As the weeks pass, the world around us transforms into a macabre whirlwind of chaos, fear and war.
Up until last Thursday, Leo would turn on the news every day so we knew what was happening.
But now the TV announcements have stopped.
Mass communication has stopped.
The last thing we heard from our leader was that all contact between vampires and their consorts was hereby banned.
Above all else, his reinforcing words never wavered, stay hidden and keep safe.
We are at war.
"I'm heading to the store," Fiona announces, shoving away from the kitchen table.
Leo hunches beside the water heater, tinkering with the levers at the top.
A tool belt hangs lazily on his narrow hips.
He doesn't bother glancing up at her before saying no.
"We're running out of food," she argues.
"We'll make do."
"We have been 'making do,' Leo. It's not enough."
Leo sighs and stands, wiping off his hands on the length of his frayed blue jeans.
If it was just the three of us here, we'd have enough food to last another month.
Over the past week, however, seven additional people have shown up at Leo's doorstep... all of them loyal followers to his radical idea of a revolution.
I recognize some of them from class but the rest are all strangers that I no longer have the strength to make small-talk with.
"It's too dangerous," Tony chimes in.
"The last broadcast told us that the vampires are being ruthless. They're dressing up like humans, blending in until they get a chance to advance."
"I'd rather they just attack us," Midge comments from the couch.
She flips a page of her magazine and stares off into the distance.
"Just get it over with, you know? At least then we wouldn't have to be afraid all the time."
All of us glance around at one another.
For the first few days she was here, Midge reminded me of myself.
She's quiet and pensive, not usually one to ruffle feathers.
I even tried approaching her one night after dinner but five minutes into the conversation, I quickly realized we are more different than night and day.
Where my personality is filled with understanding and the netting of an open mind, hers is closed off and wired shut around one thought... kill all the vampires.
"Well while all of you sit around and pilfer through your magazines or debate about your deep-rooted fear of the undead, I'm going to check the store for food. Anyone want to come with me?"
I immediately jump to my feet.
"I'm coming with you."
"No, you're not," Leo says, shutting me down.
Normally Fiona agrees with him and writes me off.
My spirits sink, expecting this time to be no different.
But to my surprise, my old friend meets my gaze from across the room, her eyes dancing over me thoughtfully before smiling softly.
The sight makes me ache.
I haven't seen Fiona smile since the day she found out Kelly changed.
Truthfully, I haven't even really talked to her since that day, either.
Seeing her smile, just briefly, reminds me of all the warm memories I shared with her and Kelly.
So much has changed since then and yet in one, slight smile, hope manages to swell in my chest.
"He can come with me," Fiona offers quietly.
Leo walks over to her, his eyes squinting with frustration.
"No, he can't. It's too dangerous."
Fiona rolls her shoulders, unfazed by the foot and a half height difference between the two of them.
"It is dangerous, Leo. You're right. But Finn stays here of his own accord. Our freedom is running thin as it is. If he wants to come with me, that's his choice. Not yours."
Midge gasps from the couch.
There aren't many people who stand up to Leo.
We're staying at his house, after all and he could kick us out at any time.
Not only that but many of us still view him as our Professor, a role where his opinion holds more weight than ours.
Leo purses his lips, staring down at Fiona with a look akin to betrayal.
Then he turns to me and his eyes relax.
"You really want to go?" he asks me.
I nod.
His nostrils flare and he looks down at his tool belt.
This will be the first time I've left his side since the day he brought me here after what happened at the University.
It's not that I'm ungrateful for all that Leo's done for us because I appreciate it more than he realizes.
Really, I do.
I just can't keep hiding in this house without ever getting a chance to really breathe.
"Fine," he grits out.
"Mark, you're going with them."
Mark, an old family friend of Leo's, agrees without protest.
He gets to his feet and starts heading towards the garage door.
Fiona throws me a triumphant wink, one that thankfully goes unnoticed by Leo.
She turns towards the car but Leo catches her by the elbow and yanks her back.
"You keep him safe," he warns.
His tone is quiet but terrifying.
"And if you can't? Don't bother coming back."
Fiona stumbles back a step.
Neither of us has seen Leo act like this before.
Kelly used to tell me that in tough times everyone's ugliest colors start to show and that the strongest people are the ones who find the courage to overpower it.
"Let's go, Finn," Fiona says.
Her shoulders become rigid as she makes her way to the car.
I follow behind her but hesitate as I pass Leo.
Will he have any warnings for me too?
But Leo doesn't say anything.
Instead his arms wrap around me, pulling me against him so tightly that I can smell the sandalwood soap he used this morning in the shower.
When he releases me, I notice his cheeks are red.
"Be safe," he murmurs.
"I will."
Leo steps aside to let me pass and minutes later, I am in the back seat of Mark's SUV while he pulls out of Leo's driveway.
Once we're out of the subdivision, he cracks the window open just a hair.
The cool air from outside rushes in and I breathe it in longingly.
There isn't a single car on the road but ours.
It's unsettling, the type of thing you'd expect to see in an apocalyptic movie.
I try to shut down the fear as best I can.
My eyes lap up the view of the outdoors, appreciating every color and house that we pass.
"I'm thinking we stop at that little grocery store on Maple," Fiona says from the passenger seat.
Mark's dark eyes dance along the road ahead of us.
"Fine. Just tell me where to go."
Fiona navigates the three of us into town, avoiding the major roads when she can to play it safe.
Mark never questions her directions, but every once in a while I notice his knuckles tightening over the wheel.
A single car passes us just before we reach the grocery store.
There are two women in the car, both of them looking about as terrified as I feel.
Even though I promised myself I wouldn't be afraid, I'm suddenly second-guessing my willingness to come on this little grocery trip.
"We're here," Fiona announces.
Mark pulls into the deserted parking lot.
The grocery store is dark and the front glass doors have been shattered.
Apparently we aren't the first people to come up with this idea.
Fiona grabs a handful of plastic bags and disperses them to both of us.
"We're looking for canned goods, especially," she says.
"But really, anything that is edible and has some sort of nutritional value will be good."
Mark takes the bags from her hands and hops out of the car.
Fiona watches him, her eyes brightening with every step he takes.
I expect her to get out of the car but instead she sits back in her seat and glances at me from the rear view mirror.
"Finn," she says quietly.
"I'm going to ask you this once and only once."
Mark gets closer to the door and turns to make sure we're following.
When he sees we're still in the car, he frowns with confusion.
"When you kissed your vampire, did he kiss you back?"
1 note · View note
Note
lmao I used to have a binge eating disorder and ate like 7k calories a day. Crazily enough when I started eating a normal amount of food for my gender/height/activity level I lost weight and my sleep apnea went away, I can now walk up the stairs in my house without excruciating pain, and I have 500x the energy that I used to and my brain fog is 99% gone. But I'm sure being 190lbs overweight and eating until I felt like I was going to vomit was totally healthy and my pain was just a manifestation of my internalized fatphobia or w/e lmao
People must have somehow magically evolved to be morbidly obese in only a few decades, it can't possibly have anything to do with insane portion sizes, mass production and availability of unhealthy food, a food pyramid sponsored by companies wanting to sell you their trash and food now being full of sugar, corn syrup, artificial colors and other garbage. We're also so magical that being fat is totally fine for us unlike the thousands of other animal species in existence.
Keto and intermittent fasting literally saved my life. I was able to go down from 10 DIFFERENT MEDICATIONS to only needing 4 of them. I've literally talked to hundreds of people with the exact same story.
“I traded my BAD eating disorder for a GOOD eating disorder” is not the flex you think it is anon.
By your own admission your issue was having an eating disorder, not being fat. If you think every fat person also has an eating disorder, you are projecting your own trauma and insecurities onto other people, because you apparently think your experiences are universal. Most fat people do not eat until they puke, actually.
The reason obesity is dangerous for a lot of animals is because, get this, animal physiology and adaptations vary greatly between species. Do you think you have the same needs as a flea? Bears, seals, whales, hippos, boars, and several other species have to carry a lot of fat to survive. Humans are bipedal and have fat stores that build up away from organs and joints. This keeps a lot of the strain off the back and prevents many of the issues you will see in quadrupeds and birds from emerging.
Ease and rate of storing fat in humans is largely due to genetics. People who were born to a starving mother naturally hold fat more easily because this is an adaptation to prevent starvation in famines. Increased fat stores are also hereditary, so people whose ancestors were starving will carry weight more often than those whose ancestors had easy access to food for several generations. Because this was something humans adapted to over the course of millennia as nomadic groups who had to deal with inconsistent food availability due to different climates, flooding cycles, droughts, etc, humans did in fact evolve to have a fluid metabolism and hold stores of fat just fine. If you think fat people are a new invention, I have insane news for you about noble and royal families in Europe for hundreds of years. Being fat was a status symbol to show off how much money you had and that you had no need to toil in fields. This is documented very well.
Keto and [starving yourself] are not healthy. If your entire argument is that being some arbitrary level of “overweight” is unhealthy, you should consider not promoting a diet that causes you to have calcium absorption complications, chronic arterial diseases, kidney stones, and other serious issues long term. Anorexia is not a health trick either, once again, that is an eating disorder.
184 notes · View notes
thegeekcloud · 2 years
Text
Welcome to another one of my lectures. Today we're gonna talk about
Galaxy Collisions
The dance of the universe
Featuring L.I.S.A.
Tumblr media
As some of you might know (or might not know) our galaxy is on a collision course with the Andromeda galaxy. Of course this collision is gonna happen in about 4 billion years and we're probably not gonna be alive as a species, let alone as individuals. But even if we were, the sun would not collide with anything, as shown by computational simulations and by observing other collisions happening right now far far away in our universe. Like, a chance of 1 in 100,000,000,000. You have a way better chance of winning the lottery when every person on the planet has bought a ticket.
So
What happens?
Let's imagine 2 galaxies. Each one is caught in the gravitational pull of the other, much like everything else in the world. As they draw closer, that pull becomes stronger.
The first thing to part from each galaxy is the interstellar gas and dust. This acts like a fluid, particles moving closely together, and is very light, so it is easily separated from the main disk of the galaxy. This phenomenon does not require the galaxies to collide, but merely to pass close enough. It results in the creation of "tails" of gas and dust, coming out of the galaxy and stretching out to where the other galaxy passed by.
Here we have an example. This pair of interacting galaxies is called "The Mice" for obvious reasons
Tumblr media
Pretty, huh?
Now. As i said, the two galaxies can collide. Depending on the difference in mass between them, we have two cases:
Galaxies significantly smaller are absorbed by the larger one. This is called "galactic cannibalism" (💀). We're currently doing this to the Magellanic Clouds (if you live in the southern hemisphere you have probably seen them in the sky). This is also probably what Andromeda is gonna do to us (Andromeda is visible in the northern hemisphere and is about the only galaxy you can really see with the naked eye)
If they're about the same size they crash. Boom:
Tumblr media
In any case, interstellar gas and dust are exchanged, the two galaxies become one, tidal forces compress matter near the center of the galaxy and star formation is triggered. There is a whole category of galaxies called "Starburst galaxies" which are very bright in infrared light (they appear normal in the sky through your optical telescope). We predict that these galaxies are the result of such collisions.
Explosive star formation however means that the "fuel" of the galaxy is spent very quickly, and so the galaxy "dies" (meaning no new stars are formed) pretty quickly (meaning within a few hundred million years).
Tumblr media
MOREOVER
my favourite part, cause that's what I'm currently specializing in for my studies
After such a collision we end up with two galactic centers (meaning their supermassive black holes - enter song by MUSE -) in a common disk. These two INCREDIBLY HEAVY objects orbit each other and affect the orbits of nearby stars. Those stars are so light compared to the black holes that they actually get SHOT OUT of the galaxy. If the combined mass of the ejected stars is comparable to the mass of the black holes, then the two begin to lose angular momentum and end up getting closer. After getting close enough they start producing gravitational waves, causing them to lose even more energy and bringing them closer, until in the end they become one.
Tumblr media
Now,
We can't yet detect those. The frequency they produce is too low for our ground detectors and so it is buried under noise and the limitations of the instrument itself.
BUT
in the 2030s a new detector will launch. And i mean that quite literally, as they will launch it into SPACE. The Laser Interferometer Space Antenna (LISA for short. Nice lady) will be comprised of 3 ships positioned at the corners of an equilateral triangle and will follow a specific path, trailing the earth around the sun but far enough from the planet. The three ships will be about 2.5 million km apart. A laser beam connects them all.
An interferometer uses a phenomenon of light called interference. Depending on the waves' relative phase, we either see light or darkness on our screens. This is a ground detector:
Tumblr media
Ground detectors have "arms" a few kilometres long and detect frequencies from about 10Hz to 1000Hz or smth like that, i don't actually remember the exact numbers. LISA tho, will have much bigger arms (millions of kilometres long) covering more space, interacting with a wave more fully and more easily. So, LISA aims to detect waves between 0.1 mHz and 1Hz. It will also be away from other earthly noise, like earthquakes.
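To get a feel for why the bands are so different: the heavier the binary, the slower its final orbits and the lower the gravitational-wave frequency it emits. Here's a rough order-of-magnitude sketch in Python using the standard last-stable-orbit estimate - nothing fancy like a real waveform model, just to show the scales:

```python
import math

# Order-of-magnitude only: GW frequency of the dominant mode when a binary
# of total mass M reaches its last stable orbit, f ~ c^3 / (6^1.5 * pi * G * M)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def isco_gw_frequency_hz(total_mass_in_suns):
    M = total_mass_in_suns * M_SUN
    return c**3 / (6**1.5 * math.pi * G * M)

print(isco_gw_frequency_hz(60))    # stellar-mass pair: ~70 Hz -> ground detectors
print(isco_gw_frequency_hz(2e7))   # supermassive pair: ~0.0002 Hz (0.2 mHz) -> LISA band
```

That's why ground detectors hear stellar-mass mergers, while the supermassive pairs left over from galaxy collisions need something like LISA.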
One time they almost mistook a signal for a truck passing by and causing the mirrors reflecting the light back to tremble. Yeah. Don't worry, they noticed.
But the future is looking bright. Cause galaxy collisions are a window to the past (since the rate of star formation is similar to the one we had in the first steps of the universe) and to the future (as i said, it'll happen to us too)
More science
24 notes · View notes
transcharliekelly · 3 years
Text
the spreadsheet i keep on mac's wardrobe has been fully updated for s15 ! here are some thoughts about his costume this season bc. idk when you do something like this you start thinking about it a lot
- SO many button downs....
Tumblr media
out of 14 shirts this season (vaguely on the higher side but still within average) mac wears eleven button-downs- more or less 78% of his shirts. that's the highest number since their introduction in s7, which is their second most popular season with eight of sixteen/50%. i DO have thoughts about this but they're related to my next point so ill get to them in a second
- another point about these button-downs is that there's a lot of colour. the button-downs have never been muted, per se, but they're particularly bright this season, especially using a lot more pink/purple tones & just generally more vibrant shades.
now. my thoughts.
look back a few lines to what I said about s7- the button-downs are introduced in 07.01 (the a-plot of which is about frank loving the woman he loves no matter what """greater society""" thinks, even if the relationship is unconventional, just by the way) when mac and dennis go pick some up after mac's gotten fat between seasons. mac continues to wear them regularly; dennis - despite the desire he expresses during the episode to break free from the rigid, emotionally + physically straining/painful labour he puts himself through to adhere to what he believes is expected of him - does not.
in season eight, mac has lost the weight he had gained for season seven. he clearly expresses, primarily in 08.05 the gang gets analyzed though also in throwaway moments throughout the season, that he misses his weight (mass, as he calls it). he's lost it because of (likely dangerous) diet pills that he's tricked into taking by dennis. mac looks back on his fatness as a time of freedom and pride for him, clearly viewing the weight loss as (to a degree) having made him a shell of his former self.
two of mac's ten shirts in season eight are button-downs, putting the rate there at exactly 20%, down 30 points. from s8-s14, mac wears five button-downs out of ninety-two shirts, for a rate of roughly 5%.
I think that if we're talking about mac's self-expression there's a pretty clear line in the sand to be drawn- 08.01 through 12.06, and 12.07 through 14.10. pre-coming out, and afterward. pre-coming out has a rate of three of sixty-six (about 4.5%), and afterward has two of twenty-six (about 7.7%). keep in mind that these numbers should be taken with a grain of salt as they only account for shirts worn for the first time, but a) the number of rewears starts to go down a lot in the later seasons so they're not far off, and b) I believe that they do still accurately represent that the quantity of button-downs is low.
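(if you want to double-check the arithmetic, here's a quick python sketch using the counts above - the counts come from my spreadsheet, the code is literally just division:)

```python
def rate(button_downs, total_shirts):
    # share of new shirts that are button-downs, as a percentage
    return round(100 * button_downs / total_shirts, 1)

print(rate(11, 14))  # s15 -> ~78.6%
print(rate(8, 16))   # s7  -> 50.0%
print(rate(5, 92))   # s8-s14 overall -> ~5.4%
print(rate(3, 66))   # pre-coming out -> ~4.5%
print(rate(2, 26))   # post-coming out -> ~7.7%
```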
so where's mac at during s8-14?
well, pre-coming out it's pretty obvious what the issue is. he's pre-coming out. coming out is something that isn't inherently necessary for happiness as a queer person, of course, but mac is really really actively repressing who he is pre-hohc, and he's fucking miserable about it.
that middle area of the show, around s7/8/9, is where mac starts to be really actively queer-coded. there's this vague gay aura that's always been present in the show (and still is), but it starts to really focus in with mac around this time, and as a result we get more and more of him being this ball of repression wound so tight that he's about to snap. so there's our answer- despite the freedom he feels in s7 when he lets go of what he's been holding himself to (in a sense), he loses that in s8, and the freedom of the button-downs goes with it.
but from 12.07 onwards, it's a bit of a different beast. mac doesn't know how to be gay.
so much of mac's character is about how he's struggling to fit into his identity. I think there's a bit of a disadvantage here because the handling of mac's character (particularly his identity) in s13/14 is just remarkably horrible, but there is a really interesting thing to be pulled from the wreckage there where mac spent what. forty-odd years ? denying who he is and now doesn't know what to do with himself. so he's free from his denial, yes, but he's not free.
so what's changed with s15 ?
I think that he's finally settling into being gay.
his whole thing in s15 is about finding which part of his identity is most "significant", right? which is its own identity problem, yeah, but he's almost shockingly comfortable with the fact that being gay is part of him. he tosses it around brazenly in conversation, he openly tells a priest (A CATHOLIC PRIEST IN HIS HOME COUNTRY !!) about his various sexual experiences with men & his attraction to the hot priest, he very openly expresses excitement about "a priest like me" when he thinks the priest he's following is gay.
mac hasn't exactly been shy about his sexuality up until this point, but there's always been this edge to it that.... ok like i said i despise what was done with his character in s13/14 and im loath to give rcg kudos for it, but i think there's a totally reasonable reading of that where he's weaponizing his outward expression/discussion of his sexuality so that he doesn't have to think about the unresolved pain that's still there. mfhip is a significant step re: catharsis/reconciling his hurt, but there's still that last stretch before he gets to where he is in s15.
and so that's what the shirts are. he's back to the freedom of s7- not only are the button-downs in abundance, but they're brighter than ever before. he feels like himself again, except this time it's on the inside.
27 notes · View notes
interstellarre · 3 years
Text
Delve In The Depths. Chapter II.
Tumblr media Tumblr media
Word Count. 1.5k
a/n. Just a quick btw, Meno gave Xiao the nickname "Emerald duck" because emerald ducks have greenish teal stripes on their heads and Xiao has teal undertones in his hair.
Trigger Warnings. Mentions of death and violence
Series Masterlist
Tumblr media
Chapter II.
Again and again these waves crash over Xiao's subconscious. Riptides of lost human dreams, the tsunamis of guilt, and the eons of pain build each other up, growing larger as they drown him in endless suffering. Waves of black vapor cloud his person. He clutches his mask.
He can hear their screams now as he writhes on the top floor of Wangshu Inn in agony, barely supporting the weight of his body with his arms leaning on the balcony rails.
"Xiao, Xiao!" he turns his head to see Verr Goldet franticly searching for him.
"There's someone downstairs, the-they, Verr Goldet stutters on her words, waving her arms around unlike her usual composed self.
Xiao doesn't wait for her to finish; he grabs his polearm by reflex, prepared to strike the threat down.
Instead he's met with a person grappling with pain on the floor.
Tumblr media
"Why slime condensate exactly?"
"Hm?" Xiangling gives you a genuinely confused look despite it not exactly being the social norm to add slime liquids to a meal. She was climbing up a sandbearer tree. The striped squirrels on the ground scatter upon her arrival.
"What gave you the idea to add slime into your dishes?" you clarify, trying not to come off as rude. Tossing the wicker basket between your hands as a form of entertainment while your culinary friend ducked her head underneath a branch.
The trees rustle and flocks of crimson finches and golden finches fly off into the sky as Xiangling forages around in the tree branches for bird eggs.
"What gave you the idea that not everything is edible?" she playfully teases, now placing bird eggs by sets of two in the basket she previously gave you in Wanmin Restaurant.
You giggle, covering your mouth with your hands. She motions for you to put the basket down and come over, then she grabs you by the shoulders ("Don't you dare-") and hops down. Unfortunately, you aren't heavy enough to support her body weight when she jumps down with her full force.
"Ugh!" you groan as you both tumble down to the floor. You raise a hand to your head and cover your forehead. "Was that really necessary?" you sigh, already far too used to her antics. She snickers.
As you regain your footing, you ask, "How far along are we exactly? My mother will have an aneurysm if we set foot in Moon City.*" Xiangling had already run off, and with the basket no doubt.
You look to your right and find her by the lake counting hydro slimes behind a crack between a few slabs of stone. You crouch down beside her. Her charcoal hair brushes against your mulberry silk skirt.
"1,2,3,4." Yes! this is definitely enough for my new dish!" she pumps her fist in the air.
You don't remember there being a lake to the far right in the places your mother told you to stick to.
"Let me guess," you strike a thinking pose, you want me to set up a new shop here for your new culinary competition?" you sarcastically muse.
She rolls her eyes. "No, silly, I-," she stops at your amused expression. "Ah- well, go on then."
You reach for the dagger attached to the leather belt on your waist and unsheathe it, ignoring the longbow and arrows strapped to your back and choosing a melee weapon instead.
Standing up from your hiding spot, the group of hydro slimes flocks - well, bounces - towards you.
The air turns frosty and Xiangling's teeth chatter while she rubs her arms in hopes of warming up. "Don't turn me into a chef popsicle before I get the slime condensate [Name]!"
As you kneel down to slam the stiletto dagger into the sand, sharp-edged flower patterns appear on the ground. The slimes teeter back at the sudden chill between them before large icicles spring up, piercing their bodies and turning them into goo.
"Woo!" Xiangling jumps above the rock pile and excitedly cheers. Pumping her arms up. "That's my girl!"
"It was nothing really. What was it you needed next again? Of course after you've collected the slime condensate of course." you stop talking as Xiangling sweeps the slightly frozen slime fluids off the crystals you've created into a glass bottle.
"Well talking about other ingredients, I actually wanted to try something." she mentions with a certain twinkle in her eyes.
"You have my attention." You wave your hand at her to go on.
"You know that cooking competition? The one I had in the Mondstadt with the chef named Brooke?"
"I don't recall you telling me that, can you specify?" racking your brain for memories of Xiangling's rantings about food. You suddenly feel drops of sweat on your back despite not being lukewarm at the very best. It must just be from the excitement from fighting the slimes, you think pushing away your other thoughts on the matter.
"Well anyways, we found this extinct species of boar with the help of the traveler, I believe they're called the honorary knight now?" she taps her chin. "That's besides the point but, anyways, it made me think of the different varieties of possible meat options I could use with different monsters. Can you go with me north of Jueyun Karst with me to find a Stonehide Lawachurl?" She claps her hands together into a begging motion. "Please, Please?"
"Mhm, I'm not sure how fast we can make it there? You didn't hear my question before when I was asking where we were before. I'm planning on packing my bags early when I go home overmorrow." you say counting the possible time it would take you to pack all your belongings. Black spots appear in your vision. You open your mouth to speak, but nothing comes out.
"Hmm, I'd say if we're lucky, a few hours? It's lucky that it's still the early morning huh?" Xiangling turned her attention to you from the mushrooms she was picking underneath the trees.
"[Name]?"
She looks over to see you on your knees, black substance withering out of your body. Sweat drips down your forehead.
She frantically shakes you, but your vision has gone black.
"[Name]!"
Tumblr media
The blood on Bosacius' arm dripped to the ground, creating a thin trailing string only to be diluted by the rain pouring down behind Bosacius and a certain teal-haired adeptus. Bosacius gripped his injured arm with his other hand.
"You need to treat that wound," Xiao said, glaring at his fellow adeptus' wound. He could see the majority of Bosacius bone creeping out of his flesh. A familiar sight.
"Rest assured, I've been in worse state. I just never expect it to hurt as much as it always does," grimaced Bosacius through his smiling expression. The water soaked through his garments and drenched his hair.
"You sound like one of those mortals, trying to fight through their deathly injuries only not to see the next day," replied Xiao looking forward to their destination of Jueyun Karst. He could see the towering peaks getting larger and larger as they move on despite the misty atmosphere.
"We're all too mortal for our liking these days." said Bosacius, his expression unreadable.
The sound of steps softly crushing the blades of grass underneath them and thunder rumbling filled the air while their owners remained silent.
"Have you told Rex Lapis about the constant pain you've been experiencing?" said Xiao, breaking the silence.
Bosacius bit his bottom lip while his working hands, well, what was left of them, tensed up. "No, I didn't see the need to bother him. I'm sure he has other pressing matters to attend to now, especially with the increase in aggression from monsters around Liyue Harbor recently. It's strange," The older man looked up to the sky, while Xiao had a distracted look on his face from thinking about the increased monster attacks. "I have yet to figure out the cause behind it."
"I believe Cloud Retainer and Mountain Shaper are free this evening, I'll ask them for their input on the situation later."
They had arrived at Jueyun Karst, the floating island in the middle of the adepti abode was lit up, symbolizing the availability of Cloud Retainer.
"I'd imagine we don't have the need to place an offering in the middle of the lake huh?" Bosacius winks at Xiao. Xiao looks down at the lake, full of ripple currently from the cloudburst. The empty bowl in the middle overflowed with liquid.
Bosacius gave a forced smile at his correct prediction of their fellow adepti's availability. "Well, I suppose it's best for me to head off and find Indarias to heal my wounds."
"That would be for the best." confirmed Xiao
"Thank you for accompanying me for this trip."
Xiao turned his back and Bosacius was gone.
"Hey! Emerald duck!"
Xiao swore he heard the inner layers of hell again as he pinched the bridge of his nose.
"Oh archons," he cursed under his breath. Menogias tumbled towards him, no grace or posture in her current childlike state.
Tumblr media
*Moon City refers to Mondstadt as Mondstadt translates to Moon City in German.
a/n. In case anyone was wondering, the reader's constellation is "The Maiden" or "Virgo". I'm planning on making a character sheet for the reader soon, so watch out for that!
38 notes · View notes
xpao-bearx · 4 years
Text
《Original post here》
Part 2 HERE
SUMMARY: [Supernatural TWD AU] In which Negan is a kinky incubus, Rick Grimes is your secret guardian angel, and Daryl Dixon is a gruff monster/demon hunter. Three drastically different men who can only agree on one thing: making you theirs.
PAIRINGS: Reader x Negan, Reader x Rick Grimes, Reader x Daryl Dixon (Polyamorous Ships)
RATING: Mature/18+/Romance & Smut. Please be prepared and do NOT report.
NOTE: This is actually my first time ever writing an xReader story series as well as writing on Tumblr (I usually only write on Wattpad). As such, it probs won't be perfect though I would SERIOUSLY appreciate your *respectful* feedback and support!
I understand writing xReader content can get a lil tricky, so please just keep in mind that not everything Y/N says or does would be something that you'd do IRL or even approve of. Also, sometimes I may not help but put a teeny bit of myself in Y/N...
Lastly, I recently got back into the TWD fandom after a looong ass time and I'm taking a while re-watching the whole show. So I apologize in advance if my portrayal of any of the characters is rusty or I may not remember too much of the events from the show, but I promise to do my very best and hope y'all enjoy~!! \(^o^)/
DEDICATED TO: The wonderful @blccdyknuckles and @negans-attagirl 💖
"Heavenly Sins"
Part 1
Tumblr media
The sounds of laughter and easygoing chatter filled your ears as you walked closer to the church, a light breeze blowing through your F/C floral dress and the sun blinding your eyes. It was Sunday, most residents of the small town of Alexandria having gathered for mass.
It was a day like any other; peaceful and happy, children giggling and chasing each other around as their parents socialized outside before church could start.
Your heels clacking rhythmically on the pavement, you were just about to enter the building before a familiar voice called out.
"Y/N!"
Spinning, a huge smile instantly reached your ears as you saw none other than Carl Grimes waving enthusiastically at you as he jumped out of a car. From the driver's seat, his father soon followed as he stepped out.
Rick Grimes--dedicated sheriff of this fine town. His usual uniform forgone, instead replaced with a casual navy coloured suit. His baby blues met your E/C, flashing you a bright smile of his own that rivalled the sun itself.
Carl was running towards you now, and once in front he gave you a big hug.
"Settle down, cowboy! It's as if you haven't seen me in forever." You chuckled, ruffling Carl's hair affectionately.
"That's 'cause it did feel like forever." Carl pouted, eventually letting go as he looked up at you.
Before you could reply, Rick patted Carl's head and greeted you. "Hey, Y/N. How are things?" He asked in that endearing Southern accent of his.
"Just fine." You nodded, grinning before you couldn't help but let your gaze wander around a bit. "No Judith?"
It was then that Rick's smile faltered, but just barely. You nearly didn't catch it. "No. She's with her mom."
Rick was divorced from his ex-wife, Lori, after he discovered her cheating on him with his also now ex-bestfriend Shane Walsh. After the divorce, Shane and Lori quickly moved to the neighbouring community of Woodbury together and agreed on joint custody of the kids.
It really made your blood boil; you've interacted with Lori only a few times before so you didn't really have much of an opinion on her...that is, until, you learned what had happened between her and Rick. You knew it wasn't any of your business, but you cared about Rick a lot and he sure as hell didn't deserve to get cheated on.
"Oh." Was all you could say, quite stupidly. Your cheeks reddened, mentally slapping yourself before clearing your throat. "Will I see her in the daycare tomorrow, though?" You were a daycare teacher and even though you loved all of the kids, Judith was your favourite. She was simply such a sweetheart.
Rick nodded, his smile softening. "You got it."
You couldn't continue the conversation as the bells rang, making you jump out of your skin. Carl, noticing this, laughed which made you playfully roll your eyes before slinging an arm around him as all of you went inside.
♡♡♡
You took your place near the back of the church with Carl and Rick. Once everyone was settled and done singing, the service began and Father Gabriel stood on top of the podium. A few minutes into his sermon, the interruption of a motorcycle revving loudly outside sliced through the air. Gabriel flinched in surprise, and it was obvious he was desperately trying to keep his cool. Finally, when it was silent again, you found yourself biting back a smile knowing all too well who had caused the ruckus.
It seems Rick knew, too, judging from how his jaw clenched and his hands turned into tight fists.
The doors were thrown open, making Gabriel flinch once more and some of the congregation turning in the pews to look. But poor Gabriel quickly fumbled with his Bible, raising his voice just a tad to regain their attention.
There was a low whistle accompanying the approaching footsteps, but the congregation did their damn hardest to ignore the latest visitor.
"Damn... I assumed the church would be a lot more welcoming than this." A husky voice whispered, and you at last couldn't hold back as a smile broke through.
"Negan." You whispered back, turning slightly in your seat to see he has taken the spot behind you. His leather clad arms lackadaisically resting on your chair, the musky scent of his cologne invading your senses oh so wonderfully. "Fancy seeing you here."
"What? Is it really that surprising, darlin'?" He grinned, presenting a row of perfectly straight white teeth. "I go to church."
"Not all the time." You pointed out.
"Ah..." He chuckled softly, hazel eyes twinkling. "That's 'cause Father Creepy McGee over there is just that. Creepy. As. Shit."
You bit the inside of your cheeks, suppressing your laughter. True, Gabriel did have his moments, but he wasn't that bad. That didn't change the fact that Negan knew exactly how to tickle your funny bone, though.
He was new to Alexandria. It was a lovely town, but since it was relatively small, not a lot of people wanted to move here unless it was families looking for a safe environment for their children to grow up in. Which was why it was quite a shock to find out that a single man like Negan chose this destination, and even more so when he took everyone aback with his infamous pottymouth and rather inappropriate charisma.
He had moved just a couple of houses down from yours, and you made it your mission to befriend him. Right from the get-go, he had piqued your interest and curiosity. He was different from everyone else--even possessing an air of mystery about him--and that definitely intrigued you. And also, perhaps you were just too nice and didn't want him to feel like an outcast. Although, that didn't seem like an issue to him at all.
"Want one?" You were brought back to reality when you saw Negan's hand outstretched with a pack of cigarettes.
"Dude, we're in church." You reprimanded, frowning.
Negan didn't say anything, only cocking a brow and still with that same shit-eating grin. You sighed, finally giving in as you swiftly grabbed one and stashed it away in your purse for later.
"Y/N." You turned to the left, Rick's icy gaze piercing you. "Pay attention."
"R-Right. Sorry..." You mumbled sheepishly.
Carl, who was sitting in the middle of you and Rick, had dozed off. Rick nudged him, but the brunette only groaned softly and snuggled into Rick's chest. Defeated, the sheriff sighed and was just about to listen again to Gabriel before Negan cut in.
"Rick!" Negan purposely raised his voice, knowing it would get a rise out of the other man. "Didn't even see ya there. Howdy, cowboy!"
Rick grimaced, and it looked like he was just going to ignore Negan though he knew that if he did that then Negan would just irritate him even further. "Good to see you, Negan." He forced himself to say.
"Only you can say that while giving me such a deadly side eye, Grimes." Negan snickered. "How have you been? How's the wife?"
Rick flushed, his fists in a tight ball again and it looked like his nails would be digging into his skin. You abruptly swung into action, placing a hand on Rick's own.
"Rick..." You said gently. "It's okay. Calm down."
Rick did, his shoulders drooping as if a heavy weight had been lifted. He could barely pay any attention to Gabriel now. Then you suddenly stood up and grabbed Negan's arm.
"We need to talk. Now."
"What, we going for a quickie?" Negan smirked, but that soon faded when he saw your serious expression. He sighed dramatically, reaching his full height as he towered over you before following you out.
At this point, you didn't care if people saw what transpired or would even start gossiping. No one, not even Negan, was allowed to harass Rick. He has helped you through so much shit--more than you'd like to admit--and you at least owed him this much.
Once outside, next to where Negan parked his motorcycle, you exploded. "What the fuck is with you?! You leave Rick alone, or I swear to fucking Christ I will--"
"Woah, woah, woah! Hold your horses, missy!" Negan guffawed, his hands up in mock surrender. "I mean, I like 'em feisty, but goddamn! Watch your fucking language."
"Tch. You're one to talk."
"Did you just scoff at me?" He raised his brows, putting his hands in his pockets as he slowly drew closer to you. A devilish grin tugged at the corners of his mouth, tilting his head slightly. "No one's ever fucking scoffed at me and didn't regret it soon after."
You frowned, letting out a huff as you met his gaze challengingly. "As if you'd do anything to me."
He was silent for several moments before chuckling, leaning back against his motorcycle. "You're right. I have too much of a soft spot for ya." He pulled out a cigarette, lighting it then taking a drag. He drew his head upwards, puffing out the smoke. "Whaddya say we just forgive and forget? I truly am sorry. You can even tell Rick that I am metaphorically down on my goddamn knees begging for forgiveness~"
"I'm not forgiving or forgetting anything until you actually face Rick and apologize yourself." You muttered. And without another word, you spun on your heel and strutted back inside the church with your head held high.
Negan's intent stare lingered where your ass had just been, taking another long drag and letting out a small laugh to himself.
His eyes suddenly glowed a crimson red, a smirk playing on his lips.
Oh, he really did pick a GREAT one.
103 notes · View notes
lemondzest · 3 years
Text
Understanding Obesity (Part 2): Whose responsibility is obesity?
Now that we know obesity is a public health crisis requiring urgent action, we may wonder - what causes it? After all, effective solutions require tackling the root causes of the problem. This part therefore aims to shed light on five of the many contributing factors to obesity. 
Tumblr media
1. Choices
Nothing much to elaborate here; choosing to eat more and move less will result in weight gain. More calories in, fewer calories out - basic law of thermodynamics. Boring. However, many people are quick to go down the reductionist route by placing ALL the blame on the individual's personal choices. If it's just a matter of people needing to make the right choices, if it's really that simple, we would have tackled obesity long ago. Blaming obesity solely on individual choices does not answer WHY we are increasingly eating more and moving less. Take a look at this timeline of adult obesity in the U.S below by the CDC, similarly reported in other countries across the globe. 
Tumblr media
The rate of obesity has tripled worldwide since the late 1970s. If obesity is simply caused by a lack of personal responsibility, what happened in the late 1970s? Did everyone collectively lose their rationality - maybe everyone got together, decided to YOLO and go buffet in life? Definitely possible (cue the entrance of conspiracy theorists), but highly unlikely. Did some form of transcendent power strike the DNA of humans collectively that made us evolve into a bunch of lazier and much more ravenous creatures? Scientists have studied evolutionary changes during this period and concluded that nope, our gene pool has remained constant; any changes in the gene pool would take hundreds of years to produce an obvious effect across a global population anyway. This means that:
the global rise in obesity is not because of any significant genetic changes,
people did not willingly choose to eat more and move less, 
there are other external factors that mainly drive the obesity epidemic.
Consider a class of 10 pupils. When only one pupil gets very low grades in an exam and the other nine got full marks, the one pupil is considered mainly at fault. Perhaps they need to study more and work harder to get a good mark. But when six out of ten got very low grades, is it still the pupils' fault? Would we then tell the children to study more, while everyone else (i.e the teacher, parents, education system) just remains in inertia, or goyang kaki (sits around twiddling their thumbs)? 
Similarly, when 63% of the people in Brunei are living with overweight and obesity -- is it still entirely their fault? 
2. Environment
(Please bear with me, I’m trying my best not to turn this section into a whole thesis).
Tumblr media
The environment is one of the largest contributors of the rise in the obesity epidemic. This is based on rigorous academic evidence and decades of research. Essentially, the environment has generally promoted the increased consumption of unhealthier food through a rapid increase in its:
availability : since the 1970s, the food environment underwent a shift from predominantly fresh produce to a more ultra processed diet. Food is being processed to the point where it looks nothing like what it originally was, stuffed with cheap ingredients such as sugar, salt, trans-fats and flavourings to enable mass production, to be sold at cheap prices and for easy consumption. These products are called ultra processed food, and examples include soda, sausages, nuggets, sugary cereals, instant noodles, crisps, chocolates and so on. Because of its poor nutritional profile, ultra processed food has consistently been associated with higher risks of obesity, heart disease, type 2 diabetes, cancer, depression, asthma, etc. And we, especially young people, are consuming more of this than ever. 
exposure : we're talking about the aggressive marketing strategies that have been employed especially by the fast food industry and beyond. I remember going back home from the airport after my 14-day COVID quarantine being bombarded by roughly 10 billboard ads, the majority of which were advertising fast food. As I went out and about for the next few months, I realised that we are exposed to food companies constantly fighting for our attention through their advertisements, whether in the form of billboard ads, physical outlets, leaflets, newspaper ads, TV ads, social media ads, social media influencers, event sponsorships - the list just goes on! In fact, 46% of the annual advertising budget in the UK goes to soda, confectionery and snacks, while only 2.5% goes to fruits and vegetables. Imagine if it was the other way around.. One can only dream... The point is, we as humans are constantly being tempted with unhealthier food rather than healthier food, which in turn drives up our purchase and consumption of unhealthier food products. I also particularly like this photo taken in the UK that just showcases the pedestal unhealthier food ads are being placed on, i.e. the same level as public health ads. Oh, the irony! (Good news for Bruneians - a code of conduct on responsible food marketing has been implemented recently to shield our children from these ads! Just what we need, priority on children's health > anything else.)
portion sizes : certain food such as pizza and soft drinks underwent a significant increase in portion sizes from the 1970s to the 2000s. Just a few days ago I went to a fast food outlet and noticed that, as usual, the default drink choice is soda, but the default size is now the large one as compared to the smaller one that I remember seeing 3 years ago before I left the country. I was also informed that some other outlets have been asking customers to upsize their drinks by default. Just how necessary is this? We may think this is not a problem because people supposedly eat according to their physiological needs and can simply stop when they're full, and so they wouldn't need to finish the whole portion. But research leading to the discovery of what is known as the portion size effect (PSE) has suggested otherwise; the more energy-dense food people are served, the more they tend to eat. 
The 21st century environment is also promoting physical inactivity and a sedentary lifestyle compared to past centuries. Opportunities for physical activity especially in high-income countries have declined, possibly due to rapid urbanisation, the rise of 9-5 jobs and more people relying on motorised transportation. Although research has shown that physical activity (PA) among adults done during free time has increased in the past ~30 years, a simultaneous decrease was found in physical activity done while working in the past ~50 years. Young people are also observed to be more physically inactive over the years, though locally... I like to think that our younger people are getting more physical-activity-conscious nowadays given applaudable efforts to widen opportunities for PA such as the launch of Bandarku Ceria and the boom in hiking sites and gyms opening in 2019-2020. But this could just be my skewed perception looking at a small and specific demographic of the population - more formal research needs to be done.
So, we know that the environment is the main factor that drives up the obesity pandemic. But if we are all living in an environment which predisposes us to develop obesity, why don't we ALL have obesity? This tells us that there are other factors that make an individual more likely to act on the environment's impulses - such as their socioeconomic status (income, education) and especially their genes.
3. Income
Research among developed countries such as the UK, Australia, Germany and Singapore has shown that people from lower income levels have a significantly higher risk of obesity. This graph below just shows how stark the inequality is between the most and least deprived areas of the UK. Note also how rapidly the gap is widening over the years!
Tumblr media
Why are poorer families in developed countries more likely to live with obesity? 
Food that is more nutritious is often less affordable than less nutritious food. I particularly love this infographic showing how, in order to meet the general recommendation of a healthy diet in the UK, the poorer families would have to spend 39% of their income on food alone, while this percentage steadily decreases as income increases, to as low as 8% for the richer families. The same pattern is reflected in many other countries, including the USA and Australia.
Tumblr media
This inequality is not just seen within countries, but also across countries. One study across 18 countries identified that in order to meet the recommended guideline of 5 servings of fruits and vegetables per day, families in lower income countries would need to spend 52% of their income on them, those in middle income countries would need to spend 17% while those in higher income countries would need to spend a mere 2% of their income.
The price gap between healthier and unhealthier food can then affect people's purchasing behaviour, where families on lower incomes are forced to prioritise quantity of food over quality. For some of us, we are privileged enough to be able to choose food that is delicious, nutritious, and of different variety each time. But for some others, especially among families from poorer backgrounds battling food insecurity, they can only afford to eat in order to feel full and get through the day. Research has shown how poorer families always have to 1) balance out their choices of food with the utilisation of scarce resources, and 2) make judgments of food prices relative to other food prices. Combining this with the known fact discussed above that unhealthier food is FAR more aggressively marketed (almost 20 times more) than healthier food - we are left with a group of the population who are predisposed to choosing food that is mainly satiating, and less nutritious than the recommended guideline.
In fact, we know that even more factors than those discussed above can contribute to people from poorer families having an unhealthier diet. One of them is that, on top of the price gap in groceries, there is the price gap in fast food. Parents who are busy and don't have much time to cook nutritious and homemade food often resort to fast food to sustain their family. Sure, we have a plethora of fast food options to choose from (and they just keep increasing - don't get me started). But what kind of fast food is both affordable and nutritious? Nasi katok costs $1 while a balanced meal costs $5 (minimum), and this disparity is seen all around the world.
Given all this, we still have the audacity to say that obesity is simply caused by a lack of willpower?
Gimme a break. It is clear that people who are not as financially privileged require additional support in order to maintain a healthy weight. If not through finance, through education (further explained in Cause 4), or even better - both!
Side note: Despite the overwhelming evidence that having low income is associated with higher risk of obesity, there is also emerging evidence showing the possibility of the opposite (reverse causality); living with obesity is ALSO associated with having low income due to stigmatisation and discrimination. So basically... living with low income may cause people to live with obesity, and likewise living with obesity may cause people to live with low income. This syndemic is similar to that of obesity and mental health issues discussed in Part 1.
4. Education
Health is not formally taught in most schools. Health starts at home. Because of this varying education level and awareness about health across the population, each family has very different approaches of ensuring how their family can grow up adopting healthy behaviours.
Generally, the likelihood of having obesity increases with decreasing level of education. This was observed in many countries including Taiwan, Saudi Arabia and Iran. The trend is similarly reported in OECD countries such as Australia, Canada, England and Korea as shown below.
Tumblr media
This may be because more educated families tend to have healthier lifestyles and are more aware of what the causes and consequences are of obesity. If a family is lacking awareness and knowledge on certain aspects of health, such as in nutrition - eg: what the importance of consuming enough fibre is, what exactly constitutes a balanced diet, how to cook nutritious meals under time constraint etc - then their family will be less likely to adopt healthy (protective) behaviours.
Awareness of the causes and consequences of obesity indeed remains low within many communities. In one study, 76% of young people surveyed believe that "obesity has a genetic cause and that there is nothing much one can do to prevent obesity". Almost 30% of them also believe that even when substantial changes were made to one's lifestyle, obesity cannot be prevented. In the UK, around 3 in 4 people didn't know that obesity can cause cancer - one of the leading causes of death worldwide.
Not only are people unaware of the causes and consequences of obesity, many people even show a general lack of understanding of obesity itself. It was found that among 401 Malaysians surveyed, 92% of those with obesity underperceive their weight, thinking that their weight is in the normal range or lower than it actually is. This is particularly concerning, because any intervention efforts to reduce the obesity rate within a community will just bounce off the majority of the target group who think the messages are 'not for them because they don't have obesity' when they actually do.
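For reference, since a lot of that misperception comes from never actually checking the number: here is a minimal sketch of how adult weight status is typically classified, using the standard WHO BMI cutoffs. Keep in mind that BMI is a crude, population-level screening measure rather than a diagnosis, so treat this as an illustration only.

```python
def bmi(weight_kg, height_m):
    # body mass index = weight (kg) / height (m) squared
    return weight_kg / height_m ** 2

def who_adult_category(b):
    # standard WHO adult cutoffs; a rough screening measure, not a diagnosis
    if b < 18.5:
        return "underweight"
    elif b < 25:
        return "healthy weight"
    elif b < 30:
        return "overweight"
    else:
        return "obese"

value = bmi(85, 1.65)
print(round(value, 1), who_adult_category(value))  # 31.2 obese
```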
All in all, if you come from an educated family background - good for you. If you have the opportunity to study more about health, or human/medical sciences - good for you. But what about those who do not have all these privileges?
Side note: There is also evidence showing how having a lower education level is not just associated with a higher level of obesity in a direct manner, but also indirectly, where having a low education level may contribute to households having a lower income and, as discussed above in No.3 -> may result in a stacked effect on obesity. This is called the mediation effect and more explanation can be found here (pg 133).
5. Genetics
Over 200 genes influence our body shape and size. These include genes that affect how frequently we feel hungry, the rate at which we burn calories, our metabolism rate, and many more! Some of these individual genes can increase our likelihood of becoming heavier while some other genes tend to make us lighter depending on whether they are 'switched on or off'. And this mix of 'on and offs' for EACH gene is always going to be different between individuals (polymorphism).
Because of our own 'mixed bag' of ~200 obesity-related genes interacting with each other, some people will find it much harder to resist that bar of Kinder Bueno sitting at the cashier's till, while others won't even bat an eye. Some people naturally feel full after one bowl of rice, while others need three. Some people can store a large amount of fat, while others can store only half that amount before the excess fats (lipids) seep into other tissues such as muscle and potentially cause disease (lipotoxicity).
Genetic differences within the population explain why people respond differently to the obesogenic environment we live in. It is not as simple as our genes determining whether or not we develop obesity - we can't just say "Oh it's in my genes, got it from my parents~" to justify a lack of effort to address it. There is no single gene that makes people develop obesity. Rather, our mixed bag of genes determines our susceptibility to it. People who have many of the weight-gain-promoting genes 'switched on' will be more susceptible to obesity, because their own biology makes it much harder to fight back against the temptations of the obesogenic environment.
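To make the 'mixed bag' idea concrete, here is a minimal sketch in Python - the gene names and effect sizes are invented purely for illustration and are not real genetic data:

# Minimal sketch: susceptibility as the sum of many small gene effects.
# Gene names and effect sizes are invented for illustration only.
gene_effects = {
    "appetite_gene_A": 0.4,    # hungrier more often
    "satiety_gene_B": 0.3,     # needs more food to feel full
    "metabolism_gene_C": -0.2, # burns calories faster (protective)
    "fat_storage_gene_D": 0.5, # stores fat more readily
}

def susceptibility(switched_on_genes):
    # Add up the effects of whichever genes are 'switched on' for this person.
    return sum(gene_effects[g] for g in switched_on_genes)

print(susceptibility({"appetite_gene_A", "fat_storage_gene_D"}))  # ~0.9 -> more susceptible
print(susceptibility({"metabolism_gene_C"}))                      # -0.2 -> less susceptible

Two people exposed to exactly the same environment can therefore end up with very different levels of risk, which is the point of this section.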
This concept is difficult to grasp for people who have had a healthy weight all their lives, privileged not to have the obesity-promoting genes 'switched on', so they tend to blame obesity solely on the individual's personal choice - precisely because their own biology makes it easier for them to resist the temptations of the obesogenic environment.
As Joslin - an American doctor - put it almost a century ago, in a line that pretty much summarises the role of genetics in obesity:
Genetics probably loads the gun, while lifestyle in our obesogenic environment pulls the trigger for the spreading of the obesity epidemic.
Does this mean that people whose genes make them more susceptible to obesity can simply blame their genes for their weight?
No! Not entirely. They can and should apply the same general principles of weight management to counteract the risk of obesity, i.e. eating balanced meals and doing plenty of physical activity (going back to the boring law of thermodynamics: more calories out than in = weight loss). However, it will be especially hard for these people to achieve this because of their obesity-encouraging genes. They have to put in more effort to lose 1 kg than someone with fewer of those genes.
What this means for those with obesity: Your own genes and biology are among the reasons why your BMI is considered high at the moment and why it feels so difficult to lose weight. It is important to understand this so you don't beat yourself up too often! It is not entirely your fault. It will be hard - in fact, harder than it is for many other people - but what matters is that you stay focused on putting in the work to get there!
What this means for those with a healthy weight: It's about time to stop blaming everything on the individual's personal choice when you don't know how difficult they have it or how hard they have been fighting their own biology. Don't act like you know their struggles just to shame and stigmatise them - you don't, and neither do I. Leave the counselling to their close family and personal doctor.
What this means for policymakers: We have a duty to make sure that 1) the environment is as conducive as possible to living a healthy lifestyle, to avoid 'pulling the trigger', and 2) people are aware that genes also play a big role (around 40-70%) in determining someone's weight - it's not entirely down to the individual.
Side note: The genetic explanation above, which acknowledges the role of hundreds of different genes in the development of obesity, applies to the majority of people living with obesity (polygenic obesity). However, a minority of people develop obesity due to mutations in single genes (monogenic / syndromic obesity), which warrants a separate and more technical explanation.
Bottom Line
To summarise the cause of obesity:
As mentioned in Cause 1, how we develop obesity is always down to the individual eating more and moving less. But as explained in Cause 2, 3, 4 & 5, the complex interaction between the environment, the individual's socioeconomic conditions, and their own biology explains why it is so difficult for some people to eat less and move more.
To summarise the cause of the obesity pandemic:
Personal choice explains why one individual may develop obesity, but the environment explains why more and more people across the whole world are developing it. Our socioeconomic conditions and especially our genetics then explain why not ALL people develop obesity in response to that changing environment.
So what should I do with all this information?
That's entirely up to you and how much you took in! But the reason I brought this topic up is that I'm personally sick and tired of hearing people living with obesity blamed for their "poor choices in life", "lack of self-control", for "being gluttonous", "lazy", etc.
As I have hopefully explained, obesity is undoubtedly very complex and a result of so many factors. These five things I mentioned above? There's. So. Much. More.
[Image: diagram of the many interconnected factors that drive obesity]
Click here for a clearer view.
So the next time we want to blame it all on people with obesity - check your privileges. You're rich? You're naturally slim? You're educated? You don't have as many obesity-encouraging genes? Good for you. Perhaps that makes you feel entitled to say that people living with obesity just need to make "better choices".
But understand that you have it easier maintaining your healthy weight, while people with obesity most likely have it harder. The least you could do is be sympathetic and understanding, acknowledge their struggles, and certainly avoid shaming and stigmatising them. Make it easier for them by providing healthier choices and supporting them physically and emotionally in their goal of achieving a healthy weight!
Aren't you just giving an excuse for people to live with obesity?
Disclaimer: My BMI sits comfortably in the healthy range at 23 kg/m^2. I am nowhere close to having obesity, nor do I have any family members, partners or close friends living with obesity. I gain literally NOTHING from making up excuses for people to live with obesity. Quite the contrary: I understand its dire consequences, as outlined in Part 1, and I have even listed personal choice as one of the causes above. It's not about giving excuses, but simply an effort to give voice and justice to those who have been silenced.
I hope I have gotten my point across through this post and the previous one in my Obesity Series! Let's all be better-informed members of society and support each other in achieving our health goals :)
*Note: For simplicity purposes, ‘unhealthier food’ in this post refers generally to food lower in nutritional profile, and food high in fat, sugar and salt (HFSS). In reality, we should understand that food does not exist in a binary manner.
Unlinked References:
Gene Eating by Giles Yeo (Book)
CMO Independent Report: Time to Solve Childhood Obesity by Professor Dame Sally Davies
clarenceomoore · 7 years
Text
Voices in AI – Episode 7: A Conversation with Jared Ficklin
Today's leading minds talk AI with host Byron Reese
In this episode, Byron and Jared talk about rights for machines, empathy, ethics, singularity, designing AI experiences, transparency, and a return to the Victorian era.
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
RSS
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today, our guest is Jared Ficklin. He is a partner and Lead Creative Technologist at argodesign.
In addition, he has a wide range of other interests. He gave a well-received mainstage talk at TED about how to visualize music with fire. He co-created a mass transit system called The Wire. He co-designed and created a skatepark. For a long while, he designed the highly-interactive, famous South by Southwest (SXSW) opening parties which hosted thousands and thousands of people each year.
Welcome to the show, Jared.
Jared Ficklin: Thank you for having me.
I’ve got to start off with my basic, my first and favorite question: What is artificial intelligence?
Well, I think of it in the very mechanical way of, that it is a machine intelligence that has reached a point of sentience. But I think it is just a broad umbrella where we kind of apply it to any case where the computerization is attempting to solve problems with human-like thoughts or strategies.
Well, let’s split that into two halves, because there was an aspirational half of sentience, and then there was a practical half. Let’s start with the practical half. When it tries to solve problems that a person can solve, would you include a sprinkler that comes on when your lawn is dry as being an artificial intelligence? Because I don’t have to keep track of when my lawn is dry; the sprinkler system does.
First of all, this is my favorite half. I like this half of the procedural side more than the sentience side, although it’s fun to think about.
But, when you think of this sprinkler that you just talked about, there’s a couple of ways to arrive at this. One, it can be very procedural and not intelligent at all. I can have a sensor. The sensor can throw off voltage when it sees soil is of a certain dryness. That can connect on an electrical circuit which throws off a solenoid, and water begins spraying everywhere.
Now, you have the magic, and a person who doesn’t know that’s going on might look at that and say, “Holy cow! It’s intelligent! It has watered the lawn.” But it’s not. That is not machine intelligence and that is not AI. It’s just a simple procedural game.
There would be another way of doing that, and that’s to use a whole bunch of computations to study, and bring in a lot of factors of the weather coming in, the same sensor telling what soil dryness is… Run it through a whole lot of algorithms and make a decision based on the probability and the threshold of whether to turn on that sprinkler or not, and that would be a form of machine learning.
Now, if you look at the two, they seem the same on the face but they’re very different—not just in how they happen, but in the outcome. One of them is going to turn on the sprinkler, even though there are seven inches of rain coming tomorrow, and the other is not going to turn on the sprinkler because it’s aware that seven inches of rain are coming tomorrow. That little added extra judgment, or intelligence as we call it, is the key difference. That’s what makes all the difference in this, multiplied by a million times. To me.
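Read literally, the difference being described might look something like this in Python (a minimal sketch - the thresholds and the crude rain-to-moisture conversion are made up for illustration, not taken from the episode):

DRYNESS_THRESHOLD = 0.3  # 0 = soaked, 1 = bone dry (illustrative scale)

def procedural_sprinkler(soil_dryness):
    # Purely procedural: dry soil -> water, no judgment involved.
    return soil_dryness > DRYNESS_THRESHOLD

def weather_aware_sprinkler(soil_dryness, forecast_rain_inches):
    # Stand-in for the "intelligent" version: discount the dryness by the rain
    # expected tomorrow and only water when the combined evidence says to.
    expected_moisture = forecast_rain_inches * 0.1  # crude conversion, illustrative only
    return (soil_dryness - expected_moisture) > DRYNESS_THRESHOLD

print(procedural_sprinkler(0.5))          # True: waters even with a storm coming
print(weather_aware_sprinkler(0.5, 7.0))  # False: holds off, seven inches of rain forecast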
Just to be clear, you specifically invoked machine learning. Are you saying there is no AI without machine learning?
No, I’m not saying that. That was just the strategy that applied in this situation.
Is the difference between those two extremes, in your mind, evolutionary? It’s not a black-and-white difference?
Yeah, there’s going to be scales and gradients. There’s also different strategies and algorithms that breed this outcome. One had a certain presumption of foresight, and a certain algorithmic processing. In some ways, it’s much smarter than a person.
There’s a great analogy. Matthew Santone, who is a co-worker here, is the first one who introduced me to the analogy. And I don’t know who came up with it, but it’s the ten thousand squirrels analogy around artificial intelligence in its state today.
On the face of it, you would think humans are much smarter than squirrels, and in many ways we are, but a squirrel has this particular capability of hiding ten thousand nuts in a field and being able to find them the next spring. When it comes to hiding nuts, a squirrel is much more intelligent than we are.
That’s another one of the key attributes of this procedural side of artificial intelligence, I think. It’s that these algorithms and intelligence become so focused on one specific task that they actually become much more capable and greater at it than humans.
Where do you think we are? Needless to say, the enthusiasm around AI is at a fevered pitch. What do you think brought that about, and do you think it’s warranted?
Well, it’s science fiction, I think, that has brought it about—everything from The Matrix in film, to books by John Varley or even Isaac Asimov—have given us a fascination about machines and artificial intelligence and what they can produce.
Then, right now, the business world is just talking all about it, because, I think, we’re at the level of the ten thousand squirrels. They can see a lot of value of putting those squirrels together to monitor something—you know, find those nuts in a way better than a human can. When you combine the two, it’s just on everyone’s lips and everywhere.
It doesn’t hurt that some of the bigwigs of thinkers of our time are out there talking about how dangerous it could possibly be, and that captures everyone’s attention as well.
What do you think of that? Why do you think that there are people who think we’re going to have an artificial general intelligence in a few years—five years is the earliest—and it’s something we should be concerned about? And then, there are people who say it’s not going to come for hundreds of years, and it’s not something we should be worried about. What is different in how they’re viewing the world?
It might be a reflection of the world that they live in, as well. For me, I really see two scales of danger. One is that we, as humans, put a lot of faith in machines—particularly our generation, Generation X. When I go to drive across town—and I’ve lived in my hometown of Austin, Texas, for seventeen years—I know a really good short route right through downtown. Every time I try to take it, my significant other will tell me that Google says there is a better route. We trust technology more than other humans.
The problem comes in, it’s like, if you have these ten thousand squirrels and they’re a toddler-level AI, you could turn over control far too early and end up in a very bad place. A mistake could happen, it could shut down the grid, a lot of people could die. That’s a form of danger I think some people are talking about, and they’re talking about it on the five-year scale because that’s where it’s at. You could get into that situation not because it’s more intelligent than us, but just because you put more reliance on something that isn’t actually very intelligent. That’s one possible danger that we’re facing.
The hundred-year danger is that I think people are afraid of the Hollywood scenario, the Skynet scenario, which I’m less afraid of—although I have one particular view on that that does give me some concern. I do get up every morning and tell Alexa, “Alexa, tell the robots I am on your side,” because I know how they’re programming the AI. If I write that line of code ten-thousand times, maybe I can get in the algorithm.
There are more than a few efforts underway, by one count, twenty-two different governments who are trying to figure out how to weaponize artificial intelligence. Does that concern you or is that just how things are?
Well, I’m always concerned about weaponization, but I’m not completely concerned. I think militaries think in a different way than creative technologists. They can do great damage, but they think in terms of failsafe, and they always have. They’re going to start from the position of failsafe. I’m more worried about marketing and a lot of areas where they work quick and dirty, and they don’t think about failsafe.
If you’re going to build a little bit of a neural net or a machine learning system, it’s open-sourced, it’s up on the cloud, a lot of people are using it, and you’re using it to give recommendations. And then at the end of the recommendations you’re not satisfied with it, and you say, “I know that you have recommended this mortgage from Bank A but the client is Bank B, so how can we get you to recommend Bank B?”
Essentially, teaching the machines that it’s okay to lie to humans. That is not operating from a position of failsafe. So it might just be marketing—clever terms like ‘programmatic’ and what not—that generates Skynet, and not necessarily the military industrial complex, which really believes in kill switches.
More kind of real world day-to-day worries about the technology—and we’re going to get to all the opportunities and all the benefits and all of that in just a moment.
Start with the fear.
Well, I think the fear tells us more, in a way, about the technology because it’s fun to think about. As far back as storytelling, we’ve talked about technologies that have run amok. And it seems to be this thing, that whenever we build something, we worry about it. Like, they put electricity in the White House, but then the president would never touch it and wouldn’t let his family touch it. When they put radios in cars, they said, “Oh, distracted driving, people are going to crash all the time.”
Airbags are going to kill you.
Right. Frankenstein, right? The word ‘robot’ comes to us from a Czech play.
You just hit a part of the psyche that I think people are letting in, too, when you said Frankenstein. It’s personification that often is the dangerous thing.
Think of people who dance with poisonous snakes. Sometimes it’s done as a dare, but sometimes it’s done because there’s a personification put on the animal that gives it greater importance than what it actually is, and that can be quite dangerous. I think we risk that here, too, just putting too much personification, human tendencies, on the technology.
For instance, there is actually a group of people who are advocating rights for industrial robots today, as if they are human, when they are not. They are very much just industrial machines. That kind of psyche is what I think some people are trying to inoculate now, because it walks us down this path where you’re thinking you can’t turn that thing off, because it’s given this personification of sentience before it has actually achieved it.
It’s been given this notion of rights before it actually has them. And the judgment of, even if it’s dangerous and we should hit the kill switch, there are going to be people reacting against that, saying, “You can’t kill this thing off”—even though it is quite dangerous to the species. That, to me, is a very interesting thing because a lot of people are looking at it as if, if it becomes intelligent, it will be a human intelligence.
I think that’s what a lot of the big thinkers think about, too. They think this thing is not going to be human intelligence, at which point you have to make a species-level judgment on its rights, and its ability to be sentient and put out there.
Let’s go back to the beginning of that conversation with ELIZA and Weizenbaum.
This man in the ‘60s, Weizenbaum, made this program called ELIZA, and it was a really simple chatbot. You would say, “I am having a bad day.” And it says, “Why are you having a bad day?” And then, you would say, “I’m having a bad day because of my mom.” “What did your mom do to make you have a bad day?” That’s it, very simple.
But Weizenbaum saw that people were pouring their heart out to it, even knowing that it was a machine. And he turned on it. He was like, “This is terrible.” He said, “When a machine says, ‘I understand,’ the machine is telling a lie. There is no ‘I’ there. There is nothing that understands anything.”
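The whole trick really can fit in a few lines of pattern matching. Here is a minimal ELIZA-style sketch in Python - the rules are made up to mirror the exchange above, and Weizenbaum's real script was far larger and also swapped pronouns such as "my" to "your":

import re

# Made-up rules that mirror the exchange above; ELIZA's actual script was far richer.
RULES = [
    (r"i am having a bad day because of (.+)", "What did {0} do to make you have a bad day?"),
    (r"i am having a bad day", "Why are you having a bad day?"),
    (r"i am (.+)", "Why are you {0}?"),
]

def eliza(utterance):
    text = utterance.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback keeps the person talking

print(eliza("I am having a bad day."))
print(eliza("I am having a bad day because of my mom."))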
Is your comment about personification a neutral one? To say, “I am observing this,” or are you saying personification is a bad thing or a good thing? If you notice, Alexa got a name, Siri got a name, Cortana got a name, but Google Assistant didn’t get a name.
Start there—what are your thoughts on personification in terms of good, bad, or we don’t know yet?
In the way I was just talking about it, personification, I do think is a bad thing, and I do see it happening. In the way you just talked about it, it becomes a design tool. And as a design tool, it’s very useful. I name all my cars, but that’s the end of the personification.
You were using it to say they actually impute human characteristics on these beyond just the name?
Yes, when someone is fighting for the human rights or the labor rights of an industrial machine, they have put a deep personification on that machine. They’re feeling empathy for it, and they’re feeling it should be defended. They’re seeing it as another human or as an animal; they’re not seeing it as an industrial machine. That’s weird, and dangerous.
But, you as a designer, think, “Oh, no, it’s good to name Alexa, but I don’t want people to start thinking of Alexa as a thing.”
Yeah.
But you’re a part of that then, right?
Yeah, we are.
You’re naming it and putting a face on it.
You’ve circled right back to what I said—Skynet is going to come from product design and marketing.
From you.
Well, I did not name Alexa.
And just for the record, we’re not impugning Alexa here.
Yeah, we are not. I love Alexa. I have it, and like I said, I tell her every morning.
But, personification is this design tool, and how far is it fair for us to lean into it to make it convenient? In the same way that people name their favorite outfit, or their cars, or give their house a name—just as a convenience in their own mind—versus actually believing this thing is human and feeling empathy for it.
When I call out to Alexa in the morning, I don’t feel empathy for Alexa. I do wonder if my six-year-old son feels empathy for Alexa, and if by having that stuff in the homes—
—Do you know the story about the Japanese kids in the mall and the robot?
No.
There was this robot that was put in this Japanese mall. They were basically just trying to figure out how to make sure that the robot can get around people. The robot was programmed to ask politely for you to step aside, and if you didn’t, it would go around you.
And some kids started stepping in front of it when it tried to go around them. And then, they started bullying it, calling it names, hitting it with things. The programmers had to re-circle and say, “We need to rewrite the program so that, if there are small people, kids, and there’s more than a few, and there’s not big people around; we’ve got to program the robot to run away towards an adult.” And so, they do this.
Now, you might say, "Well, that's just kids being kids." But here's the interesting thing: When they later took those kids and asked them, "Did you feel that the robot was human-like or machine-like?" Eighty percent said it was human-like. And then, they said, "Do you feel like you caused it distress?" Seventy-five percent of them said yes. And so, these kids were willing to do that even though they regarded it as human-like and capable of feeling emotion.
They treated it like another kid.
Right. So, what do you read in the tea leaves of that story?
Well, more of the same, I’m afraid, in that we’re raising a generation—funny enough, Japan really did start this—where there needs to be familiarity with robotics. And it’s hard to separate robotics and AI, by the way. Robotics seems like the corpus of AI, and so much of what I think the public’s imagination that’s placed on AI is robotics, and has nothing to do with AI.
That is a fascinating thing to break apart, and they are starting to converge now, but back when they were doing that research, and the research like Wendy Ju does with the trash can on the public square going around, and it’s just a trashcan on wheels but it actually evokes very emotional responses from people. People personify it almost immediately even though it’s a trash can. One of the things the kids do in this case is they try and attract it with trash and say, “Come over here, come over here,” because they view it as this dog that eats trash, and they think that they can play with it. Empathy also arrives as well. Altruism arrives. There’s a great scene where this trash can falls over and a whole bunch of people go, “Aww…” and they run over and pick it up.
We’ve got to find a way to reset our natural tendencies. Technology has been our servant for all this time, and this dumb servant. And although we’re aware of it having positive and negative consequences, we’ve always thought of it as improving our experience, and we may need to adjust our thinking. The social medias might be doing that with the younger generations, because they are now seeing the great social harm that can come, and it’s like, do they put that on each other or do they put it on the platform?
But, I think some people who are very smart are painting with these broad brushes, and they’re talking about the one-hundred-year danger or the danger five years out, just because they’re struggling with how we change the way we think about technology as a companion. Because it’s getting cheaper, it’s getting more capable, and it’s invading the area of intelligence.
I remember reading about a film—I think this was in the ‘40s or ‘50s—and they just showed these college kids circles that would bounce or roll around together, or a line would come in. And they said, “What’s going on in these?” And they would personify those, they’d say, “Oh, that circle and that circle like each other.”
It’s like, if we have a tendency to do that to a circle in a film, you can only imagine that, when these robots can read your face, read your emotions—and I’m not even talking about a general intelligence—I mean something that, you know, is robotic and can read your face and it can laugh at your jokes and what not. It’s hard to see how people will be able to keep their emotions from being wrapped up in it.
Yeah, and not be tempted to explore those areas and put them into the body of capability and intelligence.
I was just reading two days ago—and I’m so bad at attribution—but a clever researcher, I think, at MIT created this program for scanning people’s social profile and looking at their profile photo… And after enough learning, building their little neural net where it’d just look at a photograph and guess whether this person was gay or not, their sexual preference, and they nail it pretty well.
I’m like, “Great, we’re teaching AI to be as shallow and presumptive as other humans, who would just make a snap judgment based on what you look like, and maybe it’s even better than us at doing it.”
I really think we need to develop machine ethics, and human ethics, and not be teaching the machine the human ethics, even if that’s a feature on the other side. And that’s more important than privacy.
Slow that down a second. When you do develop a difference between human ethics and machine ethics, I understand that; and then, don’t teach the machine human ethics. What does that mean?
We don’t need more capable, faster human ethics out of there. It could be quite damaging.
How did you see that coming about?
Like I said, it comes about through, “I’m going to create a recommendation engine.”
No, I’m sorry—the solution coming about.
Yeah.
Separating machine and human ethics.
We have this jokey thought experiment called “Death by 4.7 Stars”, where you would assume that there is a Skynet that has come to intelligence, and it has invaded recommendation engines. And when you ask it, “What should I have for lunch?”, it suggests that you have this big fatty hamburger, a pack of Lucky Strikes, and a big can of caffeinated soda.
At this point, you die of a heart attack younger. Just by handing out this horrible advice, and you trusting it implicitly, and it not caring that it’s lying to you, you just extinguish all of humanity. And then Skynet is sitting there going, “That was easy. I thought we were going to have a war between humans and machines and have to build the Matrix. Well, we didn’t have to do that.” Then, one of the AIs will be like, “Well, we did have to tell that lady to turn left on her GPS into a quarry.” And then, the AI is like, “Well, technically, that wasn’t that hard. This was a very easy war.”
So, that’s why we need to figure out this way to put a machine ethic in there. I know it seems old-fashioned. I’m a big fan of Isaac Asimov. I think he did some really good work here, and there’s other groups that are now advancing that and saying, “How can we put a structure in place where we just don’t give these robots a code of ethic?”
And then, the way you actually build these systems is important, too. AI should always come to the right conclusion. You should not then tell it, “No, come to this conclusion.” You should just screen out conclusions. You should just put a control layer in that filters out the conclusions you don’t want for your business purposes, but don’t build a feedback loop back into the machine that says, “Hey, I need you to think like my business,” because your business might need a certain amount of misdirection and non-truths to it.
And you don’t, maybe, understand the consequences because there’s a certain human filter between that stuff—what we call ‘white lies’ and such—that allows us to work. Whereas, if you amplify it times the million circuits and the probabilities that go down to the hundreds of thousands of links, you don’t really know what the race condition is going to produce with that small amount of mistruth.
And then, good governance and controls that say that little adjusted algorithm, which is very hard to ferret out—almost like the scene from Tron where they’re picking out the little golden strands—doesn’t move into other things.
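The "control layer" versus "feedback loop" distinction above can be made concrete with a small Python sketch - the products, scores, and business rule here are hypothetical, not any real recommender:

def model_rank(products):
    # Stand-in for a recommender: sort by the model's honest relevance score.
    return sorted(products, key=lambda p: p["score"], reverse=True)

def control_layer(ranked, allowed):
    # Screen out conclusions the business won't act on, without ever
    # rewriting the scores or feeding the preference back into training.
    return [p for p in ranked if allowed(p)]

products = [
    {"name": "Bank A mortgage", "score": 0.92},
    {"name": "Bank B mortgage", "score": 0.81},
]

ranked = model_rank(products)
shown = control_layer(ranked, allowed=lambda p: p["name"] != "Bank A mortgage")
print(shown)  # only Bank B is surfaced, but the model was never taught to lie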
And so, this is the kind of carefulness that we need to put into it as we deploy it, if we’re going to be careful as these magic features come along. And we want the features. There’s a whole digital lifestyle predicated on the ability for AI to establish context, that’s going to be really luxurious and awesome; and that’s one reason why I even approach things like the singularity, or “only you can prevent Skynet,” or even get preachy about it at all—because I want this stuff.
I just got back from Burning Man, and you know, Kathryn Myronuk says it’s a dress rehearsal for a post-scarcity society. What’s going to give us post-scarcity is artificial intelligence. For a large part, the ability to stand up machines enough to supply our needs, wants, and desires, and to sweep away the lower levels of Maslow’s hierarchy of need.
And then we can live in just a much more awesome society. Even before that, there’s just a whole bunch of cool features coming down the pipeline. So, I think that’s why it’s important to have this discussion now, so we can set it up in a way that it continues to be productive, trustful, and it doesn’t put the entire species in danger somehow, if we’re to believe Stephen Hawking or Elon Musk.
Another area that people are concerned about, obviously, are jobs—automation of jobs. There are three narratives, just to set them up for the listener:
The first is that AI is going to take a certain class of jobs that are ‘low-skill’ jobs, and that the people who have those jobs will be unemployed and there’ll be evermore of them competing for ever fewer low-skill jobs, and we’ll have a permanent Great Depression.
There’s a second area that says, “Oh, no, you don’t understand, everybody’s job—your job, my job, the President’s job, the speechwriter’s job, the artist’s job, everybody—because once the machines can learn something new faster than we can, it’s game over.”
And then, there’s a third narrative that says both of these are wrong. Every time we have a new technology, no matter how disruptive it is to human activity—like electricity or engines or anything like that—people just take that technology and they use it to magnify their own productivity. And they raise their wages and everybody uses the technology to become more productive, and that’s the story of the last two hundred and fifty years.
Which of those three scenarios, or a fourth one, do you identify with?
A fourth one, where the burden of productivity being the guide of work is released, or lessened, or slackened. And then, the people’s jobs who are at the most danger are the people who hate their jobs. Their jobs are at the most danger. Those are the ones that AI is going to take over first and fastest.
Why is that not my first setup, which is there are some jobs that it’s going to take over, putting those people out of work?
Because there will be one guy who really loves driving people around in his car and is very passionate about it, and he’ll still drive his car and we’ll still [get] into it. We’ll call the human car. He won’t be forced out of his job because he likes it. But the other hundred guys who hated driving a car for a living, their job will be gone because they weren’t passionate enough to protect it or find a new way to do it or enjoy doing it anymore. That’s the slight difference, I think, between what I said and what you said.
You say those hundred people won’t use the technology to find new employment?
I think an entire economy of a different kind of employment that works around passion will ultimately evolve. I'm not going to put a timescale on this, but let's take the example of "ecopoesis," which I'm a big fan of, which comes out of Kim Stanley Robinson's Mars. But probably before that was one of the first times I encountered it.
Ecopoesis is a combination of ecology poet – ecopoesis. If you practice it, you're an ecopoet. This is how it would work in the real world, right? We would take Bill Gates's proposal, and we would tax robots. Then we would take that money, and we would place an ad on Craigslist, and say, "I would need approximately sixty thousand people who I can pay $60,000 a year to go into the Lincoln National Forest, and we want you to garden the thing. We want you to remove the right amount of deadfall. We want you to remove invasive species. We want you to create glades. We want for the elk to reproduce. We want you to do this on the millions of hectares that is the Lincoln National Forest. In the end, we want it to look like Muir Woods. We want it to be just the most gorgeous piece of garden property possible."
How many people who are driving cars today or working as landscapers wouldn’t just look at that Craigslist ad and immediately apply for the opportunity to spend the next twenty years of their life gardening this one piece of forest, or this one piece of land, because they’re following their passion into it and all of society benefits from it, right? That’s just one example of what I mean.
I think you can begin a thought experiment where you can see whole new categories of jobs crop up, but also people who are so passionate in what they’re doing now that they simply don’t let the AI do it.
I was on a cooking show once. I live a weird life. While we were on it we were talking about robots taking jobs, just like you and I were. We were talking about what jobs will robots take. Robots could take the job of a chef. The sous chef walks out of the back and he says, “No, it won’t.” We’re like, “Oh, you’re with nerds discussing this. What do you mean, ‘No, it won’t’?” He’s like, “Because I’ll put a knife in its head, and I will keep cooking.”
That’s a guy who’s passionate about his job. He’s going to defend it against the robots and AI. People will follow that passion and see value in it and pursue it.
I think there’s a fourth one that’s somewhere between one and three, that is what comes out of this. Not that there won’t be short-term disruption or pain but, ultimately, I think what will happen is humanity will self-actualize here, and people will find jobs they want to do.
Just to kind of break it down more a bit, that sounds like WPA or the Depression.
Yeah.
It says, “Let’s have people paint murals, build bridges, plant saplings.”
There was a lot of that that went on, yeah.
And so, you advocate for that?
I think that that is a great bridge when we’re in that point between post-singularity—or an abundance society, post-scarcity—and we’re at this in-between point. Even before that, in the very near-term, a lot of jobs are going to be created by the deployment of AI. It actually just takes a whole lot of work to deploy and it doesn’t necessarily reverberate into removing a bunch of jobs. Often, it’s a very minute amount of productivity it adds to a job, and it has an amplifying effect.
The industry of QA is going to explode. Radiologists, their jobs are not going to be stolen; they’re going to be shifted to the activity of QA to make sure that this stuff is identifying correctly in the short term. Over the next twenty to fifty years, there’s going to be a whole lot of that going on. And then, there’s going to be just a whole lot of robotics fleet maintenance and such, that’s going to be going on. And some people are going to enjoy doing this work and they’ll gravitate to it.
And then, we’re going to go through this transition where, ultimately, when the robots start taking care of something really lower-level, people are going to follow their passions into higher-level, more interesting work.
You would pay for this by taxing the robots?
Well, that was Bill Gates’s idea, and I think there’s a point in history where that will function. But ultimately, the optimistic concept is that this revolution will bring about so much abundance that the way an economy works itself will change quite a bit. Thus, you pay for it out of just doing it.
If we get to the point where I can stick out my hand, and a drone drops a hammer when I need a hammer to build something, how do you pay for that transaction? If that’s backed with a Tokamak Reactor—we’ve created fusion and energy is superfluous—how do you pay for that? It’s such a miniscule thing that there just might not be a way to pay for it, that paying for things will just completely change altogether.
You are a designer.
I’m a product designer, yes. That’s what I do by trade.
So, how do you take all of that? And how does that affect your job today, or tomorrow, or what you’re doing now? What are the kinds of projects you’re doing now that you have to apply all of this to?
This is how young it actually is. I am currently just involved in what does the tooling look like to actually deploy this at any kind of scale. And when I say “deploy,” I don’t mean sentience or anything close to it; but just something that can identify typos better than the current spellcheck system. Or identify typos in a very narrow sphere of jargon that other people know. Those are the problems being worked on right now. We’re scraping pennies outside of dollars, and it just needs a whole lot of tooling on that right now.
And so, the way I get to apply this, quite fundamentally, is to help influence what are the controls, governance, and transparency going to look like, at least in the narrow sphere where I’m working with people. After that, it’s all futurism, my friend.
But, on a day-to-day basis at argo, where do you see designing for this AI world? Is it all just down to the tooling area?
No, that’s just one that’s very tactical. We are actually doing that, and so it’s absorbing a lot of my day.
We have had a few clients come in and be like, “How do I integrate AI?” And you can find out it’s a very ticklish problem of like, “Is your business model ready for it? Is your data stream ready for it? Do you have the costing ability to put it all together?” It’s very easy to sit back and imagine the possibilities. But, when you get down to the brass tacks of integration and implementation, you start realizing it needs more people here to work on it.
Other than putting out visions that might influence the future, and perhaps enter into the zeitgeist our opinion on how this could transpire, we’re really down in the weeds on it, to be honest.
In terms of far out, you’ve referred to the singularity a number of times, do you believe in Kurzweil’s vision of the singularity?
I actually have something that I call “the other singularity”. It’s not as antagonistic as it sounds. It’s meant like the other cousin, right? While the singularity is happening—his grand vision, which is very lofty—there’s this other singularity going on. This one of cast-offs of the exponential technology curve. So, as computational power gets less expensive, yesterday’s computer—the quadcore computer that I first had for $3,000—is now like a $40 gum stick, and pretty soon it’s going to be a forty-cent MCU computer on a chip.
At that point, you can apply computational power to really mundane and ordinary things. We’re seeing that happen at a huge pace.
There’s something I like to call the “single-function computer” and the new sub-$1000. In the ‘90s, when computers were out there… They were out there for, really, forty, fifty years before mass adoption hit. From a marketing perspective, it was said that, until a price comes below $1,000 for a multifunction computer, they won’t reach adoption. Soon as it did, they spread widely.
We still buy these sub-$1000 computers. Some of us buy slightly more in order to get an Apple on the front of them and stuff, but the next sub-$1000 is how to get a hundred computers in the home for under $1,000 and that’s being worked on now.
What they’re going to do is take the function of these single-function computers, which take a massive amount of computational power, and dedicate them to one thing. The Nest would be my first example that people are most familiar with. It has the same processing power as the original MacBook G4 laptop, and all that processing power is just put to algorithmically keeping your home comfortable in a very exquisite out-of-the-box experience.
We’re seeing more and more of these experiences erupt. But they’re not happening in this elegant, singularity, intelligence-fed path. They just do what they do procedurally, or with a small amount of intelligence, and they do it extremely well. And it’s this big messy mess, and it’s entirely possible that we reach a form of the singularity without sentient artificial intelligence guiding it.
An author that I really love that works in this space a lot is Cory Doctorow. He has a lot of books that kind of propose this vision where machines are somehow taking care of this lower level of Maslow’s hierarchy of needs, and creating a post-scarcity society, but they are not artificial intelligence. They have no sentience. They’re just very, very capable at what they do, and there’s a profundity of them to do a lot of things.
That’s the other singularity, and that’s quite possibly how it may happen, especially if we decide that sentience is so dangerous [that] we don’t need it. But I find it really encouraging and optimistic, that there is this path to the future that does not quite require it, but could still give us a lot of what we see in these singularity-type visions of the future—the kind of abundance, and ability to not be toiling each day for survival. I love that.
I think Kurzweil thinks that the singularity comes about because of emergence.
Yeah.
Because, at some point, you just bolt enough of this stuff together and it starts glowing with some emergent behavior, that it is at a conscious decision that we decide, “Let’s build.”
Yeah, the exponential technology curve predicts the point at which a computer can have the same number of computations as we have neurons, right? At which point, I agree with you, it kind of implies that sentience will just burst forth.
Well, that’s what he says.
Yeah.
That’s the question, isn’t it?
I don’t think it happens that way.
What do you think happens?
I don’t think sentience just bursts forth at that moment.
First of all, taking a step back, in what sense are you using the word ‘sentience’? Strictly speaking, it means ‘able to sense something, able to feel’—that’s it. Then, there’s ‘sapience’, which is intelligent. That’s what we are, homo sapiens. Then, there’s ‘consciousness’, which is the ability to have subjective experience—that tea you just drank tasted like something and you tasted it.
In what sense are you thinking of computers—not necessarily having to be that?
Closer to the latter. It’s something that is aware of itself and begins guiding its own priorities.
You think we are that. We have that, humans.
Yeah.
Where do you think it comes from? Do you think it’s an emergent property of our brains? Is it something we don’t know? Do you have an opinion on that?
I mean, I’m a spiritualist, so I think it derives from the resonance of the universe that was placed there for a reason.
In that view of the world, you can’t manufacture that, in other words. It can’t come out of the factory and someplace.
To be metaphysical, yes. Like Orson Scott Card, will the philotics plug into the machine, and suddenly it wakes up and it has the same cognitive powers as a human? Yeah, I don’t know.
What you do, which is very interesting, is you say, “What if that assumption—that one assumption—that someday the machine kind of opens its eyes; what if that one assumption isn’t true?” Then what does the world look like, of ever-better computers that just do their thing, and don’t have an ulterior motive?
Yeah, and the truth is they could also happen in parallel. Both could be happening at the same time, as they are today, and still progress. But I think it’s really fascinating. I think some people guard themselves. They say, “If this doesn’t happen, there’s nothing smart enough to make all the decisions to improve humanity, and we’re still going to have to toil away and make them.” And I say, “No, it might be entirely possible that there’s this path where just these little machines, and profundity do it for us and sentience is not necessary.”
It also opens up the possibility that, if sentience does just pop into existence right now, it makes very fair the debate that you could just turn it off, that you could commit the genocide of the machine and say, “We don’t want you or need you. We’re going to take this other path.”
We Skynet them.
We Skynet them, and we keep our autonomy and we don’t worry about the perils. I think part of the fear about this kind of awareness—we’ve been calling it sentience—kind of theory on AI, is this fear that we just become dependent on them, and subservient to them, and that’s the only path. But I don’t think it is.
I think there’s another path where technology takes us to a place of great capability so profound that it even could remove the base layer of Maslow’s hierarchy of needs. I think of books like Makers by Cory Doctorow and others that are forty years in the future, and you start thinking of micro-manufacturing.
We just put up this vision on Amazon and Whole Food, which was another nod towards this way of thinking. That ignoring the energy source a little bit—because we think it’s going to sort itself out, everyone has solar on their hands or Tokamak—if you can get these hydroponic gardens into everyone’s garage, produce is just going to be so universally available. It goes back to being the cheapest of staples. Robots could reduce spoilage by matching demand, and this would be a great place for AI to live.
AI is really good at examining this notion of like, “I think you’re going to use those Brussels sprouts, or I think your neighbor is going to use them first.” We envision this fridge that has a door on the outside, which really solves a lot of delivery problems. You don’t need those goofy cardboard boxes with foil and ice in them anymore. You just put it in the fridge. It also can move the point of purchase all the way into the home.
When you combine that with the notion of this dumber AI that’s just sitting there, deciding whether you or the neighbor needs Brussels sprouts, it can put the Brussels sprouts there opportunistically, thinking, “Maybe he’ll get healthy this week.” When I don’t take them before they spoil, it can move them over to the neighbor’s fridge where they use [them]. You just root so much spoilage out of the system, that nutrition just raises and it becomes more ubiquitous.
Now, if people wanted to harvest those goods or tend those gardens, they could. But, if people didn’t, robots could make up the gap. Next thing you know, you have a food system that’s decoupled from the modern manufacturing system, and is scalable and can grow with humanity in a very fascinating way.
Do you think we’re already dependent on the machine? Like, if an EMP wave just fried all of our electronics, a sizeable part of the population dies?
I think that’s very likely. Ignoring all the disaster and such right then, it would take a whole lot of… I don’t necessarily think that’s purely a technological judgment. It’s just the slowness of humanity to change their priorities. In other words, we would realize too late that we all needed to rededicate our resources to a certain kind of agriculture, for instance, before the echo moved through the machine. That would be my fear on it—that we all engrain our habits and we’re too slow to change them.
Way to kill off humanity three times in this podcast!
That’s right.
Does that happen in most of these that you are doing?
No.
Oh, great! It’s just my dark view.
It’s really hard to kill us off, isn’t it?
Yeah.
Because, if it was going to happen, it seems like it would have happened before when we had no technology. You know, there were just three million of us five-thousand years ago. By some counts, thousands of us, at one time, and wooly mammoths running around.
But back then, ninety-nine percent of our technology was dedicated to survival, and it’s a way lower percentage now. In fact, we invented a percentage of technology that is dedicated to our destruction. And so, I don’t know how much the odds have changed. I think it’s a really fascinating discussion—probably something that AI can determine for us.
Well, I don’t know the percentage. It would be the gross amount, right?
Yeah.
Because you could say the percentage of money we’re spending on food is way down, but that doesn’t mean we’re eating less. The percentage of money we’re spending on survival may be way down, but that doesn’t mean we’re spending less.
Yeah.
In a really real-world kind of way, there’s a European initiative that says: When an AI makes a decision that affects you, you have a right to know why it made that decision. What do you think of that? I won’t impute anything. What do you think of that?
Yeah, I think Europe is ahead of us here. The funny thing is a lot of that decision was reported as rights for AI, or rights for robots. But when you really dig into it, it’s rights for humans. And they’re good rights.
If I were to show you designs out of my presentations right now, I have this big design that’s… You’re just searching for a car and it says, “Can I use your data to recommend a car?” and you click on that button and say yes. That’s the way it should be designed. We have taken so many liberties with people’s data and privacy up until now, and we need to start including them in on the decision.
And then, at the bottom of it, it has a slider that says, “The car you want, the car your wife wants.” You should also have transparency and control of the process, right? Because machine learning and artificial intelligence produces results with this kind of context, and you should be allowed to change the context.
First of all, it’s going to make for a better experience because, if it’s looking at all my data historically, and it’s recommended to me the kind of sleeping bag I should buy, it might need to be aware—and I might have to make it aware—that I’m moving to Alaska next week, because it would make a different recommendation. This kind of transparency in government actually… And I also think they put in another curious thing—and we’ll see how it plays out through the court—but I believe they also said that, if you get hurt by it—this was the robotic side—the person who made the robot is responsible for it.
The thesis is that some human along the way made a decision that hurt you.
Yes, or the business corpus that put this robot out there is responsible for it. It’s the closest thing to the three laws of robotics or something put into law that we’ve seen yet. It’s very advanced thinking, and I like it; and it’s already in our design practice.
We’re already trying to convince clients that this is the way to begin designing experiences. More than that, we’re trying to convince our fellow designers, because we have a certain role in this that we can use to design experiences so that they are open and transparent to the person using them. That little green LED light says, “AI is involved in this decision,” so you might judge that differently.
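As a rough sketch of the consent button, the preference slider, and the context override described above (the field names, weights, and cars here are hypothetical, not an actual argodesign implementation):

```python
def blend(weights_a, weights_b, slider):
    """Mix two preference profiles; slider=0.0 is all A, slider=1.0 is all B."""
    keys = set(weights_a) | set(weights_b)
    return {k: (1 - slider) * weights_a.get(k, 0) + slider * weights_b.get(k, 0)
            for k in keys}

def recommend_cars(cars, my_prefs, partner_prefs, consent_given, slider=0.0, context=None):
    """Rank cars only after explicit consent, exposing both the blend and the context."""
    if not consent_given:
        # The experience should ask first: "Can I use your data to recommend a car?"
        raise PermissionError("No consent given to use personal data.")
    weights = blend(my_prefs, partner_prefs, slider)
    if context and context.get("moving_to") == "Alaska":
        # A context override the user supplies, since purchase history alone would miss it.
        weights["all_wheel_drive"] = 1.0

    def score(car):
        return sum(weights.get(f, 0) * v for f, v in car["features"].items())

    return sorted(cars, key=score, reverse=True)

cars = [
    {"name": "roadster", "features": {"mpg": 0.3, "all_wheel_drive": 0.0, "cargo": 0.2}},
    {"name": "wagon",    "features": {"mpg": 0.6, "all_wheel_drive": 1.0, "cargo": 0.9}},
]
ranked = recommend_cars(cars, {"mpg": 0.8}, {"cargo": 0.9}, consent_given=True,
                        slider=0.5, context={"moving_to": "Alaska"})
print(ranked[0]["name"])  # wagon
```

The point is less the scoring math than the shape of the interface: consent is a hard gate, and the slider and the context are levers the person can see and move.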
But where does that end? Or does that inherently limit the advancement of the technology? Because you could say, “I rank number two in Google for some search—some business-related search—and somebody else ranks number one.” I could go to Google and say, “Why do I rank number two and they rank number one?” Google could, in all fairness, say, “We don’t know.”
Yeah, that’s a problem.
And so, do you say, “No, you have to know. You’ve got to limit the technology until you can answer that question,” or do you just say, “We don’t know how people make decisions.” You can’t ask the girl why she didn’t go out with you. “Why aren’t you going out with me?” That affects me. It’s like, “I’m just not going to.”
You’ve framed the consumer’s dilemma in everything from organic apples to search results, and it’s going to be a push-and-pull.
But I would say, yeah, if you’re using artificial intelligence, you should know a little bit about how it’s being produced, and I think there’ll be a market for it. There’s going to be a value judgment on the other side. I really think that some of the ways we’re looking at designing experiences, it’s much more valuable to the user to see a lot of these things and know it—to be able to adjust the rankings based on the context that they’re in, and they’re going to prefer that experience.
I think, eventually, it’ll all catch up in the end.
One last story, I used to sell snowboards. So much of this is used for commerce. It’s an easy example for us to understand, retail. I used to sell snowboards, and I got really good at it. My intelligence on it got really focused. I was at a pretty good hit rate. Someone could walk in the door, and if I wrote down what snowboard they were going to buy, I was probably right eighty-five to ninety-percent of the time. I got really good at it. By the end of the season, you just know.
But, if I walked up to any of those people and said, “Here’s your snowboard,” I would never make a sale. I would never make a sale. It creeps them out, they walk away, the deal is not closed. There’s a certain amount of window dressing, song and dance, gathering of information to make someone comfortable before they will make that decision to accept the value.
Up until now, technology has been very prescriptive. You write the code, it does what the code says. But that’s going to change: with probabilities and context-gathering, that prescriptiveness goes away. But to be successful, there is still going to have to be that path, and it’s the perfect place to put in what we were just talking about—the transparency, the governance, and the guidance to the consumer to let them know that they’re in on that type of experience. Why? You’re going to sell more snowboards if you do.
In your view of a world where we don’t have this kind of conscious AGI, we’re one notch below that, will those machines still pass the Turing test? Will you still be able to converse with them and not know that it’s a computer you’re talking to?
I think it’ll get darn close, if not all the way there. I don’t think you could converse with them as much as people imagine though.
Fair enough. I’m going to ask you a privacy question. Right now, privacy is largely protected by just the sheer amount of data. Nothing can listen to every phone conversation. Nothing can do that. But, once a machine can listen to them all, then it can.
Then, we can hear them all right now, but we can’t listen to them all.
Correct. And I read that you can now get human-level lip-reading from cameras, and you get facial recognition.
Yeah.
And so you could understand that, eventually, that’s just a giant data mining problem. And it isn’t even a nefarious one, because it’s the same technology that recommends what you should buy someplace.
Yeah.
Tell me what you think about privacy in a world where all of that information is recorded and, I’m going to use ‘understood’ loosely, but able to be queried.
Yeah, this is the, “I don’t want a machine knowing what I had for lunch,” question. The machine doesn’t care; people care. What we have to do is work to develop a society where privacy is a virtue, not a right. When privacy is a right, you have to maintain it through security. The security is just too fallible, especially given the modern era.
Now, there’ll always be that certain kind of thing, but privacy-as-a-virtue is different. If you could structure society where privacy is a virtue, well, then it’s okay that I know what you had for lunch. It’s virtuous for me to pretend like I don’t know what you had for lunch, to not act on what I know you had for lunch, and not allow it to influence my behavior.
It sounds almost Victorian, and I think there is a reason that, in the cyberpunk movement in science fiction, you see this steampunk kind of Victorian return. In the Victorian era, we had a lot of etiquette based on just the size of society. The new movement of information meant that you knew a lot about people’s business that you previously wouldn’t have known. And the way we dealt with it was this kind of really pent-up morality where it was virtuous to pretend like you didn’t know—almost to make it a game and not allow it to influence your decision-making. Only priests do this anymore.
But we’re all going to have to pick up the skill and train our children, and I think they’re training themselves to do it, frankly, right now, because of the impacts of social media on their lives. We might return to this second Victorian era, where I know everything about you but it’s virtuous.
Now, that needs to bleed into the software and the hardware architectures as well. Hard drives need to forget. Code algorithms need to forget, or they need to decide what information they treat as virtuous. This way, we can have our cake and eat it, too. Otherwise, we’re just going to be in this weird security battle forever, and it’s not going to function. The only people who are going to win in that one are the government. We’re just going to have to take it back in this manner.
Now, you can just see how much optimism bleeds through me when I say it this way, and I’m not unaware of my optimism here, but I really think that’s the key to this. Any time we’re faced with a feature, we just give up our privacy for it. And so, we may as well start designing the world that can operate with less privacy-as-a-right.
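A minimal sketch of the “hard drives need to forget” idea from a couple of paragraphs up: storage where every record carries an expiry, so forgetting is the default rather than something security has to enforce after the fact. The class and method names are hypothetical.

```python
import time

class ForgetfulStore:
    """Key-value storage that forgets: expired records are dropped, not retained."""

    def __init__(self):
        self._records = {}  # key -> (value, expires_at)

    def put(self, key, value, ttl_seconds):
        """Store a value the system is allowed to remember for ttl_seconds."""
        self._records[key] = (value, time.time() + ttl_seconds)

    def get(self, key, default=None):
        entry = self._records.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.time() >= expires_at:
            del self._records[key]  # forgetting is enforced in code, not by policy
            return default
        return value

store = ForgetfulStore()
store.put("lunch", "tacos", ttl_seconds=60 * 60 * 24)  # remembered for a day, then gone
print(store.get("lunch"))  # "tacos" today; None once the TTL has elapsed
```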
It’s funny, because I always hear this canard that young people don’t care about privacy, but that’s not my experience. I have four kids. My oldest son always comes in and says, “How can you use that? It’s listening to everything you’re doing.” Or, “How do you have these settings on your computer the way you do?” I’m like, “Yeah, yeah, well…” But you say, not only do they value it more, but they’re learning etiquette around it as well.
Yeah, they’re redefining it.
They see what their friends did last night on social media, but they’re not going to mention it when they see them.
That’s right, and they’re going to monitor their own behavior. They just have to in order to function socially. We as creatures need this. I think we grew up in a more unique place. It’s goofy, but I lived in 1867. You had very little privacy in 1867.
That’s right. You did that PBS thing.
Yeah, I did that PBS thing, that living history experiment. Even though it’s fourteen people, the impacts of a secret or something slipping out could be just massive, but everyone has that impact. There was an anonymity that came from the Industrial Revolution that we, as Gen Xers, probably enjoy the zenith of, and we’ve watched social media pull it back apart.
But I don’t think it’s a new thing to humanity, and I think ancestral memory will come back, and I think we will survive it just fine.
In forty-something guests, you’ve referred to science fiction way more than even the science fiction writers I have on the show.
I’m a fanboy.
Tell me what you think is really thoughtful. I think Frank Herbert said, “Sometimes, the purpose of science fiction is to keep the future from happening.”
Yes.
Tell me some examples. I’m going to put you on the spot here.
I just heard that from Cory Doctorow two weeks ago, that same thing.
Really? I heard it because I used to really be annoyed by dystopian movies, because I don’t believe in them, and yet I’m required to see them because everybody asks me about them. “Oh, my gosh, did you see Elysium?” and I’m like, “Yes, I saw Elysium.” And so, I have to go see these and they used to really annoy me.
And then, I saw that quote a couple of years ago and it really changed me, because now I can go to them and say, “Ah, that’s not going to happen.”
Anyway, two questions: Are there any futures that you have seen in science fiction that you think will happen? Like, when you look at it, you say, “That looks likely to me,” because it sounds like you’re a Gene Roddenberry futurist.
I’m more of a Cory Doctorow futurist.
And then, are there ones you have seen that you think could happen, but you don’t think it’s going to happen, but it could?
I’m still on the first question. In my recent readings, the Kim Stanley Robinson and Cory Doctorow works are very good.
Now, let’s talk about Iain M. Banks, the whole Culture series, which is so far-future, and so grand in scale, and so driven by AI that knows it’s superior to humans—but is fascinated with them. Therefore, it doesn’t want to destroy them but rather to attach itself to their society. I don’t think that is going to happen, but it could happen. It’s really fascinating.
It’s one of those bigger-than-the-galaxy type universes where you have megaships that are mega-AIs, and can do the calculations of a trillion humans in one second, and they keep humans around for two reasons… And this is how they think about it: One, they like them, they’re fascinating and curious; and two, there are thirteen humans who, by sheer random chance, are always right. Therefore, they need a certain density of humanity just so they can consult them when they can’t come up with an answer of enough certainty.
So, there are thirteen humans that are always right.
Yeah, because there are so many trillions and trillions of them. And the frustrating thing to these AI ships is, they can’t figure out why they’re always right, and no one has decided which theory is correct. But the predominant leading theory is that they’re just making random decisions because there are so many humans, these thirteen random decisions happen to always be correct. And the humans themselves, we get a little profile of one of them and she’s rather depressed, because we can’t be fatalists as a species.
Jared, that is a wonderful place to leave this. I want to thank you for a fascinating hour. We have covered, I think, more ground than any other talk I’ve had, and I thank you for your time!
Thank you! It was fun!
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here. 
0 notes
babbleuk · 7 years
Text
Voices in AI – Episode 7: A Conversation with Jared Ficklin
Today's leading minds talk AI with host Byron Reese
In this episode, Byron and Jared talk about rights for machines, empathy, ethics, singularity, designing AI experiences, transparency, and a return to the Victorian era.
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today, our guest is Jared Ficklin. He is a partner and Lead Creative Technologist at argodesign.
In addition, he has a wide range of other interests. He gave a well-received mainstage talk at TED about how to visualize music with fire. He co-created a mass transit system called The Wire. He co-designed and created a skatepark. For a long while, he designed the highly-interactive, famous South by Southwest (SXSW) opening parties which hosted thousands and thousands of people each year.
Welcome to the show, Jared.
Jared Ficklin: Thank you for having me.
I’ve got to start off with my basic, my first and favorite question: What is artificial intelligence?
Well, I think of it in the very mechanical way: it is a machine intelligence that has reached a point of sentience. But I think it is just a broad umbrella that we kind of apply to any case where computation is attempting to solve problems with human-like thoughts or strategies.
Well, let’s split that into two halves, because there was an aspirational half of sentience, and then there was a practical half. Let’s start with the practical half. When it tries to solve problems that a person can solve, would you include a sprinkler that comes on when your lawn is dry as being an artificial intelligence? Because I don’t have to keep track of when my lawn is dry; the sprinkler system does.
First of all, this is my favorite half. I like this half of the procedural side more than the sentience side, although it’s fun to think about.
But, when you think of this sprinkler that you just talked about, there’s a couple of ways to arrive at this. One, it can be very procedural and not intelligent at all. I can have a sensor. The sensor can throw off voltage when it sees soil is of a certain dryness. That can connect on an electrical circuit which throws off a solenoid, and water begins spraying everywhere.
Now, you have the magic, and a person who doesn’t know that’s going on might look at that and say, “Holy cow! It’s intelligent! It has watered the lawn.” But it’s not. That is not machine intelligence and that is not AI. It’s just a simple procedural game.
There would be another way of doing that, and that’s to use a whole bunch of computations to study, and bring in a lot of factors of the weather coming in, the same sensor telling what soil dryness is… Run it through a whole lot of algorithms and make a decision based on the probability and the threshold of whether to turn on that sprinkler or not, and that would be a form of machine learning.
Now, if you look at the two, they seem the same on the face but they’re very different—not just in how they happen, but in the outcome. One of them is going to turn on the sprinkler, even though there are seven inches of rain coming tomorrow, and the other is not going to turn on the sprinkler because it’s aware that seven inches of rain are coming tomorrow. That little added extra judgment, or intelligence as we call it, is the key difference. That’s what makes all the difference in this, multiplied by a million times. To me.
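To make that contrast concrete, here is a minimal sketch, assuming a 0-to-1 soil moisture reading and a rainfall forecast in millimetres; the heuristic numbers are placeholders standing in for a learned model, not a real irrigation algorithm.

```python
def procedural_sprinkler(soil_moisture, threshold=0.25):
    """The purely procedural version: dry soil trips the solenoid, no judgment."""
    return soil_moisture < threshold

def judged_sprinkler(soil_moisture, forecast_rain_mm):
    """The 'smarter' version: weigh the incoming weather before deciding.
    In a real system this probability would come from a learned model."""
    p_needs_water = (0.3 - soil_moisture) * 3 - forecast_rain_mm / 50
    p_needs_water = max(0.0, min(1.0, p_needs_water))
    return p_needs_water > 0.5

# Dry soil, but seven inches (~178 mm) of rain forecast for tomorrow:
print(procedural_sprinkler(0.10))      # True  -> waters anyway
print(judged_sprinkler(0.10, 178.0))   # False -> holds off for the rain
print(judged_sprinkler(0.10, 0.0))     # True  -> waters when no rain is coming
```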
Just to be clear, you specifically invoked machine learning. Are you saying there is no AI without machine learning?
No, I’m not saying that. That was just the strategy that applied in this situation.
Is the difference between those two extremes, in your mind, evolutionary? It’s not a black-and-white difference?
Yeah, there’s going to be scales and gradients. There’s also different strategies and algorithms that breed this outcome. One had a certain presumption of foresight, and a certain algorithmic processing. In some ways, it’s much smarter than a person.
There’s a great analogy. Matthew Santone, who is a co-worker here, is the first one who introduced me to the analogy. And I don’t know who came up with it, but it’s the ten thousand squirrels analogy around artificial intelligence in its state today.
On the face of it, you would think humans are much smarter than squirrels, and in many ways we are, but a squirrel has this particular capability of hiding ten thousand nuts in a field and being able to find them the next spring. When it comes to hiding nuts, a squirrel is much more intelligent than we are.
That’s another one of the key attributes of this procedural side of artificial intelligence, I think. It’s that these algorithms and intelligence become so focused on one specific task that they actually become much more capable and greater at it than humans.
Where do you think we are? Needless to say, the enthusiasm around AI is at a fevered pitch. What do you think brought that about, and do you think it’s warranted?
Well, it’s science fiction, I think, that has brought it about. Everything from The Matrix in film to books by John Varley or even Isaac Asimov has given us a fascination with machines and artificial intelligence and what they can produce.
Then, right now, the business world is just talking all about it, because, I think, we’re at the level of the ten thousand squirrels. They can see a lot of value of putting those squirrels together to monitor something—you know, find those nuts in a way better than a human can. When you combine the two, it’s just on everyone’s lips and everywhere.
It doesn’t hurt that some of the bigwigs of thinkers of our time are out there talking about how dangerous it could possibly be, and that captures everyone’s attention as well.
What do you think of that? Why do you think that there are people who think we’re going to have an artificial general intelligence in a few years—five years is the earliest—and it’s something we should be concerned about? And then, there are people who say it’s not going to come for hundreds of years, and it’s not something we should be worried about. What is different in how they’re viewing the world?
It might be a reflection of the world that they live in, as well. For me, I really see two scales of danger. One is that we, as humans, put a lot of faith in machines—particularly our generation, Generation X. When I go to drive across town—and I’ve lived in my hometown of Austin, Texas, for seventeen years—I know a really good short route right through downtown. Every time I try to take it, my significant other will tell me that Google says there is a better route. We trust technology more than other humans.
The problem comes in, it’s like, if you have these ten thousand squirrels and they’re a toddler-level AI, you could turn over control far too early and end up in a very bad place. A mistake could happen, it could shut down the grid, a lot of people could die. That’s a form of danger I think some people are talking about, and they’re talking about it on the five-year scale because that’s where it’s at. You could get into that situation not because it’s more intelligent than us, but just because you put more reliance on something that isn’t actually very intelligent. That’s one possible danger that we’re facing.
The hundred-year danger is that I think people are afraid of the Hollywood scenario, the Skynet scenario, which I’m less afraid of—although I have one particular view on that that does give me some concern. I do get up every morning and tell Alexa, “Alexa, tell the robots I am on your side,” because I know how they’re programming the AI. If I write that line of code ten-thousand times, maybe I can get in the algorithm.
There are more than a few efforts underway; by one count, twenty-two different governments are trying to figure out how to weaponize artificial intelligence. Does that concern you or is that just how things are?
Well, I’m always concerned about weaponization, but I’m not completely concerned. I think militaries think in a different way than creative technologists. They can do great damage, but they think in terms of failsafe, and they always have. They’re going to start from the position of failsafe. I’m more worried about marketing and a lot of areas where they work quick and dirty, and they don’t think about failsafe.
If you’re going to build a little bit of a neural net or a machine learning system, it’s open-sourced, it’s up on the cloud, a lot of people are using it, and you’re using it to give recommendations. And then at the end of the recommendations you’re not satisfied with it, and you say, “I know that you have recommended this mortgage from Bank A but the client is Bank B, so how can we get you to recommend Bank B?”
Essentially, teaching the machines that it’s okay to lie to humans. That is not operating from a position of failsafe. So it might just be marketing—clever terms like ‘programmatic’ and what not—that generates Skynet, and not necessarily the military industrial complex, which really believes in kill switches.
More kind of real world day-to-day worries about the technology—and we’re going to get to all the opportunities and all the benefits and all of that in just a moment.
Start with the fear.
Well, I think the fear tells us more, in a way, about the technology because it’s fun to think about. As far back as storytelling, we’ve talked about technologies that have run amok. And it seems to be this thing, that whenever we build something, we worry about it. Like, they put electricity in the White House, but then the president would never touch it and wouldn’t let his family touch it. When they put radios in cars, they said, “Oh, distracted driving, people are going to crash all the time.”
Airbags are going to kill you.
Right. Frankenstein, right? The word ‘robot’ comes to us from a Czech play.
You just hit a part of the psyche that I think people are letting in, too, when you said Frankenstein. It’s personification that often is the dangerous thing.
Think of people who dance with poisonous snakes. Sometimes it’s done as a dare, but sometimes it’s done because there’s a personification put on the animal that gives it greater importance than what it actually is, and that can be quite dangerous. I think we risk that here, too, just putting too much personification, human tendencies, on the technology.
For instance, there is actually a group of people who are advocating rights for industrial robots today, as if they are human, when they are not. They are very much just industrial machines. That kind of psyche is what I think some people are trying to inoculate now, because it walks us down this path where you’re thinking you can’t turn that thing off, because it’s given this personification of sentience before it has actually achieved it.
It’s been given this notion of rights before it actually has them. And the judgment of, even if it’s dangerous and we should hit the kill switch, there are going to be people reacting against that, saying, “You can’t kill this thing off”—even though it is quite dangerous to the species. That, to me, is a very interesting thing because a lot of people are looking at it as if, if it becomes intelligent, it will be a human intelligence.
I think that’s what a lot of the big thinkers think about, too. They think this thing is not going to be human intelligence, at which point you have to make a species-level judgment on its rights, and its ability to be sentient and put out there.
Let’s go back to the beginning of that conversation with ELIZA and Weizenbaum.
This man in the ‘60s, Weizenbaum, made this program called ELIZA, and it was a really simple chatbot. You would say, “I am having a bad day.” And it says, “Why are you having a bad day?” And then, you would say, “I’m having a bad day because of my mom.” “What did your mom do to make you have a bad day?” That’s it, very simple.
But Weizenbaum saw that people were pouring their heart out to it, even knowing that it was a machine. And he turned on it. He was like, “This is terrible.” He said, “When a machine says, ‘I understand,’ the machine is telling a lie. There is no ‘I’ there. There is nothing that understands anything.”
Is your comment about personification a neutral one? To say, “I am observing this,” or are you saying personification is a bad thing or a good thing? If you notice, Alexa got a name, Siri got a name, Cortana got a name, but Google Assistant didn’t get a name.
Start there—what are your thoughts on personification in terms of good, bad, or we don’t know yet?
In the way I was just talking about it, personification, I do think is a bad thing, and I do see it happening. In the way you just talked about it, it becomes a design tool. And as a design tool, it’s very useful. I name all my cars, but that’s the end of the personification.
You were using it to say they actually impute human characteristics on these beyond just the name?
Yes, when someone is fighting for the human rights or the labor rights of an industrial machine, they have put a deep personification on that machine. They’re feeling empathy for it, and they’re feeling it should be defended. They’re seeing it as another human or as an animal; they’re not seeing it as an industrial machine. That’s weird, and dangerous.
But, you as a designer, think, “Oh, no, it’s good to name Alexa, but I don’t want people to start thinking of Alexa as a thing.”
Yeah.
But you’re a part of that then, right?
Yeah, we are.
You’re naming it and putting a face on it.
You’ve circled right back to what I said—Skynet is going to come from product design and marketing.
From you.
Well, I did not name Alexa.
And just for the record, we’re not impugning Alexa here.
Yeah, we are not. I love Alexa. I have it, and like I said, I tell her every morning.
But, personification is this design tool, and how far is it fair for us to lean into it to make it convenient? In the same way that people name their favorite outfit, or their cars, or give their house a name—just as a convenience in their own mind—versus actually believing this thing is human and feeling empathy for it.
When I call out to Alexa in the morning, I don’t feel empathy for Alexa. I do wonder if my six-year-old son feels empathy for Alexa, and if by having that stuff in the homes—
—Do you know the story about the Japanese kids in the mall and the robot?
No.
There was this robot that was put in this Japanese mall. They were basically just trying to figure out how to make sure that the robot can get around people. The robot was programmed to ask politely for you to step aside, and if you didn’t, it would go around you.
And some kids started stepping in front of it when it tried to go around them. And then, they started bullying it, calling it names, hitting it with things. The programmers had to re-circle and say, “We need to rewrite the program so that, if there are small people, kids, and there’s more than a few, and there’s not big people around; we’ve got to program the robot to run away towards an adult.” And so, they do this.
Now, you might say, “Well, that’s just kids being kids.” But here’s the interesting thing: When they later took those kids and asked them, “Did you feel that the robot was human-like or machine-like?” Eighty-percent said it was human-like. And then, they said, “Do you feel like you caused it distress?” Seventy-five percent of them said yes. And so, these kids were willing to do that even though they regarded it as human-like and capable of feeling emotion.
They treated it like another kid.
Right. So, what do you read in the tea leaves of that story?
Well, more of the same, I’m afraid, in that we’re raising a generation—funny enough, Japan really did start this—where there needs to be familiarity with robotics. And it’s hard to separate robotics and AI, by the way. Robotics seems like the corpus of AI, and so much of what I think the public’s imagination that’s placed on AI is robotics, and has nothing to do with AI.
That is a fascinating thing to break apart, and they are starting to converge now, but back when they were doing that research, and the research like Wendy Ju does with the trash can on the public square going around, and it’s just a trashcan on wheels but it actually evokes very emotional responses from people. People personify it almost immediately even though it’s a trash can. One of the things the kids do in this case is they try and attract it with trash and say, “Come over here, come over here,” because they view it as this dog that eats trash, and they think that they can play with it. Empathy also arrives as well. Altruism arrives. There’s a great scene where this trash can falls over and a whole bunch of people go, “Aww…” and they run over and pick it up.
We’ve got to find a way to reset our natural tendencies. Technology has been our servant for all this time, and this dumb servant. And although we’re aware of it having positive and negative consequences, we’ve always thought of it as improving our experience, and we may need to adjust our thinking. The social medias might be doing that with the younger generations, because they are now seeing the great social harm that can come, and it’s like, do they put that on each other or do they put it on the platform?
But, I think some people who are very smart are painting with these broad brushes, and they’re talking about the one-hundred-year danger or the danger five years out, just because they’re struggling with how we change the way we think about technology as a companion. Because it’s getting cheaper, it’s getting more capable, and it’s invading the area of intelligence.
I remember reading about a film—I think this was in the ‘40s or ‘50s—and they just showed these college kids circles that would bounce or roll around together, or a line would come in. And they said, “What’s going on in these?” And they would personify those, they’d say, “Oh, that circle and that circle like each other.”
It’s like, if we have a tendency to do that to a circle in a film, you can only imagine that, when these robots can read your face, read your emotions—and I’m not even talking about a general intelligence—I mean something that, you know, is robotic and can read your face and it can laugh at your jokes and what not. It’s hard to see how people will be able to keep their emotions from being wrapped up in it.
Yeah, and not be tempted to explore those areas and put them into the body of capability and intelligence.
I was just reading two days ago—and I’m so bad at attribution—but a clever researcher, I think, at MIT created this program for scanning people’s social profile and looking at their profile photo… And after enough learning, building their little neural net where it’d just look at a photograph and guess whether this person was gay or not, their sexual preference, and they nail it pretty well.
I’m like, “Great, we’re teaching AI to be as shallow and presumptive as other humans, who would just make a snap judgment based on what you look like, and maybe it’s even better than us at doing it.”
I really think we need to develop machine ethics, and human ethics, and not be teaching the machine the human ethics, even if that’s a feature on the other side. And that’s more important than privacy.
Slow that down a second. When you do develop a difference between human ethics and machine ethics, I understand that; and then, don’t teach the machine human ethics. What does that mean?
We don’t need more capable, faster human ethics out of there. It could be quite damaging.
How did you see that coming about?
Like I said, it comes about through, “I’m going to create a recommendation engine.”
No, I’m sorry—the solution coming about.
Yeah.
Separating machine and human ethics.
We have this jokey thought experiment called “Death by 4.7 Stars”, where you would assume that there is a Skynet that has come to intelligence, and it has invaded recommendation engines. And when you ask it, “What should I have for lunch?”, it suggests that you have this big fatty hamburger, a pack of Lucky Strikes, and a big can of caffeinated soda.
At this point, you die of a heart attack younger. Just by handing out this horrible advice, and you trusting it implicitly, and it not caring that it’s lying to you, you just extinguish all of humanity. And then Skynet is sitting there going, “That was easy. I thought we were going to have a war between humans and machines and have to build the Matrix. Well, we didn’t have to do that.” Then, one of the AIs will be like, “Well, we did have to tell that lady to turn left on her GPS into a quarry.” And then, the AI is like, “Well, technically, that wasn’t that hard. This was a very easy war.”
So, that’s why we need to figure out this way to put a machine ethic in there. I know it seems old-fashioned. I’m a big fan of Isaac Asimov. I think he did some really good work here, and there’s other groups that are now advancing that and saying, “How can we put a structure in place where we just don’t give these robots a code of ethic?”
And then, the way you actually build these systems is important, too. AI should always come to the right conclusion. You should not then tell it, “No, come to this conclusion.” You should just screen out conclusions. You should just put a control layer in that filters out the conclusions you don’t want for your business purposes, but don’t build a feedback loop back into the machine that says, “Hey, I need you to think like my business,” because your business might need a certain amount of misdirection and non-truths to it.
And you don’t, maybe, understand the consequences because there’s a certain human filter between that stuff—what we call ‘white lies’ and such—that allows us to work. Whereas, if you amplify it times the million circuits and the probabilities that go down to the hundreds of thousands of links, you don’t really know what the race condition is going to produce with that small amount of mistruth.
And then, good governance and controls that say that little adjusted algorithm, which is very hard to ferret out—almost like the scene from Tron where they’re picking out the little golden strands—doesn’t move into other things.
And so, this is the kind of carefulness that we need to put into it as we deploy it, if we’re going to be careful as these magic features come along. And we want the features. There’s a whole digital lifestyle predicated on the ability for AI to establish context, that’s going to be really luxurious and awesome; and that’s one reason why I even approach things like the singularity, or “only you can prevent Skynet,” or even get preachy about it at all—because I want this stuff.
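A minimal sketch of that “control layer” idea: the model ranks purely on fit, and business constraints are applied after the fact rather than being fed back in as a training signal. The scoring function, offers, and field names here are hypothetical.

```python
def model_score(applicant, offer):
    """Stand-in for a learned model: score an offer purely on fit for the applicant."""
    return 1.0 - abs(offer["rate"] - applicant["target_rate"])

def rank_offers(applicant, offers):
    """The model's honest conclusion: best-fitting offers first."""
    return sorted(offers, key=lambda o: model_score(applicant, o), reverse=True)

def business_control_layer(ranked_offers, allowed_partners):
    """Screen out conclusions for business reasons, without teaching the model to lie."""
    return [o for o in ranked_offers if o["partner"] in allowed_partners]

applicant = {"target_rate": 0.04}
offers = [
    {"partner": "Bank A", "rate": 0.041},
    {"partner": "Bank B", "rate": 0.052},
]
honest = rank_offers(applicant, offers)              # Bank A ranks first on fit alone
shown = business_control_layer(honest, {"Bank B"})   # only Bank B may be displayed
print([o["partner"] for o in honest], [o["partner"] for o in shown])
```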
I just got back from Burning Man, and you know, Kathryn Myronuk says it’s a dress rehearsal for a post-scarcity society. What’s going to give us post-scarcity is artificial intelligence. For a large part, the ability to stand up machines enough to supply our needs, wants, and desires, and to sweep away the lower levels of Maslow’s hierarchy of need.
And then we can live in just a much more awesome society. Even before that, there’s just a whole bunch of cool features coming down the pipeline. So, I think that’s why it’s important to have this discussion now, so we can set it up in a way that it continues to be productive, trustful, and it doesn’t put the entire species in danger somehow, if we’re to believe Stephen Hawking or Elon Musk.
Another area that people are concerned about, obviously, are jobs—automation of jobs. There are three narratives, just to set them up for the listener:
The first is that AI is going to take a certain class of jobs that are ‘low-skill’ jobs, and that the people who have those jobs will be unemployed and there’ll be evermore of them competing for ever fewer low-skill jobs, and we’ll have a permanent Great Depression.
There’s a second area that says, “Oh, no, you don’t understand, everybody’s job—your job, my job, the President’s job, the speechwriter’s job, the artist’s job, everybody—because once the machines can learn something new faster than we can, it’s game over.”
And then, there’s a third narrative that says both of these are wrong. Every time we have a new technology, no matter how disruptive it is to human activity—like electricity or engines or anything like that—people just take that technology and they use it to magnify their own productivity. And they raise their wages and everybody uses the technology to become more productive, and that’s the story of the last two hundred and fifty years.
Which of those three scenarios, or a fourth one, do you identify with?
A fourth one, where the burden of productivity being the guide of work is released, or lessened, or slackened. And then, the people’s jobs who are at the most danger are the people who hate their jobs. Their jobs are at the most danger. Those are the ones that AI is going to take over first and fastest.
Why is that not my first setup, which is there are some jobs that it’s going to take over, putting those people out of work?
Because there will be one guy who really loves driving people around in his car and is very passionate about it, and he’ll still drive his car and we’ll still get into it. We’ll call it the human car. He won’t be forced out of his job because he likes it. But the other hundred guys who hated driving a car for a living, their jobs will be gone because they weren’t passionate enough to protect them, or find a new way to do them, or enjoy doing them anymore. That’s the slight difference, I think, between what I said and what you said.
You say those hundred people won’t use the technology to find new employment?
I think an entire economy of a different kind of employment that works around passion will ultimately evolve. I’m not going to put a timescale on this, but let’s take the example of “ecopoesis,” which I’m a big fan of, and which comes out of Kim Stanley Robinson’s Mars trilogy. That was probably the first place I encountered it, though the idea likely predates it.
Ecopoesis is a combination of ecology and poetry: if you practice it, you’re an ecopoet. This is how it would work in the real world, right? We would take Bill Gates’s proposal, and we would tax robots. Then we would take that money, and we would place an ad on Craigslist, and say, “I need approximately sixty thousand people who I can pay $60,000 a year to go into the Lincoln National Forest, and we want you to garden the thing. We want you to remove the right amount of deadfall. We want you to remove invasive species. We want you to create glades. We want the elk to reproduce. We want you to do this on the millions of hectares that is the Lincoln National Forest. In the end, we want it to look like Muir Woods. We want it to be just the most gorgeous piece of garden property possible.”
How many people who are driving cars today or working as landscapers wouldn’t just look at that Craigslist ad and immediately apply for the opportunity to spend the next twenty years of their life gardening this one piece of forest, or this one piece of land, because they’re following their passion into it and all of society benefits from it, right? That’s just one example of what I mean.
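Taking the numbers in that thought experiment at face value, the payroll works out as follows.

```python
ecopoets = 60_000
salary = 60_000  # dollars per person per year
annual_cost = ecopoets * salary
print(f"${annual_cost / 1e9:.1f} billion per year")  # $3.6 billion per year
```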
I think you can begin a thought experiment where you can see whole new categories of jobs crop up, but also people who are so passionate in what they’re doing now that they simply don’t let the AI do it.
I was on a cooking show once. I live a weird life. While we were on it we were talking about robots taking jobs, just like you and I were. We were talking about what jobs will robots take. Robots could take the job of a chef. The sous chef walks out of the back and he says, “No, it won’t.” We’re like, “Oh, you’re with nerds discussing this. What do you mean, ‘No, it won’t’?” He’s like, “Because I’ll put a knife in its head, and I will keep cooking.”
That’s a guy who’s passionate about his job. He’s going to defend it against the robots and AI. People will follow that passion and see value in it and pursue it.
I think there’s a fourth one that’s somewhere between one and three, that is what comes out of this. Not that there won’t be short-term disruption or pain but, ultimately, I think what will happen is humanity will self-actualize here, and people will find jobs they want to do.
Just to kind of break it down a bit more, that sounds like the WPA during the Depression.
Yeah.
It says, “Let’s have people paint murals, build bridges, plant saplings.”
There was a lot of that that went on, yeah.
And so, you advocate for that?
I think that that is a great bridge when we’re in that point between post-singularity—or an abundance society, post-scarcity—and we’re at this in-between point. Even before that, in the very near-term, a lot of jobs are going to be created by the deployment of AI. It actually just takes a whole lot of work to deploy and it doesn’t necessarily reverberate into removing a bunch of jobs. Often, it’s a very minute amount of productivity it adds to a job, and it has an amplifying effect.
The industry of QA is going to explode. Radiologists, their jobs are not going to be stolen; they’re going to be shifted to the activity of QA to make sure that this stuff is identifying correctly in the short term. Over the next twenty to fifty years, there’s going to be a whole lot of that going on. And then, there’s going to be just a whole lot of robotics fleet maintenance and such, that’s going to be going on. And some people are going to enjoy doing this work and they’ll gravitate to it.
And then, we’re going to go through this transition where, ultimately, when the robots start taking care of something really lower-level, people are going to follow their passions into higher-level, more interesting work.
You would pay for this by taxing the robots?
Well, that was Bill Gates’s idea, and I think there’s a point in history where that will function. But ultimately, the optimistic concept is that this revolution will bring about so much abundance that the way an economy works itself will change quite a bit. Thus, you pay for it out of just doing it.
If we get to the point where I can stick out my hand, and a drone drops a hammer when I need a hammer to build something, how do you pay for that transaction? If that’s backed with a tokamak reactor—we’ve created fusion and energy is superfluous—how do you pay for that? It’s such a minuscule thing that there just might not be a way to pay for it, that paying for things will just completely change altogether.
You are a designer.
I’m a product designer, yes. That’s what I do by trade.
So, how do you take all of that? And how does that affect your job today, or tomorrow, or what you’re doing now? What are the kinds of projects you’re doing now that you have to apply all of this to?
This is how young it actually is. I am currently just involved in figuring out what the tooling looks like to actually deploy this at any kind of scale. And when I say “deploy,” I don’t mean sentience or anything close to it, but just something that can identify typos better than the current spellcheck system, or identify typos in a very narrow sphere of jargon that other people know. Those are the problems being worked on right now. We’re scraping pennies outside of dollars, and it just needs a whole lot of tooling on that right now.
And so, the way I get to apply this, quite fundamentally, is to help influence what are the controls, governance, and transparency going to look like, at least in the narrow sphere where I’m working with people. After that, it’s all futurism, my friend.
But, on a day-to-day basis at argo, where do you see designing for this AI world? Is it all just down to the tooling area?
No, that’s just one that’s very tactical. We are actually doing that, and so it’s absorbing a lot of my day.
We have had a few clients come in and be like, “How do I integrate AI?” And you can find out it’s a very ticklish problem of like, “Is your business model ready for it? Is your data stream ready for it? Do you have the costing ability to put it all together?” It’s very easy to sit back and imagine the possibilities. But, when you get down to the brass tacks of integration and implementation, you start realizing it needs more people here to work on it.
Other than putting out visions that might influence the future, and perhaps enter into the zeitgeist our opinion on how this could transpire, we’re really down in the weeds on it, to be honest.
In terms of far out, you’ve referred to the singularity a number of times, do you believe in Kurzweil’s vision of the singularity?
I actually have something that I call “the other singularity.” It’s not as antagonistic as it sounds. It’s meant like the other cousin, right? While the singularity is happening—his grand vision, which is very lofty—there’s this other singularity going on. This one is made of the cast-offs of the exponential technology curve. So, as computational power gets less expensive, yesterday’s computer—the quad-core computer that I first had for $3,000—is now like a $40 gum stick, and pretty soon it’s going to be a forty-cent MCU computer on a chip.
At that point, you can apply computational power to really mundane and ordinary things. We’re seeing that happen at a huge pace.
There’s something I like to call the “single-function computer,” and the new sub-$1,000 threshold. Computers were out there for, really, forty or fifty years before mass adoption hit. From a marketing perspective, it was said that, until the price of a multifunction computer came below $1,000, they wouldn’t reach adoption. As soon as it did, they spread widely.
We still buy these sub-$1,000 computers. Some of us pay slightly more in order to get an Apple on the front of them, but the next sub-$1,000 question is how to get a hundred computers in the home for under $1,000, and that’s being worked on now.
What they’re going to do is take the function of these single-function computers, which take a massive amount of computational power, and dedicate them to one thing. The Nest would be my first example, the one people are most familiar with. It has the same processing power as the original PowerBook G4 laptop, and all that processing power is just put to algorithmically keeping your home comfortable in a very exquisite out-of-the-box experience.
We’re seeing more and more of these experiences erupt. But they’re not happening in this elegant, singularity, intelligence-fed path. They just do what they do procedurally, or with a small amount of intelligence, and they do it extremely well. And it’s this big messy mess, and it’s entirely possible that we reach a form of the singularity without sentient artificial intelligence guiding it.
An author that I really love that works in this space a lot is Cory Doctorow. He has a lot of books that kind of propose this vision where machines are somehow taking care of this lower level of Maslow’s hierarchy of needs, and creating a post-scarcity society, but they are not artificial intelligence. They have no sentience. They’re just very, very capable at what they do, and there’s a profundity of them to do a lot of things.
That’s the other singularity, and that’s quite possibly how it may happen, especially if we decide that sentience is so dangerous that we don’t need it. But I find it really encouraging and optimistic that there is this path to the future that does not quite require it, but could still give us a lot of what we see in these singularity-type visions of the future—the kind of abundance, and ability to not be toiling each day for survival. I love that.
I think Kurzweil thinks that the singularity comes about because of emergence.
Yeah.
Because, at some point, you just bolt enough of this stuff together and it starts glowing with some emergent behavior, that it is at a conscious decision that we decide, “Let’s build.”
Yeah, the exponential technology curve predicts the point at which a computer can have the same number of computations as we have neurons, right? At which point, I agree with you, it kind of implies that sentience will just burst forth.
Well, that’s what he says.
Yeah.
That’s the question, isn’t it?
I don’t think it happens that way.
What do you think happens?
I don’t think sentience just bursts forth at that moment.
First of all, taking a step back, in what sense are you using the word ‘sentience’? Strictly speaking, it means ‘able to sense something, able to feel’—that’s it. Then, there’s ‘sapience’, which is intelligent. That’s what we are, homo sapiens. Then, there’s ‘consciousness’, which is the ability to have subjective experience—that tea you just drank tasted like something and you tasted it.
In what sense are you thinking of computers—not necessarily having to be that?
Closer to the latter. It’s something that is aware of itself and begins guiding its own priorities.
You think we are that. We have that, humans.
Yeah.
Where do you think it comes from? Do you think it’s an emergent property of our brains? Is it something we don’t know? Do you have an opinion on that?
I mean, I’m a spiritualist, so I think it derives from the resonance of the universe that was placed there for a reason.
In that view of the world, you can’t manufacture that, in other words. It can’t come out of the factory and someplace.
To be metaphysical, yes. Like Orson Scott Card, will the philotics plug into the machine, and suddenly it wakes up and it has the same cognitive powers as a human? Yeah, I don’t know.
What you do, which is very interesting, is you say, “What if that assumption—that one assumption—that someday the machine kind of opens its eyes; what if that one assumption isn’t true?” Then what does the world look like, of ever-better computers that just do their thing, and don’t have an ulterior motive?
Yeah, and the truth is they could also happen in parallel. Both could be happening at the same time, as they are today, and still progress. But I think it’s really fascinating. I think some people guard themselves. They say, “If this doesn’t happen, there’s nothing smart enough to make all the decisions to improve humanity, and we’re still going to have to toil away and make them.” And I say, “No, it might be entirely possible that there’s this path where just these little machines, and profundity do it for us and sentience is not necessary.”
It also opens up the possibility that, if sentience does just pop into existence, it becomes a very fair debate whether you could just turn it off—whether you could commit the genocide of the machine and say, “We don’t want you or need you. We’re going to take this other path.”
We Skynet them.
We Skynet them, and we keep our autonomy and we don’t worry about the perils. I think part of the fear about this kind of awareness—we’ve been calling it sentience—this kind of theory of AI, is the fear that we just become dependent on them, and subservient to them, and that’s the only path. But I don’t think it is.
I think there’s another path where technology takes us to a place of great capability so profound that it even could remove the base layer of Maslow’s hierarchy of needs. I think of books like Makers by Cory Doctorow and others that are forty years in the future, and you start thinking of micro-manufacturing.
We just put up this vision about Amazon and Whole Foods, which was another nod towards this way of thinking. We’re ignoring the energy source a little bit—because we think it’s going to sort itself out; everyone has solar on their homes or a Tokamak—but if you can get these hydroponic gardens into everyone’s garage, produce is just going to be so universally available. It goes back to being the cheapest of staples. Robots could reduce spoilage by matching demand, and this would be a great place for AI to live.
AI is really good at examining this notion of like, “I think you’re going to use those Brussels sprouts, or I think your neighbor is going to use them first.” We envision this fridge that has a door on the outside, which really solves a lot of delivery problems. You don’t need those goofy cardboard boxes with foil and ice in them anymore. You just put it in the fridge. It also can move the point of purchase all the way into the home.
When you combine that with the notion of this dumber AI that’s just sitting there, deciding whether you or the neighbor needs Brussels sprouts, it can put the Brussels sprouts there opportunistically, thinking, “Maybe he’ll get healthy this week.” When I don’t take them before they spoil, it can move them over to the neighbor’s fridge where they will use them. You root so much spoilage out of the system that nutrition rises and becomes more ubiquitous.
Now, if people wanted to harvest those goods or tend those gardens, they could. But, if people didn’t, robots could make up the gap. Next thing you know, you have a food system that’s decoupled from the modern manufacturing system, and is scalable and can grow with humanity in a very fascinating way.
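To make that “dumber AI” idea concrete, here is a minimal sketch of the kind of demand-matching logic being described: estimate which household is likeliest to use a perishable item before it spoils and route it there. Everything in it—the household names, the probabilities, the scoring—is a hypothetical illustration, not a description of any real system.

```python
from dataclasses import dataclass

@dataclass
class Household:
    name: str
    # Hypothetical score: estimated probability this household will use the
    # item before it spoils, e.g. learned from past consumption patterns.
    p_use_before_spoil: float

def route_item(item, households):
    """Send a perishable item to whichever fridge is most likely to use it.

    No sentience involved: just a probability estimate that reduces spoilage
    by matching supply to demand across neighboring fridges."""
    return max(households, key=lambda h: h.p_use_before_spoil)

# Hypothetical example: I probably won't eat the Brussels sprouts this week,
# so the system opportunistically moves them to the neighbor's fridge.
me = Household("me", p_use_before_spoil=0.2)
neighbor = Household("neighbor", p_use_before_spoil=0.8)
print(route_item("brussels sprouts", [me, neighbor]).name)  # -> neighbor
```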
Do you think we’re already dependent on the machine? Like, if an EMP wave just fried all of our electronics, a sizeable part of the population dies?
I think that’s very likely. Ignoring all the disaster and such right then, it would take a whole lot of… I don’t necessarily think that’s purely a technological judgment. It’s just the slowness of humanity to change their priorities. In other words, we would realize too late that we all needed to rededicate our resources to a certain kind of agriculture, for instance, before the echo moved through the machine. That would be my fear on it—that we all engrain our habits and we’re too slow to change them.
Way to kill off humanity three times in this podcast!
That’s right.
Does that happen in most of these that you are doing?
No.
Oh, great! It’s just my dark view.
It’s really hard to kill us off, isn’t it?
Yeah.
Because, if it was going to happen, it seems like it would have happened before, when we had no technology. You know, there were just three million of us five thousand years ago. By some counts, thousands of us, at one time, and woolly mammoths running around.
But back then, ninety-nine percent of our technology was dedicated to survival, and it’s a way lower percentage now. In fact, we invented a percentage of technology that is dedicated to our destruction. And so, I don’t know how much the odds have changed. I think it’s a really fascinating discussion—probably something that AI can determine for us.
Well, I don’t know the percentage. It would be the gross amount, right?
Yeah.
Because you could say the percentage of money we’re spending on food is way down, but that doesn’t mean we’re eating less. The percentage of money we’re spending on survival may be way down, but that doesn’t mean we’re spending less.
Yeah.
In a really real-world kind of way, there’s a European initiative that says: When an AI makes a decision that affects you, you have a right to know why it made that decision. What do you think of that? I won’t impute anything. What do you think of that?
Yeah, I think Europe is ahead of us here. The funny thing is a lot of that decision was reported as rights for AI, or rights for robots. But when you really dig into it, it’s rights for humans. And they’re good rights.
If I were to show you designs out of my presentations right now, I have this big design that’s… You’re just searching for a car and it says, “Can I use your data to recommend a car?” and you click on that button and say yes. That’s the way it should be designed. We have taken so many liberties with people’s data and privacy up until now, and we need to start including them in on the decision.
And then, at the bottom of it, it has a slider that says, “The car you want, the car your wife wants.” You should also have transparency and control of the process, right? Because machine learning and artificial intelligence produces results with this kind of context, and you should be allowed to change the context.
First of all, it’s going to make for a better experience because, if it’s looking at all my data historically, and it’s recommending to me the kind of sleeping bag I should buy, it might need to be aware—and I might have to make it aware—that I’m moving to Alaska next week, because it would make a different recommendation. This kind of transparency in government actually… And I also think they put in another curious thing—and we’ll see how it plays out through the courts—but I believe they also said that, if you get hurt by it—this was the robotic side—the person who made the robot is responsible for it.
Some human along the way made a decision that hurt you is the thesis.
Yes, or the business corpus that put this robot out there is responsible for it. It’s the closest thing to the three laws of robotics or something put into law that we’ve seen yet. It’s very advanced thinking, and I like it; and it’s already in our design practice.
We’re already trying to convince clients that this is the way to begin designing experiences. More than that, we’re trying to convince our fellow designers—because we have a certain role in this that we can utilize—to design the experiences so that they are open and transparent to the person using them. That little green LED light says, “AI is involved in this decision,” so you might judge that differently.
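A minimal sketch of that design pattern—ask consent before using personal data, expose the context as an adjustable “slider,” flag declared context the model can’t infer, and label the result as AI-assisted—might look like the following. The item names, weights, and the 0.2 blending factor are all illustrative assumptions, not anyone’s actual implementation.

```python
def recommend(items, consented, history_score=None, slider=0.5, context=None):
    """Consent-gated, user-adjustable recommendation (illustrative sketch).

    items: dict mapping item name -> (score_for_you, score_for_partner), each 0..1
    consented: the user explicitly clicked "yes, use my data"
    history_score: optional function over personal history; only used if consented
    slider: the transparency control -- 0.0 = purely your taste, 1.0 = your partner's
    context: user-declared facts the model can't infer, e.g. {"climate": "cold"}
    """
    context = context or {}
    ranked = []
    for name, (you, partner) in items.items():
        score = (1 - slider) * you + slider * partner
        if consented and history_score is not None:
            score += 0.2 * history_score(name)   # illustrative blend of personal history
        if context.get("climate") == "cold" and "convertible" in name:
            score -= 0.5                          # declared context overrides history
        ranked.append((score, name, "AI involved in this ranking"))  # the 'green LED'
    return sorted(ranked, reverse=True)

cars = {"convertible": (0.9, 0.2), "awd wagon": (0.5, 0.8)}
print(recommend(cars, consented=False, slider=0.5, context={"climate": "cold"}))
```

The point is less the arithmetic than the shape of the experience: data use is opt-in, the weighting is visible and movable, and the output is labeled as AI-assisted.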
But where does that end? Or does that inherently limit the advancement of the technology? Because you could say, “I rank number two in Google for some search—some business-related search—and somebody else ranks number one.” I could go to Google and say, “Why do I rank number two and they rank number one?” Google could, in all fairness, say, “We don’t know.”
Yeah, that’s a problem.
And so, do you say, “No, you have to know. You’ve got to limit the technology until you can answer that question,” or do you just say, “We don’t know how people make decisions.” You can’t ask the girl why she didn’t go out with you. “Why aren’t you going out with me?” That affects me. It’s like, “I’m just not going to.”
You’ve framed the consumer’s dilemma in everything from organic apples to search results, and it’s going to be a push-and-pull.
But I would say, yeah, if you’re using artificial intelligence, you should know a little bit about how it’s being produced, and I think there’ll be a market for it. There’s going to be a value judgment on the other side. I really think that some of the ways we’re looking at designing experiences, it’s much more valuable to the user to see a lot of these things and know it—to be able to adjust the rankings based on the context that they’re in, and they’re going to prefer that experience.
I think, eventually, it’ll all catch up in the end.
One last story: I used to sell snowboards. So much of this is used for commerce—retail is an easy example for us to understand. I used to sell snowboards, and I got really good at it. My intelligence on it got really focused, and I was at a pretty good hit rate. Someone could walk in the door, and if I wrote down what snowboard they were going to buy, I was probably right eighty-five to ninety percent of the time. I got really good at it. By the end of the season, you just know.
But, if I walked up to any of those people and said, “Here’s your snowboard,” I would never make a sale. I would never make a sale. It creeps them out, they walk away, the deal is not closed. There’s a certain amount of window dressing, song and dance, gathering of information to make someone comfortable before they will make that decision to accept the value.
Up until now, technology has been very prescriptive. You write the code, it does what the code says. But that’s going to change: with probabilities and context-gathering, that prescriptiveness goes away. But to be successful, there is still going to have to be that path, and it’s the perfect place to put in what we were just talking about—the transparency, the governance, and the guidance to the consumer to let them know that they’re in on that type of experience. Why? You’re going to sell more snowboards if you do.
In your view of a world where we don’t have this kind of conscious AGI, we’re one notch below that, will those machines still pass the Turing test? Will you still be able to converse with them and not know that it’s a computer you’re talking to?
I think it’ll get darn close, if not all the way there. I don’t think you could converse with them as much as people imagine though.
Fair enough. I’m going to ask you a privacy question. Right now, privacy is largely protected by just the sheer amount of data. Nothing can listen to every phone conversation. Nothing can do that. But, once a machine can listen to them all, then it can.
That is, we can hear them all right now, but we can’t listen to them all.
Correct. And I read that you can now get human-level lip-reading from cameras, and you get facial recognition.
Yeah.
And so you could understand that, eventually, that’s just a giant data mining problem. And it isn’t even a nefarious one, because it’s the same technology that recommends what you should buy someplace.
Yeah.
Tell me what you think about privacy in a world where all of that information is recorded and, I’m going to use ‘understood’ loosely, but able to be queried.
Yeah, this is the, “I don’t want a machine knowing what I had for lunch,” question. The machine doesn’t care; people care. What we have to do is work to develop a society where privacy is a virtue, not a right. When privacy is a right, you have to maintain it through security. The security is just too fallible, especially given the modern era.
Now, there’ll always be that certain kind of thing, but privacy-as-a-virtue is different. If you could structure society where privacy is a virtue, well, then it’s okay that I know what you had for lunch. It’s virtuous for me to pretend like I don’t know what you had for lunch, to not act on what I know you had for lunch, and not allow it to influence my behavior.
It sounds almost Victorian, and I think there is a reason that, in the cyberpunk movement in science fiction, you see this steampunk kind of Victorian return. In the Victorian era, we had a lot of etiquette based on just the size of society, and the new movement of information meant that you knew a lot about people’s business that you otherwise wouldn’t have known. And the way we dealt with it was this kind of really pent-up morality, where it was virtuous to pretend like you didn’t know—almost to make a game of it—and not allow it to influence your decision-making. Only priests do this anymore.
But we’re all going to have to pick up the skill and train our children, and I think they’re training themselves to do it, frankly, right now, because of the impacts of social media on their lives. We might return to this second Victorian era, where I know everything about you but it’s virtuous.
Now, that needs to bleed into the software and the hardware architectures as well. Hard drives need to forget. Code algorithms need to forget, or they need to decide what information they treat as virtuous. This way, we can have our cake and eat it, too. Otherwise, we’re just going to be in this weird security battle forever, and it’s not going to function. The only people who are going to win in that one are the government. We’re just going to have to take it back in this manner.
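One hedged sketch of what “hard drives need to forget” could mean in practice is storage where every record carries a time-to-live and expired data is purged rather than guarded. This is only an illustration; real forgetting would also have to reach backups, logs, and anything a model has already learned from the data.

```python
import time

class ForgetfulStore:
    """Every record carries a time-to-live; expired records become unreadable
    and are purged, rather than retained forever behind a security perimeter."""

    def __init__(self):
        self._data = {}  # key -> (value, expires_at)

    def put(self, key, value, ttl_seconds):
        self._data[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        record = self._data.get(key)
        if record is None:
            return None
        value, expires_at = record
        if time.time() >= expires_at:
            del self._data[key]      # forget it, don't just guard it
            return None
        return value

store = ForgetfulStore()
store.put("lunch", "tacos", ttl_seconds=1)
print(store.get("lunch"))   # -> 'tacos'
time.sleep(1.1)
print(store.get("lunch"))   # -> None: the system has forgotten
```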
Now, you can just see how much optimism bleeds through me when I say it this way, and I’m not totally incognizant of my optimism here, but I really think that’s the key to this. Any time we’re faced with a feature, we just give up our privacy for it. And so, we may as well start designing the world that can operate with less privacy-as-a-right.
It’s funny, because I always hear this canard that young people don’t care about privacy, but that’s not my experience. I have four kids. My oldest son always comes in and says, “How can you use that? It’s listening to everything you’re doing.” Or, “How do you have these settings on your computer the way you do?” I’m like, “Yeah, yeah, well…” But you say, not only do they value it more, but they’re learning etiquette around it as well.
Yeah, they’re redefining it.
They see what their friends did last night on social media, but they’re not going to mention it when they see them.
That’s right, and they’re going to monitor their own behavior. They just have to in order to function socially. We as creatures need this. I think we grew up in a more unique place. It’s goofy, but I lived in 1867. You had very little privacy in 1867.
That’s right. You did that PBS thing.
Yeah, I did that PBS thing, that living history experiment. Even though it was only fourteen people, the impact of a secret or something slipping out could be just massive, and everyone had that impact. There was an anonymity that came from the Industrial Revolution that we, as Gen Xers, probably enjoy the zenith of, and we’ve watched social media pull it back apart.
But I don’t think it’s a new thing to humanity, and I think ancestral memory will come back, and I think we will survive it just fine.
In forty-something guests, you’ve referred to science fiction way more than even the science fiction writers I have on the show.
I’m a fanboy.
Tell me what you think is really thoughtful. I think Frank Herbert said, “Sometimes, the purpose of science fiction is to keep the future from happening.”
Yes.
Tell me some examples. I’m going to put you on the spot here.
I just heard that from Cory Doctorow two weeks ago, that same thing.
Really? I heard it because I used to really be annoyed by dystopian movies, because I don’t believe in them, and yet I’m required to see them because everybody asks me about them. “Oh, my gosh, did you see Elysium?” and I’m like, “Yes, I saw Elysium.” And so, I have to go see these and they used to really annoy me.
And then, I saw that quote a couple of years ago and it really changed me, because now I can go to them and say, “Ah, that’s not going to happen.”
Anyway, two questions: Are there any futures that you have seen in science fiction that you think will happen? Like, when you look at it, you say, “That looks likely to me,” because it sounds like you’re a Gene Roddenberry futurist.
I’m more of a Cory Doctorow futurist.
And then, are there ones you have seen that you think could happen, but you don’t think it’s going to happen, but it could?
I’m still on the first question. In my recent readings, the Kim Stanley Robinson and Cory Doctorow bodies of work are very good.
Now, let’s talk about Iain M. Banks and the whole Culture series, which is so far-future, so grand in scale, and so driven by AI that knows it’s superior to humans—but is fascinated by them. Therefore, it doesn’t want to destroy them, but rather to attach itself to their society. I don’t think that is going to happen, but it could happen. It’s really fascinating.
It’s one of those bigger-than-the-galaxy type universes where you have megaships that are mega-AIs and can do the calculations of a trillion humans in one second, and they keep humans around for two reasons… And this is how they think about it: One, they like them—they’re fascinating and curious; and two, there are thirteen of them who, by sheer random chance, are always right. Therefore, they need a certain density of humanity just so they can consult them when they can’t come up with an answer of enough certainty.
So, there are thirteen humans that are always right.
Yeah, because there are so many trillions and trillions of them. And the frustrating thing to these AI ships is that they can’t figure out why they’re always right, and no one has decided which theory is correct. But the leading theory is that they’re just making random decisions and, because there are so many humans, these thirteen people’s random decisions happen to always be correct. And as for the humans themselves, we get a little profile of one of them and she’s rather depressed, because we can’t be fatalists as a species.
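The arithmetic behind that conceit is simple to sketch: if a population of N people each guess at random on m independent yes-or-no calls, the expected number who happen to be right every single time is N / 2^m. The specific numbers below are illustrative only, not anything from the novels.

```python
import math

def expected_always_right(population, questions):
    # Expected count of people who, guessing 50/50 at random, are right
    # on every one of `questions` independent binary decisions.
    return population * 0.5 ** questions

population = 1e12                                  # a trillion humans, for illustration
questions = round(math.log2(population / 13))      # how many calls leave roughly 13 standing
print(questions, expected_always_right(population, questions))
# -> about 36 consecutive calls leave on the order of a dozen people with a perfect record
```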
Jared, that is a wonderful place to leave this. I want to thank you for a fascinating hour. We have covered, I think, more ground than any other talk I’ve had, and I thank you for your time!
Thank you! It was fun!
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here. 