#like i quite like the word ''keynote'' as mentioned. it's got a lot of fun sounds in it.
inkskinned · 7 months
Text
i love when words fit right. seize was always supposed to be that word, and so was jester. tuesday isn't quite right but thursday should be thursday, that's a good word for it. daisy has the perfect shape to it, almost like you're laughing when you say it; and tulip is correct most of the time. while keynote is fun to say, it's super wrong - i think they have to change the label for that one. but fox is spot-on.
most words are just, like, good enough, even if what they are describing is lovely. the night sky is a fine term for it but it isn't perfect the way november is the correct term for that month.
it's not just in english because in spanish the phrase eso sí que es is correct, it should be that. sometimes other languages are also better than the english words, like how blue is sloped too far downwards but azul is perfect and hangs in the air like glitter. while butterfly is sweet, i think probably papillon is more correct, although for some butterflies féileacán is much better. year is fine but bliain is better. sometimes multiple languages got it right though, like how jueves and Πέμπτη are also the right names for thursday. maybe we as a species are just really good at naming thursdays.
and if we were really bored and had a moment and a picnic to split we could all sit down for a moment and sort out all the words that exist and find all the perfect words in every language. i would show you that while i like the word tree (it makes you smile to say it), i think arbor is correct. you could teach me from your language what words fit the right way, and that would be very exciting (exciting is not correct, it's just fine).
i think probably this is what was happening at the tower of babel, before the languages all got shifted across the world and smudged by the hand of god. by the way, hand isn't quite right, but i do like that the word god is only 3 letters, and that it is shaped like it is reflecting into itself, and that it kind of makes your mouth move into an echoing chapel when you cluck it. but the word god could also fit really well with a coathanger, and i can't explain that. i think donut has (weirdly) the same shape as a toothbrush, but we really got bagel right and i am really grateful for that.
grateful is close, but not like thunder. hopefully one day i am going to figure out how to shape the way i love my friends into a little ceramic (ceramic is very good, almost perfect) pot and when they hold it they can feel the weight of my care for them. they can put a plant in there. maybe a daisy.
queerchoicesblog · 4 years
Text
The Gala (OH, Harper x F!MC)
Nobody asked for a sequel to Unexpected News (& Misunderstandings as well) but I wrote it anyway! As my confidence in a good Book 2 for Open Heart decreases every day, I found myself missing my non-canon slowburn pairing and voilà: the night of the gala brings some good news and an unexpected - and long-awaited - turn of events for Dr. Emery and Dr. Valentine. Special thanks to Kyra :D
Perma Tag: @brightpinkpeppercorn @bhavf @melodyofgraves @abunchofbadchoices @silverhawkenzie @strangerofbraidwood @kamilahmykween @desiree-0816 @universallypizzataco @gayestchoices @embarrassingsmartphonegame @lilyofchoices @somewillwin @allaboutchoices  
Harper x F!MC Tag: @andi-the-cat @korrasamixfan @delphinusbae @noeschoices @jellymonster
Word Count: 2250
Disclaimer: For previous chapters of the Harper x F!MC, check my masterlist (too lazy to post all the links here) + I mention Avery Wilshere; in this fic Avery is male, as I pictured him male in my Platinum playthrough, sorry for that!
__________________________
The Edenbrook Hospital Fundraising Gala was in full swing. Everything was surprisingly running smoothly: Dr. Naveen Banerji gave a gracious and inspiring keynote speech, acknowledging and praising the hard work of the medical staff and encouraging patrons to support research and the pioneering new programs for the community launching soon. The gala guest Avery Wilshere took the stage and echoed his words, recalling how the doctors of Edenbrook went above and beyond to cure and eventually save the life of one of his dearest friends, before charming the audience with his soft voice and ballads.
Dr. Valentine had never been to a gala before: she felt thrilled and awkward at the same time standing there in a fancy hall, dressed to the nines, eating canapés and sipping expensive wine, shaking hands with embarrassingly rich people and trying to look her best professional self while enjoying the party. She took a moment to check in on her friends. A soft smile drew on her lips as she spotted Sienna and Danny slow dancing, lost in each other's arms on the dance floor. Not far away, a visibly smitten Phoebe was adjusting Elijah's bowtie, spreading a blush over Dr. Greene's cheeks. Seeing her friends happy and in the company of their loved ones comforted her, a contented feeling settling inside her. Then she noticed Bryce and Jackie bantering at the bar and shook her head, laughing. She rejoiced again seeing the shade of pink on Aurora's face as she shyly nodded and squeezed closer to a smiling Rafael for a selfie. It was so endearing how the young Dr. Emery still struggled a little to come to terms with the genuine affection all of them showed her, with people around her age willing to include her, Aurora, with no ulterior motive other than to enjoy her company.
Now it was her turn. She took a quick, nervous look at Kyra, who just nodded and winked in encouragement. I suppose you don't exactly have a choice, self, she thought, sighing to steady herself before turning. She wove through the crowd, flashing quick smiles at fellow doctors as she passed by...until she saw her. The breath caught in her throat as she froze in place: Harper was glowing in an elegant blue dress - navy blue was definitely Harper's favorite color - and finely jeweled, crimson over her perfectly shaped lips. The Head of Neurosurgery was in the company of a group of doctors and wealthy patrons: she looked perfectly at ease, chatting and sipping a glass of champagne. Dr. Valentine averted her eyes, pondering her options.
Nah, maybe it wasn't the right time. It was silly, maybe she should leave and..
But it was too late: when she raised her eyes again, she met Harper's, looking in her direction. She gave her a quick smile and a nod, which Meredith immediately mirrored, still wondering if she should just keep walking and pretend she was merely passing by. Dr. Emery beat her to it once again. The neurosurgeon graciously excused herself and parted from the group, heading straight towards the young fellow.
“Hello, Valentine. Enjoying the party?” she greeted her with a smile.
"Oh yes, it's..." Meredith's eyes wandered around the hall looking for the right words. "Amazing. And impressive. Dr. Banerjii and the board went above and beyond for this gala night"
"Indeed" Dr. Emery agreed, following Meredith's gaze before looking back at her. "I noticed that Naveen and Ethan showed you off to the VIP guests over here"
A light blush spread over Valentine's face as she demurred.
"They just introduced me, I wouldn't go that far...but it was kind of them, I guess"
Harper flashed a quick smile as if she was expecting that kind of answer.
"I'm glad they did. After all you've been through, you deserved a little victory lap, Valentine"
"Do you think so?" Meredith sighed.
"You know me: have I ever said anything I didn't mean?"
Meredith offered a weak smile and shook her head in response. Then she remembered her chat with Kyra and felt conflicted again, but Harper was giving her a look filled with curiosity, as if she had read her mind and knew, just knew, that she was keeping something from her.
"Actually, Dr. Emery I was kind of...hoping to bump into you"
"Really? What's the matter?" Harper asked, shifting slightly to listen more carefully.
“This is…embarrassing, very silly, I should probably-” Meredith started but froze mid-sentence.
“I’m sure it’s not. So?” Harper inquired, taking a sip of her champagne.
“Well…” Meredith swallowed hard and fiddled with her own hands “I somehow got involved in a bet”
“A bet?” Harper echoed, flashing her an enigmatic and slightly amused smile. “What kind of bet, if I may? Please tell me it has nothing to do with scalpels because I don’t have one in my purse. Stupidly I forgot to bring them to the gala”
“Oh no” Valentine chuckled. “It's…Kyra, the girl over there…she bet that I...well, that I would never have the guts to ask you to dance”
“Oh” Dr. Emery raised a surprised eyebrow at her before cocking her head to look over Valentine’s shoulder.
On the other side of the room, a young woman wearing a colorful turban excitedly waved at her with a big smile.
“And what did she ask you to do if you failed?” Harper asked, waving back at Kyra.
“Running the upcoming Boston marathon on her behalf”
“Well, I was expecting something worse to be h-”
“Barefoot and wearing a t-shirt that says ‘too chicken to ask Dr. Emery to dance’”
Harper gave her a long look before both of them burst into laughter.
“I know it sounds ridiculous but it’s also tragic on my end” Meredith chuckled.
“It seems you have quite a lot to lose here, Valentine. Now the question is…what are you gonna do?”
“W-what do you mean?” Meredith asked suddenly less confident than before.
Harper gave her an amused smile and leaned a bit closer:
“You haven’t asked me to dance yet. If that’s what you wanted to do”
“Oh, yeah, right!” Meredith mumbled before recollecting herself.
The notes of one of Avery's most loved songs started playing in the background as if on cue, eliciting a round of applause and cheers from the audience.
“Will you dance with me, Dr. Emery?”
Harper gave her one of her long looks before breaking into a smile and stopping a man in a dashing tuxedo passing by.
“Grant, dear, sorry. Would you mind holding that for me?” she asked, handing him her glass.
“Sure, Harper. Hitting the dance floor?” he asked winking at the two of them.
“I think so” she smiled back.
“Cool, just don’t ask for your drink later, Harper! Have fun, ladies!” he said with a huge grin before being approached by another patron.
Dr. Emery shook her head and turned towards Meredith, flashing a smile.
“Shall we?”
Meredith's knees threatened to give in as Harper gently grazed her arm, a light touch that made her heart flutter, but she managed to keep walking and mirror the neurosurgeon's smile.
Kyra smiled to herself later, observing the two women dancing in the crowd, slowly letting themselves go and forgetting about the gala, growing more confident around each other, only to fall into an awkward silence as they swayed to the beat of Avery's signature song, Lift Me Up.
I tried to resist you
I tried to keep my distance
I tried to play it cool
I'm no match for your persistence
I knew it, she beamed, as a few songs later they moved to the bar arm in arm and spent the rest of the night chatting and laughing, Meredith gesticulating, too lost in some kind of deep conversation to be nervous, and Harper listening carefully, resting her chin in her hand.
They were still together when the gala came to an end, walking side by side into the parking lot among the multitude of doctors and guests exchanging parting greetings and a last round of chats.
"Who would have thought that a gala could be so much fun?" Harper commented, smiling to herself.
"Thanks for saving me from the deadly dull of patron conversation" she whispered conspiratorially, leaning closer to Meredith so that only she could hear.
"Anytime! But I should be the one thanking you: you saved me from the most disastrous and embarrassing Boston marathon ever" Meredith giggled.
"Yeah...I must say it would have been quite a sight. I can see the headlines-"
"No, it would have been awful, thank you!" Meredith interrupted her before laughing again. "You're an awful tease Dr. Emery, you know that right?"
Harper just shrugged and smiled.
"But I must admit I'm glad you decided not to run the marathon" she took a brief pause as if to ponder her words or to cherish the realization that just crossed her mind and made her smile to herself, almost shyly. "It's been ages since I had a night like this. Of course, it wasn't all fun, there were work duties too but...dancing, actually having a good carefree conversation with someone, that's rather unusual for me these days. Well, in a while, to be honest. I almost forgot how it was and I missed it. Thanks, Valentine"
The soft shift in her tone was so earnest that it left Meredith a bit puzzled as a flashback of Mr. Linen Suit leaving Edenbrook with Harper crossed her mind. Maybe the date didn't work out?
"No need to thank me, it was my pleasure. I had a great time too, it was nice spending time with you outside work. Well, more or less as you said"
A light shade of pink colored her cheeks.
"You mean it?"
Meredith turned: seeing Harper surprised that someone might enjoy her company outside work - her personal self talking about ordinary stuff, not the myth, the public persona - was unexpected. And quite heartwarming as well. She barely refrained from the urge to reach out and squeeze her hand or pull her into a tight embrace.
"Of course I do" she said instead.
Harper smiled, a rather shy and grateful smile so different from her usual dignified ones at work, and nodded.
"I'm glad to hear. Maybe we should-" the neurosurgeon stopped mid-sentence, unsure whether to finish the sentence that slipped out of her mouth.
"Maybe we should...?"
Harper slowed down to a stop and after much internal debate, she shook her head and continued.
"Do you like Thai food?"
"Yes, sure but why?"
"I was thinking that maybe we should do it again, spending time together outside good old Edenbrook and I happen to know a lovely place downton. I'd love to take you out to dinner there"
"Yo-you would?" Meredith managed to stutter as she blushed furiously. "Like casually or-"
"Yes, I would" Harper laughed softly, amused of how flustered Valentine got in the turn of seconds. "And it's a date only if we both feel like that, no pressure. So what do you say?"
Dr. Valentine stopped herself from screaming the easiest yes she had ever formulated.
"Yes, yes I would love to".
Harper's face relaxed a bit.
"Excellent! So how's...let me think of my schedule...how's Friday? Does it work for you?"
"Friday's perfect" she confirmed, her smile barely containing the happiness that surged inside her.
She forgot about the rest of the world for a moment: Harper was smiling back at her, the most beautiful smile Meredith had ever seen, when the spell was broken by a sudden realization that made the surgeon laugh.
"I'm just afraid...I don't have your number. I don't know how to text you the address" she smiled apologetically.
"Oh right!" Meredith chuckled too.
"It's usually the other way round, right? First number then asking out" Dr. Emery shook her head as she picked her phone out of her purse.
"I suppose, buy it doesn't matter, we can change that" Valentine commented outstretching her hand.
As Harper handed her her phone, she quickly typed in her number. After checking it twice, she gave it back, a huge smile on her face.
"There, fixed"
Over Harper's shoulder, in the distance, she spotted Kyra and the rest of the group walking in the opposite direction. Rafael saw her and mouthed something about an Uber. Valentine nodded and sighed.
"I'm afraid that's my cue, sadly" she apologized. "I have an early shift tomorrow and..."
"It's okay, I would have offered to give you a ride home but I wasn't sure if it would put you in a bad place back at home" Harper smiled, a hint of tease in her hazel eyes. "And I know you're a professional, Valentine"
"Meredith" the fellow corrected her. "Just Meredith"
Dr. Emery's eyes gleamed again in the dark of the night.
"I'll text you very soon, Meredith"
"Please do, D-"
"Harper"
As her friends almost disappeared from view and the neurosurgeon was smiling down at her, Meredith bit her lip and allowed herself to be a little daring for once. Without thinking twice, she leaned closer and pressed a quick kiss on the doctor's cheek, leaving Harper gaping in surprise.
"Thanks for making my day. Goodnight, Harper" she whispered softly.
With that, she turned and walked towards her friends, head over heels and still not fully processing what had happened, but feeling like the heroine of a romantic comedy, her own romantic movie, as she heard Harper, the Harper Emery who had just asked her out and was now visibly flustered, whisper, "Goodnight, Meredith".
Text
Podcast: The Who What How When and Why of Error Correction
Why do students make errors? Are errors bad? Should we even bother correcting them? We answer all these questions and more…
Our References on Error Correction
Error Correction in Speaking - The Fun Way: Herbert Puchta (Teaching Teenagers Tip #4)
Dr. Stephen Krashen Plenary KOTESOL International Conference 2011
ISTEK ELT 2013 Concurrent Keynote - Jeremy Harmer "Does Correction Work? It Depends Who You Ask!"
  Tracy Yu:  Welcome to the "TEFL Training Institute Podcast." The bite‑size TEFL podcast for teachers, trainers and managers.
Ross Thorburn:  Hi, everyone.
Tracy:  Hi, welcome to our podcast.
Ross:  A lot of the time when we're hanging out and we speak Chinese to each other, I often ask you to correct my Chinese if I make any mistakes. When you do, it's really annoying.
[laughter]
Tracy:  Why is that?
Ross:  I don't know. It's like there's something about being corrected. You always feel that you're making a comment about how bad my Chinese is and it really annoys me. I don't know, it's funny. I always say, "Can you please correct me more?" but when you do, it's really annoying.
Tracy:  Do you think that helps you?
Ross:  Yes, but it's bad for your motivation because you feel annoyed by it.
Tracy:  What's the point? [laughs]
Ross:  The point is that today our podcast is about error correction and helping students and trainees and stuff learn from their mistakes.
Tracy:  As usual, we got three main questions or areas that we're going to discuss.
Ross:  First one is, why do students make errors?
Tracy:  The second one, should we correct errors?
Ross:  Finally, what principles are there in correcting students' errors?
 Why do students make errors?
Ross:  Why do students make errors?
Tracy:  One reason is that errors are evidence of learning and part of the learning process. We learn how to drive and we learn how to...
Ross:  Swim. [laughs]
Tracy:  ...cook, how to swim and new skills. We usually make some mistakes and then from the mistakes, we can learn how to do it better.
Ross:  Yeah, no one does anything perfectly the first time.
Tracy:  The first time, yeah.
Ross:  That's impossible. Something I found really interesting about developmental errors is this thing called...we're not going to go too much into the weeds here with Second Language Acquisition, but I just wanted to mention this because I thought it was so cool.
This is an example of U‑shaped acquisition from Rod Ellis' book, "Second Language Acquisition." Instead of me reading them out, Tracy, can you just make a sentence with each of them and I'll do a commentary?
Tracy:  Sure.
Ross:  This is for students acquiring ate, as in the past tense of eat.
Tracy:  I eat pizza last night.
Ross:  This is when you've not been able to mark the past tense, that's all, which is the first stage, and then...?
Tracy:  I ate pizza last night.
Ross:  Really interesting, right? The first type of past tense verbs that students acquire are irregular ones, which Tracy just learned. Next?
Tracy:  I eated pizza last night.
Ross:  This is after you've started to learn the past tense rule of adding ‑ed onto the end of things, but you've overused it. You've overgeneralized it.
Tracy:  I ated pizza last night.
Ross:  Here you've made some hybrid between the two, and the final one?
Tracy:  I ate pizza last night.
Ross:  Great.
Tracy:  Which is correct.
Ross:  Which is, yeah, you've now acquired it. Congratulations.
Tracy:  [laughs] Thank you, but the second and the fifth stage, I used the words correctly, but it doesn't mean I was at the same stage of acquiring the language.
Ross:  Yeah, which is so interesting. This is such a great example, because it shows how making errors is evidence that you're developing.
Anyway, that was the developmental kind. What's the other main reason that students make errors?
Tracy:  Maybe they directly translate from their first language to the language they study?
Ross:  It's not always a direct translation, but yeah, call it L1 transfer.
Tracy:  Transfer, yeah.
Ross:  A long time ago, people thought that all the errors came from that. Gradually, they came to realize that that's not the case and a lot of the errors that students make are the same regardless of their first language. But the transfer errors are actually harder to get rid of than the developmental errors.
 Should teachers correct students’ errors in ESL classes?
Ross:  Let's talk about the next one. Should we correct errors? What do you tell teachers on teacher training courses?
Tracy:  I think it really depends. Sometime, I tell them to ignore that.
Ross:  Wow, OK. When do you say to ignore errors?
Tracy:  Two main scenarios. Number one, if it's not really in a learning setting. For example, you haven't seen a student for a while, you see them and have a chat, and the student is really talkative and very motivated and probably makes some mistakes and has errors in their sentences. Really, to be honest, I don't think that's a great context for us to correct their errors.
Their motivation was not so much to learn, they want to communicate with you. Correcting them is probably just going to demotivate them. The second scenario is if the error is really not impeding the communication that much, you probably want to ignore it.
Ross:  Yeah, right. Actually, I'm going to play you a little Jeremy Harmer quote about what you were talking about there, this process of deciding if you should correct an error or not.
[pre‑recorded audio starts]
Jeremy Harmer:  Every time a student makes a mistake in class, you have to make a judgment. That's actually not true, you have to make about four or five judgments. The first judgment you have to make is, "Was it wrong?" The second judgment is, "Actually, what was wrong?" because sometimes it's not that easy to work out what was wrong.
The third judgment you have to make is, "Should I correct it or should I just let it go?" The fourth judgment you have to make is, "Should I correct it or should somebody else correct it?" Suddenly in that one moment when students just make a mistake, you have to work out what to do.
[pre‑recorded audio ends]
Tracy:  There are four main things that we need to consider immediately when a student makes a mistake. They are who, when, what, and how.
Ross:  What was the error? Yeah, because this is sometimes difficult to tell. Is it a pronunciation mistake or is it lexical or is it grammatical or...?
Tracy:  Who's going to correct it?
Ross:  It could be the teacher. You could try and do peer correction, you could try and get the person to correct themselves, I suppose.
Tracy:  Yeah, or even small groups sometimes. When? Should you correct the error immediately, or should you wait? We always say delayed.
Ross:  The last one was?
Tracy:  How. What kind of techniques are you going to use?
Ross:  Good, hang on to that thought, because we'll talk about that in the next segment. I actually wanted to play another quote. This one's from Stephen Krashen. This is what Stephen Krashen thinks about error correction.
[pre‑recorded audio starts]
Stephen Krashen:  Output plus correction. You say something, you make a mistake, someone corrects it. You change your idea of what the rule is. The six‑year‑old ESL child comes into the class and says to the teacher, "I comes to school every day."
Teacher says, "No, no, I come to school every day." The child is supposed to think, "Oh yeah, that s doesn't go on the first person singular, it goes on the third person singular."
I think that's utter fantasy, but that's the idea.
[pre‑recorded audio ends]
Ross:  It's quite interesting. He thinks error correction is a complete waste of time. Dave Willis, the task-based learning guru - pardon, he's someone else - also thinks error correction doesn't work.
Tracy:  Oh really?
Ross:  Not everyone says that but I just wanted to give an example of both.
Tracy:  That's quite confusing though. Should we correct or...?
Ross:  There's other research that says that you should, and that it does make a difference in some situations but not in others. I think the research isn't quite conclusive.
Tracy:  Definitely our students haven't read about this research.
[laughter]
Tracy:  They really demand that teachers correct their errors in the classroom, because otherwise they don't think they're learning anything.
Ross:  For me, that's true. At least some of the value in coming to a language class is that you get your errors corrected, because input, you can buy a book or you can watch TV; there are lots of ways you could get input. Maybe it's not always easy to get great practice, but a lot of people in a lot of countries do have opportunities to practice English.
Here in Beijing, you could just go to a Starbucks and try and find a foreigner or some people might have to speak English for work. The big advantage of going to a language class is that you get correction.
Tracy:  This makes me think of a student in the class I just taught this afternoon. It was about some phonological aspects, and she told me at the end of the class, she said, "Oh no, I've finally realized I have no knowledge, no idea and no awareness of the features of connected speech, because I've studied English for so long, but I always have trouble understanding people in listening."
If I didn't have that correction in my lesson, I think she probably wouldn't have become aware of those features for a long time.
Ross:  Yeah, absolutely. Good, you should send that to Stephen Krashen.
 How should teachers correct students’ ESL errors?
Ross:  Let's talk about some principles for error correction. We'll just pretend that we've ignored Stephen Krashen, we've decided that when students actually made an error. What do you think are some good ideas or best practices or advice on correcting errors?
Tracy:  I will say, the first one is, don't correct all the errors.
Ross:  Yeah, it'd be way too many, right?
Tracy:  Yeah.
Ross:  That'd be really annoying.
Tracy:  [laughs] Yeah. They won't have much time to really practice.
Ross:  I think as well, we know from Second Language Acquisition that not all of the errors that you correct are actually going to help the students.
Tracy:  Just try to prioritize errors. Of course, again, the fundamental stuff: what your lesson aims are, and then what kind of language or skills you are trying to focus on in your class. Stick to those. Those should be prioritized.
Ross:  Another thing to add is to correct errors that affect more students instead of fewer students. I agree, if it's in your plan, then correct it, but I also think if it's a problem all the students are having, or most of the students are having, then it's probably worth correcting.
That's a bit about what to correct, how about some how to correct? Actually, can I play you another quote? I think I'm setting a record for the number of quotes - this is number three.
Tracy:  OK, go on.
Ross:  This is Herbert Puchta, I think his name is, talking about an error correction technique.
Herbert Puchta:  Imagine a class where lots of students have problems getting the famous third person "S" right. Take a piece of paper and write an "S" on it. Stick it somewhere on the wall. When a student makes that mistake, point to the paper, wait and smile. Most probably, the student who's just made the error will notice what you want them to do and correct themselves.
Ross:  I thought it was interesting that he also chose the third person "S" as his example. I think what he's trying to say there is that's a really unobtrusive way of correcting a student. You can correct someone as they're speaking, by pointing at something, but you don't have to interrupt them.
Another one for how, this may be also related to who, is to try and get the students involved in their correction.
Tracy:  Yeah, I get it, but sorry, I just feel like sometimes...We talk about who, and we always want to encourage students to correct themselves, but one technique teachers use to try to raise their awareness of the error is repeating the error.
Ross:  It's interesting that you bring that up because...the other one is called a recast, when the student says something wrong and you repeat it back to them, but you say it right. There's research that shows that when you do that, a lot of students don't realize that you are correcting an error. They just think you're repeating something.
Tracy:  Exactly.
Ross:  What are some ways of raising students' awareness that they've made an error?
Tracy:  What I experimented today was WeChat. Of course, I think there is...
Ross:  For those of you know in China, WeChat's an instant messenger type thing.
Tracy:  I ask the students to join the group.
Ross:  A group chat.
Tracy:  Yeah, group chat. Yeah, before the lesson started. Then near the end of the class, having listened to what they said, I posted four or five sentences into the group chat so everybody could see them.
Ross:  What's in these sentences? Mistakes the students have made?
Tracy:  Mistakes and also correct sentences together. Of course, I changed some of the words they were using, or the pronouns or places. Then I said, talk to your partners and tell each other which ones you think are correct and which are not, and for the ones you think are not correct, you can type the correct versions and send them to the group.
Ross:  I think you also hit on another thing there, that's something to get students involved, but another thing is that, the anonymity. Not singling someone out.
Tracy:  Another thing I always tell teachers: there should be a correction circle. You raise their awareness, and usually we stop there and move on, but no - there should be another step to complete the circle, which is to give students another chance to use the language correctly by themselves. For example, the pizza mistakes.
Ross:  I ated pizza yesterday.
Tracy:  I mmm pizza yesterday.
Ross:  I ate pizza yesterday.
Tracy:  What did you have for breakfast today?
Ross:  I ate cereal for breakfast today.
Tracy:  Really? Do you really? [laughs]
Ross:  No, I actually drank coffee today, but...
[laughter]
Ross:  ...this is a different verb. I didn't think it would fit your point.
Tracy:  You know what I mean, just...
Ross:  Yeah, give the students a chance.
Tracy:  It's something that can be really simple. Just ask a similar question and they can answer.
Errors Wrap up
Tracy:  We talked a lot about correcting errors, but the examples we were using really focused on the language itself. Don't forget that error correction can also relate to performance or behavior in class.
Ross:  What does that mean?
Tracy:  For example, teaching young learners and if the student wasn't well behaved, I think we also need to...
Ross:  Give feedback.
Tracy:  ...give feedback on that.
Ross:  Yeah, good point. Bye everyone, thanks for listening.
Tracy:  Bye.
Tracy:  For more podcasts, videos and blogs, visit our website, www.tefltraininginstitute.com.
Ross:  Www.tefltraininginstitute.com. If you've got a question or a topic you'd like to discuss, leave us a comment.
Tracy:  If you want to keep up to date with our latest content, add us on WeChat @tefltraininginstitute.
Ross:  If you enjoyed our podcast, please rate us on iTunes.
operationrainfall · 4 years
Text
A while back, I promised I would provide some impressions of a few of the games revealed at the USC Games Expo 2020. And though this has taken longer than I was anticipating, I am a man of my word. I had a few hiccups along the way, but eventually settled on the following 5 projects. None were featured in the keynote address, but they all have merit. There’s a variety of genres here, from mystery to platformer to action adventure to SHMUP and even mobile. To avoid bias, I’ll list the games alphabetically, and share what they do well and poorly.
Although The Death Mask is the first USC Games title listed, it’s paradoxically the last one I played. That’s cause originally I was aiming my sights at another narrative game, but had problems installing it. So imagine my pleasant surprise when I found a murder mystery rife with political intrigue! The Death Mask is touted as a mobile mystery game, but it also works well on Mac and PC. You use the mouse to progress, and drag a flower icon to select narrative choices. You can agree with people, dissent, search for gossip and much more. It’s a simple interface, but in a good way.
I was drawn by the aesthetic style of the game. It uses simple but bright colors, and the masks in the game’s main masquerade are all eye catching. While you read and make choices, relaxing ball music sets the mood. You play a guard searching for clues into the disappearance of a ceremonial blade, and do so by questioning those attending the event. As you play, you’ll gain access to different masks, which allow you to get answers out of unwilling guests. For example, early on I put on a Servant mask so that I could get helpful gossip from a worker. It’s a fun little project, and should appeal to many types of gamer. My only minor complaint is that I found some typos during my time with the game. Other than that, it’s a lot of fun and surprisingly well put together.
This next USC Games project also caught my interest due to the art style. It’s very attractive and full of cute details, and it’s called Riscue. This is a game focused on environmental change. You play a squirrel grabbing nuts before the tide crashes in. Basically you just jump your way through stages by pressing the Up key until the timer runs out, leaping over holes and grabbing tasty food. At the end of each well animated stage you climb a tree and count your nuts.
Though I’m totally fine with single button gameplay, and found Riscue fun enough, it does have some issues. First of all, no matter how many nuts I got in a stage, my score was always the same. This immediately killed my interest in playing more. With no story to speak of, and no score system as motivation, it’s hard to find a reason to keep playing. That said, the aesthetic presentation and overall theme of the game still kept me engaged. I think the game could be a great one, it just needs a tighter gameplay focus to match the visual presentation.
The next game was one of the most impressive that I spent time with. Scarlet could easily be from some decently sized studio on consoles, it’s that good. Considering it was made by a small team of USC Games students with limited resources, that’s even more impressive. The demo played very smoothly for the most part, and only had some minor slowdown, though that could be attributed to my computer. I played the titular Scarlet as she searched for someone named Emerald in a large facility. I wasn’t clear what my ultimate intent was, though it seemed I was there to end Emerald’s life. Despite that, she kept welcoming me over intercoms, and this discrepancy made me really interested in the plot of the game.
But it wasn’t all philosophical. As I wandered about what looked like a factory, I came across hideous insectoids that wanted me dead. I won’t lie, they reminded me a lot of the bugs from Starship Troopers, but they were still cool looking and fierce. I fought them back with slash attacks, both light and heavy, and a great dodge move. I was very happy Scarlet was made with controller support in mind, as my 360 controller worked marvelously. More and more clues were unearthed as I explored between fights, and the demo culminated with a boss battle against a gigantic spider beast. It tried to burn me with concentrated beams of light, as well as slashing at me with giant appendages. Even then, I was more than a match for it. Overall, this is a really good proof of concept, and I find myself genuinely curious to see where this game goes.
I’m gonna start with the fact I have never played a mobile game on my phone before. Ever. So I wasn’t sure what to expect from Sky: Children of the Light. And I’ll admit, I had a bit of inherent bias against mobile games, thinking most to be cheap cash ins or shovelware. Not so here. Sky has shockingly good production values, including truly vibrant music and lush if simplistic art. The entire game is told visually, through various animations and cutscenes. Despite that, you can read the emotion in each part of the game.
As for how Sky plays, you naturally fly around hunting for clues and artifacts in an expansive and magical world. You gradually increase your capacity to soar the winds as you wander. Though they took some getting used to, the controls are also pretty intuitive in Sky. One finger controls your movement, swiping with another changes the camera angle, and pressing and holding various icons lets you do things like pick up candles and react with emojis. I admit I didn’t care much about the latter, but I was all for exploring this breathtaking world. If nothing else, Sky has totally changed my negative bias against mobile gaming. And considering this is a totally free game, I can easily recommend it to everyone.
Finally, I’ll close things out with Tri-Ger. I love the simplicity and challenge of SHMUPs, so I had to try one at USC Games Expo. It has pumping music and simple yet attractive artwork. Your goal is to match colors when destroying enemy waves to increase your score. That’s it, pretty simple. But that’s not the same as easy.
Honestly I had my ass handed to me by Tri-Ger. The enemy projectiles weave back and forth like snakes, and it’s easy to get hit unexpectedly. Oh and did I mention the enemy waves are procedurally generated? That said, it’s a cool premise reminiscent of games like Ikaruga, and the tunes were very catchy and enjoyable. Sure I might have appreciated features like secondary weapons or bombs, but the core experience here has a lot of potential.
All in all, I was quite impressed by the diversity of talent shown by the students at USC Games Expo 2020. I may have to reevaluate my expectations on the abilities of college students after this. Sure not every game was breathtaking, but they all had a lot of heart and skill evident in them. I encourage everyone reading this to check out the other games available at the Expo, and open your minds up to new gaming experiences.
goodra-king · 4 years
Text
Transcript of Incorporating Storytelling Into Your Sales Process
Transcript of Incorporating Storytelling Into Your Sales Process written by John Jantsch read more at Duct Tape Marketing
Transcript
John Jantsch: This episode of the Duct Tape Marketing Podcast is brought to you by Gusto, modern, easy payroll benefits for small businesses across the country. And because you’re a listener, you get three months free when you run your first payroll. Find out at gusto.com/tape.
John Jantsch: Hello and welcome to another episode of the Duct Tape Marketing podcast. This is John Jantsch, and my guest today is John Livesay, he is also known as The Pitch Whisperer. He’s a sales expert and storytelling keynote speaker on sales, marketing, negotiation and persuasion. He’s also the author of a book we’re going to talk about today, Better Selling Through Storytelling, the essential roadmap to becoming a revenue rockstar. So John, welcome to the show.
John Livesay: Thanks for having me, John.
John Jantsch: I think a lot of marketers, even increasingly small business owners are kind of getting into this idea of story telling as a great marketing tactic. But how would you describe storytelling in the sales, purely sales environment?
John Livesay: Well, the old way of selling is to push out a bunch of information, hope some of it sticks. And it just doesn’t work anymore. So what storytelling does, is it allows you to be memorable and magnetic because we’re wired to listen to stories in a very different way than we do when someone’s giving us a bunch of information of features and things. And stories pull us in and also our defenses go down.
John Livesay: When you tell a good story of a case study and turn that into an interesting story with a little bit of drama, or a personal story of why you became a lawyer or an entrepreneur or an architect, whatever it is you are doing, that's what people remember about you. And when you're going up against competitors, if you really want to be memorable - people say, "Oh well, we hope to go last if it's a final three," but you can't control that. What you can control is telling a good story.
John Jantsch: Would you say that this is sort of new to selling? That it’s not the way that maybe was taught in the traditional sales training of 10 years ago?
John Livesay: I would say it is a relatively new awareness of its importance. Traditional selling was, tell them what the features are and then tell them what the benefits are and show how it solves a problem. But there was no story there. I was working with an architecture firm and they traditionally would go in for these final three, one hour presentations, pitches, interviews, whatever you want to call it. And show their work and think, well whoever has the best design to remodel a law firm or an airport, will get the business.
John Livesay: It was all about … Or an ad agency goes in to pitch to win new clients, “Well, here’s our work.” There was just no story about them, or how they came up with the concept or another story of somebody they helped. And so this awareness that whoever tells the best story is going to get the yes, is something that a lot of people are going, “Wow, we really need to learn to become better storytellers.”
John Jantsch: This is off the topic a little bit, but in researching your work in preparation for this interview. I stumbled upon a YouTube video, of you being interviewed by Larry King. And so I’m curious how that came about. Just because I don’t think of Larry King interviewing sales authors.
John Livesay: Well, he has a show called Breakfast with Larry King. And a friend of mine is one of the elite group of people that gets to have breakfast with him on a regular basis. And one of them is named Cal Fussman who was a journalist for Esquire Magazine and Cal’s also a keynote speaker. And he had said, “I’ve got to learn how to sell myself as a speaker and I’m a journalist, I don’t know how to sell.” I said, “Oh, but Cal, you know how to tell great stories and you know how to ask great questions. So let me show you how your journalists skill of storytelling can help you sell yourself.” And that was a big light bulb moment for him. And then he said, “Oh, I want to have you on the show with Larry King.” And I did my research, as you could imagine, Larry’s done over 60,000 interviews.
John Livesay:  And I read that he does not like small talk. I had some things ready to go that were about him and not about the weather or anything. And one of them was that he got his big break interviewing Frank Sinatra when he was just a radio DJ and not a television personality. And I mentioned to him off camera, I said, "I really love that story of how interviewing Frank Sinatra got you your big break." And he smiled and said, "That was a good night."
John Livesay: On camera, he’s looking at my book and he said, “Your book is called Better Selling Through Storytelling, what makes a good story?” And John, I don’t know what made me have the courage to say this. I said, “Well, you have such a great story of how you got discovered by interviewing Frank Sinatra, would you mind telling that story? And then we can break down the elements of that for the audience?” And he goes, “Sure,” so he told the story and then I broke it down into the four elements of what makes a good story, which is basically exposition, painting a picture, there’s a problem and there’s a solution, and then the secret sauce is resolution. And I’m happy to share that story if you want to hear it, but that’s how that all happened.
John Jantsch: That is fun. You mentioned, and maybe we can weave the story in there, but I want to also get into some of the other elements of the book. You mentioned one of my favorite words, problems. It's not really a favorite word necessarily, but I've discovered that a lot of times people searching for a solution don't actually know what the problem is or can't really articulate it. It's just, I don't have enough sales or my business just doesn't feel right.
John Jantsch: And what I’ve found is that storytelling, a lot of times, or at least telling the story of how they maybe got to this point or something, a lot of times helps them actually understand the problem. And I think there’s such a strong connection, at least I’ve discovered the person who can actually describe or articulate or, you mentioned empathy, have empathy with what the real problem is. I think a lot of times has such an advantage, don’t they?
John Livesay: Well, they really do, John. I always like to say that the better you describe the problem and show empathy for the people experiencing the problem, the more the potential buyer thinks you have their solution. That's when you get that aha moment where someone says, "Oh, you get me," or "You are in my shoes." It's like in psychotherapy: when people come in for therapy, they say, "Oh, I'm here because I'm having trouble sleeping."
John Livesay: And that’s known as the presenting problem. That’s not really the core problem. The problem is they’ve got money issues or whether something else is keeping them up besides sleep problems. So I think the same is true. As salespeople, we need to think of ourselves as almost doctors a little bit, where we’re asking questions and not just accepting the first problem somebody says is the reason they’re here.
John Jantsch: Yeah, because so often they’re not ready to even hear a conversation about what we sell, because they can’t really connect their problem with our solution. I mean, isn’t that kind of a lot of the danger of just showing up and going, here’s what you need.
John Livesay: Yeah, until you realize you have a problem that needs some help. It's the difference between Advil for a migraine versus a vitamin to prevent you getting sick. It's like, I don't really need an Advil if it's just a vitamin. But that's what storytelling is so great at. You describe another person that's very similar to the person you're in front of, and here's what I found out. You tell the story: two years ago they came to me, they weren't quite sure what was wrong with their business, they knew they needed more sales and the problem was just sort of hazy for them.
John Livesay: And after working with them, we defined that there were really three obstacles, and here's what those three obstacles were and here's the solution we came up with. And now, a year after using my product or service, their life is so much better. That's the resolution. Their sales are up 10%, they're not stressed out, they feel better. So you're giving all kinds of … And if that sounds like the kind of journey you'd like to go on, then we might be able to work together. Now your closing question is, "Does that sound like the kind of journey you'd like to go on?" not "Do you want to buy my product?"
John Jantsch: You just showed me how to structure a story around a problem. What about every salesperson's initial problem: I don't get a chance to tell the story because I can't get my foot in the door. Is there a way to use storytelling or, I know you talk a lot about elevator pitches, for gaining trust? How do you get that kind of first chance to tell the story?
John Livesay: Well, I think a lot of it is to be aware that people have three unspoken questions before they let you come in, or even when they're on the phone or in person with you. And the first one is a gut thing: do I trust you? That's really the fight-or-flight response. Is it safe to open this email? Is it safe to even have a conversation with you? And then it moves from the gut to the heart: do I like you?
John Livesay: Are you showing any empathy, likability? And then it goes into the head, and you might be telling a story about how you've helped other people. People are thinking, "Well, would this work for me?" And if they can't see themselves in the story, they still won't do it. So I think for getting your foot in the door, especially if you're at, let's say, a networking event, a good elevator pitch is not an invitation for a 10-minute monologue.
John Livesay: I tell people, make it very conversational. Literally start out with, "You know how a lot of sales teams are struggling to make themselves memorable and not just be selling on price? Well, what I do is I help people go from invisible to irresistible, and I'm called The Pitch Whisperer." And that's all I say, and that usually intrigues people enough to say, "Huh, what's a pitch whisperer?" Or, "How do you go from invisible to irresistible?" But you've described the problem of, "Oh yeah, I'm struggling with being memorable," or "I'm struggling with only being seen as a commodity."
John Jantsch: Everyone loves payday, but loving a payroll provider? That’s a little weird. Still, small businesses across the country love running payroll with Gusto. Gusto automatically files and pays your taxes. It’s super easy to use and you can add benefits and management tools to help take care of your team and keep your business safe. It’s loyal, it’s modern, you might fall in love yourself. Hey, and as a listener, you get three months free when you run your first payroll. Try a demo and test it out at gusto.com/tape that’s gusto.com/tape.
John Jantsch: We’ve all probably seen that person that just holds a whole entire dinner party wrapped with their storytelling. They just seem to be really good at it. Is there a way for … Because I’m sure there’s a whole lot of listeners out there going, “Well, I’m just terrible at it, I can’t think of a story to tell. I stumbled through the details,” or whatever they’re thinking. Is there a way to get better at it?
John Livesay: Yes, it’s like any other skill. You practice it, the awareness of what makes a good story are those four elements that I talked about earlier. Don’t just start in describing the problem. Give us some perspective, in order for us to be in the story, we have to paint the picture. And have a little bit of drama in your problem.
John Livesay: Don’t make the problem seem so easy that it’s not interesting, and there’s no conflict or it’s a suspense of whether it’s going to get solved or not. And a really great story has a little resolution bumper surprise to it that makes people go, “Oh,” and you know, you’ve told a really great story John, when other people want to share it with their friends.
John Jantsch: Can you readily think of an example of that bumper surprise element?
John Livesay: Yes. Let’s go back to the Larry King example. So Larry King gets the opportunity to interview Frank Sinatra at a time when nobody … He wasn’t doing any press interviews because his son had just been kidnapped. This is in the 60s and he was really mad at the media because they were saying it was due to Frank Sinatra’s mafia connections. So Jackie Gleason is a friend of Larry King’s from an interview and offers to set up the interview. Goes really well and Frank brings up the kidnapping and so it was great. And then he invites him to bring a date to come here and sing the next day.
John Livesay: And Larry’s thinking, “Oh man, this is great. Whoever I bring is going to think I’m really hot stuff.” And Larry didn’t have a lot of money at the time. And they’re sitting at the front table by the stage and Frank calls his name out. And, so Larry is just like, “Oh, the evening couldn’t have gone better.” And he’s driving his date back to her place and she’s like, “Oh, stop here and buy some coffee for tomorrow morning, I don’t have any.”
John Livesay: And this is before ATMs and credit cards were widely used, and Larry didn't have any cash on him. He didn't want to blow the whole cool guy image, so he walks into the store, comes back a few minutes later, and she's like, "Where's the coffee?" He goes, "They couldn't change a hundred." That's the resolution of the story. Now, he just had the story of, "I interviewed Frank Sinatra, I got my big break." That's interesting, but it's not nearly as memorable as that whole journey of the date.
John Jantsch: Yeah, so how do salespeople … I mean, how do you suggest, because again, that was a great story. Even people who have things like that happen in their lives sometimes don't connect all the dots to it being a great story. How do we kind of unearth those great stories? Because I think, obviously with salespeople, sometimes it's a client thing, but I always find the best stories are the stuff that happened to us.
John Livesay: Well, I can tell you an example of helping Gensler, the world's largest architecture firm, win a $1 billion sale renovating the Pittsburgh Airport when they were up against two other firms and they were literally told, "Look, anybody can do … You're all in the final three. You can all do the work. We're going to hire the people we like the most." And that's when they went, "Whoa," these soft skills actually make you strong. Soft skills of storytelling, confidence, likability, empathy.
John Livesay: The story is how I helped them turn their case study into a story. They basically had some great before-and-after pictures of another airport and another airline that they had helped, but there was no story there. So we used the same structure. We said, okay, the exposition is: two years ago, JFK approached us to renovate the waiting area for Jet Blue. And the problem was that during that time we had to rip up all the floors in the middle of the night, and get it all done so that the stores could open at 9:00 AM the next morning without losing revenue.
John Livesay:  We had all our vendors on call during the night and sure enough at two in the morning, a fuse blew and we had somebody there in 20 minutes to fix it. And at 8:59 the last tile went down and all the stores opened. And then a year after the design, sales are up 15% of the retail stores because people are spending more time shopping because of what we’ve done with our design.
John Livesay: That is hitting all of the elements. The exposition: we know what airline, when, all of that. So we're there, we see it, then we know the problem: got to rip up all the floors, there's a little bit of drama. And so instead of just saying, "We use critical thinking when we do a project," they showed it in a story instead of telling it. And then the solution is the stores opened on time, but the resolution of that story is that sales are up 15% because of the design a year later.
John Jantsch: Yeah, the value. All right, so I’m telling the story and it’s going really well. I’ve got a great story, but then the objections come. And maybe it’s a different skill, but it’s going to happen. How do we link the story together with maybe the objections?
John Livesay: The two most common objections are, we don’t have enough money or, this isn’t a good time for us to make a decision, correct? So your question is, how can storytelling help overcome one of those kinds of objections?
John Jantsch: Yeah, maybe. Because I'm thinking people get good at this story part, and it paints a good picture, but quite often in the sales process there are still going to be objectives. Objections, I'm sorry.
John Livesay: Let’s take the most common one, which is your price is too high. And we can use a story along with the concept of, our client Jet Blue or JFK, when we gave them the bid, they felt that, “Gosh, this is more expensive than we thought.” And we explained to them that when we did another airport in Toronto, that the reason that we needed to have this budget higher than expected. And then they just went on to tell another story, where they describe a problem and a solution and they were so glad they had that money budgeted, so that they didn’t have to go fixing something in advance is much less expensive than having to fix something that you didn’t even plan possibly going wrong.
John Livesay: Sometimes the money you invest in things now prevents problems later, and all that good stuff. So again, storytelling is a way to handle objections. You don’t make them feel crazy for bringing up the question. First of all, you listen and you look at it as a buying sign, and then you say, “Let me tell you a story of somebody else who felt the same way, and here’s how they ended up justifying the cost or where they found the money,” or whatever it is.
John Jantsch: I’ve always been a big fan of case studies. Showing somebody, “Oh yeah, your kind of business, here’s a result we got for them.” I mean in a lot of ways, couldn’t you use this idea of storytelling more effectively in written documents and webpages as well?
John Livesay: Yes, I think you can, certainly with … When you don’t have the opportunity to present your case studies in person or on the phone, make sure that the case studies you have on your website use the same storytelling structure we just went over, so that people are taken on a journey and it’s not just a bunch of before-and-after pictures with no story.
John Jantsch: Right, which is the typical sort of, here’s the problem, here’s the solution. Do you think, in terms of companies equipping their salespeople or just a salesperson going out there and training themselves, that they need to be looking for new skills, different skills?
John Livesay: I think we always need to keep our skills honed and practiced. When you get to the place where you think you know everything and don’t need to practice anymore is when you really are not at your best. If Tiger Woods still gets coaching and actors who’ve won Academy Awards still rehearse, we as salespeople definitely need to keep practicing.
John Jantsch: I’m assuming you do consulting on this very idea because you’ve talked about a couple examples of that. Do you have a process when you walk in? Do you have to start unpacking, finding, unearthing these stories and then say, “Yeah, that’s something that you guys ought to be using.” What’s your process for finding those with a company?
John Livesay: Well, if I’m helping a company prepare for this one hour interview against competitors, the process is, we reverse engineer the ending of the presentation. So many endings are, “Well that’s all we got, any questions?” Horrible ending. We work on, what do you want the audience of the [inaudible] buyers to think, what do you want them to feel and what do you want them to do? We develop answers for that and that’ll be our closing. And then I said, “Okay, what’s going to be the opening?” “Oh, thanks for this opportunity, I’m excited to be here.”
John Livesay: Ugh, nobody cares that you’re excited. It’s not about you. Let’s make sure that the opening pulls in our understanding of the problem and why we’re the right people to solve it. And then we look at the team slide, make sure there’s some really interesting stories about why you became an architect or a lawyer or whatever it is you’re doing as opposed to, “Hi, my name is Joe, I’ve been here 10 years.” Nobody cares.
John Livesay: But “when I was 11, I played with Legos, and that’s what inspired me to become an architect. Now I have a son who’s 11, I still play with Legos, and I would bring that passion to this job.” Well, that’s personal, memorable, all that. And then I work with the case studies, as we said, turning those case studies into stories. That’s my process, and it helps people win because, remember, the problem is that they’re not memorable. Stories make you memorable, and instead of pushing out information, stories make you magnetic: you pull people in.
John Jantsch: Yeah, the process you just described doesn’t sound terribly unlike how you might prepare a keynote speech, does it?
John Livesay: It’s very similar, and people have to realize you’ve got to practice it; it has structure, and there are pauses and timing. Once we have the content down, then we start working on the delivery.
John Jantsch: Speaking with John Livesay, author of Better Selling Through Storytelling. So John, do you want to tell people where they can find more information on you and, of course, pick up a copy of the book?
John Livesay: Right. If you text the word pitch, P-I-T-C-H, to 66866, I will send you a free sneak peek of the book. Or you can go to my website, John Livesay, L-I-V-E-S-A-Y. Or if you can’t remember any of that, just Google “The Pitch Whisperer” and my content will come up.
John Jantsch: Awesome. Well, John, it was great to finally get this recorded, and hopefully we’ll run into you soon out there on the road.
John Livesay: Thanks John.
from http://bit.ly/2TqfoJm
0 notes
givencontext · 4 years
Text
Recently Reading
My last blog was about watching terrible television, so you might think I haven’t been reading much. In fact, my television watching is less bingeing and more carefully planned portions (like Bright Line Eating.) This means I am *still* watching my way through Highlander. Meanwhile, I have actually been reading several books. I currently don’t have the wherewithal to make a whole blog post about any one of these books, so this week you get a run-down post compiling several of my recent reads. Enjoy!
Scary Stuff
Since this post covers things I read in October, it includes some creepy stuff. (Of course, I reserve the right to read creepy stuff year-round.) The longest and creepiest was Dr. Sleep. It has only been a few months since I listened to the audiobook of The Shining while on vacation. With the movie coming out, I had to dig into Dr. Sleep. I have listened to a lot of Stephen King in the last year or two. He gets a little long-winded, but so do I, so I can’t fault him for that. I don’t know when his birthday is or anything, but I consider myself a fan. In other words, I feel a sort of moral obligation to like everything he does. That being said, I really did like Dr. Sleep. Obviously the book is better than the movie. Considering how different the movie of The Shining is from the book, I feel like they did a good job trying to make the movie of Dr. Sleep more of a sequel to the movie, while the book is more of a sequel to the book. Read and watch both. Compare. Contrast. Repeat. Oh, the life of an English major! I foresee people making entire careers out of these two books and two movies.
Another book I read that was more creepy than scary was The Witch of Hebron by James Howard Kunstler. I have been wanting to read this one for a while. It is set around Halloween in the post-apocalyptic town of Union Grove, New York. This is the second book in The World Made by Hand series, and I read the first book twice. I finished this book on Halloween night, which was absolutely perfect. Now I am 61% finished with the third book, A History of the Future. This one is set around Christmas, so it’s a perfect winter read. There are four books in the series, with the fourth being The Harrows of Spring, which I think means I will not be disappointed if I wait until spring to read it. Four books, four seasons, get it? I have to admit that my favorite character in these books is Brother Jobe. He is creepy and has some sort of supernatural power that he augments with just plain pushiness. I think he might be the best example of how to survive and thrive after the apocalypse.
These books had me thinking about the traditions surrounding Halloween, Day of the Dead, and Christmas. You might not know that there used to be a tradition of telling ghost stories on Christmas Eve. That’s where Dickens got the idea for A Christmas Carol. I learned about this in a literature class about “ghosts and gender.” In that class we read The Turn of the Screw, so I decided to read it again. There will also be a movie based on this one soon, but I am on the fence about seeing that one. This is a classic ghost story and a quick read. If you haven’t read it, you should.
SFF
For Sci-Fi Book Club, recent reads have been Galactic Forge (book one) and All Systems Red (book one in the much-acclaimed Murderbot Diaries series). We read a lot of book ones in Book Club. I think I have mentioned it before. Galactic Forge was a fun read, but I won’t be bent out of shape if I never get around to reading the rest of the series. Murderbot on the other hand… I hope the second book made next year’s book club list, because I definitely want to find out what happens next. I like that our book club occasionally takes on a Book Two. For a long time it was the same core group of attendees, so it was safe to suggest a sequel if book one had been on the list. Thanks to Kathy sharing the details on Meetup, we have had an influx of fresh blood, so a few people might be thrown off next month when we discuss Children of the Divide, which is actually Book THREE in a series. If it is similar to the second book in the series, Trident’s Forge, it did a good job of being a sequel that could also stand alone. I need to finish A History of the Future so I can start this one. I am very grateful that my tap dancing class changed nights so that I can make it to Sci Fi Book Club again. It is such a great group. This is one of the highlights of my month. I enjoy chatting about books… Who knew?!
After I wrote about My Favorite Fantasy, I bought the latest book by my favorite fantasy author, Raymond E. Feist. King of Ashes is yet another book one. Not only that, but the second book doesn’t hit shelves until the middle of next year. Maybe I should have thought longer about that purchase decision, because now I have to ponder what comes next in the Firemane Saga for months before I get to find out. It was nice to be reminded that people are still writing fantasy though. Sometimes I forget that all fantasy books are not dusty old classics. The genre is alive and well.
Rage Becomes Her
This book needs a post of its own, and I plan to give it one in the future, but for now, READ THIS BOOK! Rage Becomes Her: The Power of Women’s Anger is a book with the power to change lives, our culture, (dare I say it?…) THE WORLD. I learned about this book when the author was the keynote speaker at a women’s conference I attended. Last year I went to the same conference. I had a great time. I networked, and I learned some new stuff. My only complaint was that a lot of what I heard was the same old stuff that I have heard as a woman in corporate America for the past 19 years… just realized this week is literally the 19 year anniversary of my first corporate gig, wow… Anyway, for the past 19 years I have been told that a.) I should feel like I can bring my 100% authentic self to work and add value with my unique perspective while (simultaneously) being told b.) if I want to be more successful there are just a *few* things I need to change about myself. If you are a woman reading this, you already know what I’m going to say: Be. More. Assertive. Right? But don’t forget not to be pushy, mm’kay? And I don’t need to explain to women how it feels to walk an actual, literal, albeit virtual tightrope each day. Because as women, we all know that we’re not doing enough and the answer to patriarchal oppression is for women to DO MORE. Oh yeah, are you with me? Because you might have guessed that after 19 years of this… I. Am. Angry.
I think this is a good article that can ease you into the idea that it’s not women who need to change to fit into the world. The world should be a safe space for women and men alike. If you don’t understand that women spend a lot of time not feeling safe, well, then… you are either a man or you have internalized misogyny to a degree that you probably voted against yourself (and the rest of us) in the last election. I don’t have the certifications necessary to fix you, but you should get some help.
After leaving the feedback “stop telling women to act like men” after last year’s conference, I was very happy with the fresh content this year. We talked about white privilege, we talked about vulnerability and trust, and to wrap up the conference, we talked about anger. Women’s plate-throwing anger. Soraya Chemaly gets me. She gets US. This book is a thing of beauty and a national treasure. It won’t make you angry. It will remind you that you already are. I need more of my friends to read this so we can talk about it. I need to read it repeatedly and take notes. I think I need to go and edit my post about Life-Changing Books to include this one. Much more on this book later.
 In Conclusion
I’m still reading. I want to talk about books and blog about books. Right now, it’s hard to sit down at the computer and organize my thoughts. I’m in a weird in-between state. I’m trying to appreciate the in-betweenness. It’s like a crepuscular period in my life. It would be inauthentic to say I’m not struggling. Quite frankly, I feel like I could stay in bed for two weeks and not feel rested. And I don’t get to do that. I’m literally getting ready to post this from the road while on the run. First law of motion, right? Keep moving. That’s the only way to get to the other side. If you’re going through Hell, keep going. Keep reading. Keep blogging. Keep doing your best until it gets better.
from WordPress https://ift.tt/2XQyfxD via IFTTT
0 notes
thanhtuandoan89 · 4 years
Text
What Is BERT? - Whiteboard Friday
Posted by BritneyMuller
There's a lot of hype and misinformation about the new Google algorithm update. What actually is BERT, how does it work, and why does it matter to our work as SEOs? Join our own machine learning and natural language processing expert Britney Muller as she breaks down exactly what BERT is and what it means for the search industry.
Click on the whiteboard image above to open a high-resolution version in a new tab!
Video Transcription
Hey, Moz fans. Welcome to another edition of Whiteboard Friday. Today we are talking about all things BERT and I'm super excited to attempt to really break this down for everyone. I don't claim to be a BERT expert. I have just done lots and lots of research. I've been able to interview some experts in the field and my goal is to try to be a catalyst for this information to be a little bit easier to understand. 
There is a ton of commotion going on right now in the industry about how you can't optimize for BERT. While that is absolutely true (you cannot; you just need to be writing really good content for your users), I still think many of us got into this space because we are curious by nature. If you are curious to learn a little bit more about BERT and be able to explain it a little bit better to clients or have better conversations around the context of BERT, then I hope you enjoy this video. If not, and this isn't for you, that's fine too.
Word of caution: Don't over-hype BERT!
I’m so excited to jump right in. The first thing I do want to mention is I was able to sit down with Allyson Ettinger, who is a Natural Language Processing researcher. She is a professor at the University of Chicago. When I got to speak with her, the main takeaway was that it's very, very important to not over-hype BERT. There is a lot of commotion going on right now, but BERT is still far away from understanding language and context in the same way that we humans can understand it. So I think it's important to keep in mind that we should not overemphasize what this model can do, but it's still really exciting, and it's a pretty monumental moment in NLP and machine learning. Without further ado, let's jump right in.
Where did BERT come from?
I wanted to give everyone a wider context for where BERT came from and where it's going. I think a lot of the time these announcements are kind of bombs dropped on the industry; each one is essentially a single still frame from a movie, and we don't get the full before-and-after footage. We just get this one still frame. So we get this BERT announcement, but let's go back in time a little bit. 
Natural language processing
Traditionally, computers have had an impossible time understanding language. They can store text, we can enter text, but understanding language has always been incredibly difficult for computers. So along comes natural language processing (NLP), the field in which researchers were developing specific models to solve for various types of language understanding. A couple of examples are named entity recognition and classification; we also see sentiment analysis and question answering. All of these things have traditionally been solved by individual NLP models, so it looks a little bit like your kitchen. 
If you think about the individual models like utensils that you use in your kitchen, they all have a very specific task that they do very well. But then along came BERT, and it was sort of the be-all end-all of kitchen utensils. It was the one kitchen utensil that does ten-plus or eleven natural language processing solutions really, really well after it's fine-tuned. This is a really exciting differentiation in the space. That's why people got really excited about it, because no longer do they have all these one-off things. They can use BERT to solve for all of this stuff, which makes sense in that Google would incorporate it into their algorithm. Super, super exciting. 
Where is BERT going?
Where is this heading? Where is this going? Allyson had said, 
"I think we'll be heading on the same trajectory for a while building bigger and better variants of BERT that are stronger in the ways that BERT is strong and probably with the same fundamental limitations."
There are already tons of different versions of BERT out there and we are going to continue to see more and more of that. It will be interesting to see where this space is heading.
How did BERT get so smart?
How about we take a look at a very oversimplified view of how BERT got so smart? I find this stuff fascinating. It is quite amazing that Google was able to do this. Google took Wikipedia text and a lot of money for computational power: TPUs, which they put together in a V3 pod, a huge computer system that can power these models. And they used an unsupervised neural network. What's interesting about how it learns and how it gets smarter is that it takes any arbitrary length of text, which is good because language is quite arbitrary in the way that we speak and in the length of texts, and it transcribes it into a vector. It will take a length of text and encode it into a vector, which is a fixed string of numbers that helps translate it for the machine. This happens in a really wild, high-dimensional space that we can't even really imagine. But what it does is it puts context and different things within our language in the same areas together. Similar to Word2vec, it uses this trick called masking. 
So it will take different sentences that it's training on and it will mask a word. It uses this bi-directional model to look at the words before and after it to predict what the masked word is. It does this over and over and over again until it's extremely powerful. And then it can further be fine-tuned to do all of these natural language processing tasks. Really, really exciting and a fun time to be in this space.
In a nutshell, BERT is the first deeply bi-directional, unsupervised language representation, pre-trained on Wikipedia. "Deeply bi-directional" just means it's looking at the words before and after entities for context. So it's this really beautiful pre-trained model that can be used in all sorts of ways. 
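As an aside for anyone who wants to see the "text in, vector out" idea in practice, here is a minimal sketch using the open-source Hugging Face transformers library. That library is my choice for illustration; it is not mentioned in the video, and it is not what Google runs inside search.

```python
# pip install transformers torch
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

text = "Any arbitrary length of text can be encoded."
inputs = tokenizer(text, return_tensors="pt")  # words -> token ids the model understands

with torch.no_grad():
    outputs = model(**inputs)

# One 768-number vector per token; the [CLS] token (position 0) is often
# used as a rough summary vector for the whole sentence.
sentence_vector = outputs.last_hidden_state[0, 0]
print(sentence_vector.shape)  # torch.Size([768]) for bert-base
```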
What are some things BERT cannot do? 
Allyson Ettinger wrote this really great research paper called What BERT Can't Do. There is a Bitly link that you can use to go directly to that. The most surprising takeaway from her research was this area of negation diagnostics, meaning that BERT isn't very good at understanding negation. 
For example, when given "a robin is a…", it predicted "bird," which is right; that's great. But when given "a robin is not a…", it also predicted "bird." So in cases where BERT hasn't seen negation examples or context, it will still have a hard time understanding that. There are a ton more really interesting takeaways. I highly suggest you check that out; really good stuff.
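If you want to poke at that negation blind spot yourself, here is a quick, informal probe using the Hugging Face fill-mask pipeline. This is not Ettinger's actual diagnostic suite, and the exact predictions and scores will vary with the model and library version.

```python
# pip install transformers torch
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prompt in ["A robin is a [MASK].", "A robin is not a [MASK]."]:
    best = unmasker(prompt)[0]  # top-ranked prediction for the masked word
    print(f"{prompt} -> {best['token_str']} (score {best['score']:.3f})")
```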
How do you optimize for BERT? (You can't!)
Finally, how do you optimize for BERT? Again, you can't. The only way to improve your website with this update is to write really great content for your users and fulfill the intent that they are seeking. But one thing I just have to mention, because I honestly cannot get it out of my head, is that there is a YouTube video (we will link to it), a keynote by Jeff Dean where he speaks about BERT and goes into natural questions and natural question understanding. The big takeaway for me was this example: okay, let's say someone asks the question, can you make and receive calls in airplane mode? There is a block of text that Google's natural language translation layer is trying to understand. It's a ton of words, and it's very technical and hard to understand.
With these layers, leveraging things like BERT, they were able to just answer no out of all of this very complex, long, confusing language. It's really, really powerful in our space. Consider things like featured snippets; consider things like just general SERP features. I mean, this can start to have a huge impact in our space. So I think it's important to sort of have a pulse on where it's all heading and what's going on in this field. 
I really hope you enjoyed this version of Whiteboard Friday. Please let me know if you have any questions or comments down below and I look forward to seeing you all again next time. Thanks so much.
Video transcription by Speechpad.com
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
drummcarpentry · 4 years
Text
What Is BERT? - Whiteboard Friday
Posted by BritneyMuller
There's a lot of hype and misinformation about the new Google algorithm update. What actually is BERT, how does it work, and why does it matter to our work as SEOs? Join our own machine learning and natural language processing expert Britney Muller as she breaks down exactly what BERT is and what it means for the search industry.
Click on the whiteboard image above to open a high-resolution version in a new tab!
Video Transcription
Hey, Moz fans. Welcome to another edition of Whiteboard Friday. Today we are talking about all things BERT and I'm super excited to attempt to really break this down for everyone. I don't claim to be a BERT expert. I have just done lots and lots of research. I've been able to interview some experts in the field and my goal is to try to be a catalyst for this information to be a little bit easier to understand. 
There is a ton of commotion going on right now in the industry about you can't optimize for BERT. While that is absolutely true, you cannot, you just need to be writing really good content for your users, I still think many of us got into this space because we are curious by nature. If you are curious to learn a little bit more about BERT and be able to explain it a little bit better to clients or have better conversations around the context of BERT, then I hope you enjoy this video. If not, and this isn't for you, that's fine too.
Word of caution: Don't over-hype BERT!
I’m so excited to jump right in. The first thing I do want to mention is I was able to sit down with Allyson Ettinger, who is a Natural Language Processing researcher. She is a professor at the University of Chicago. When I got to speak with her, the main takeaway was that it's very, very important to not over-hype BERT. There is a lot of commotion going on right now, but it's still far away from understanding language and context in the same way that we humans can understand it. So I think that's important to keep in mind that we are not overemphasizing what this model can do, but it's still really exciting and it's a pretty monumental moment in NLP and machine learning. Without further ado, let's jump right in.
Where did BERT come from?
I wanted to give everyone a wider context to where BERT came from and where it's going. I think a lot of times these announcements are kind of bombs dropped on the industry and it's essentially a still frame in a series of a movie and we don't get the full before and after movie bits. We just get this one still frame. So we get this BERT announcement, but let's go back in time a little bit. 
Natural language processing
Traditionally computers have had an impossible time understanding language. They can store text, we can enter text, but understanding language has always been incredibly difficult for computers. So along comes natural language processing (NLP), the field in which researchers were developing specific models to solve for various types of language understanding. A couple of examples are named entity recognition, classification. We see sentiment, question answering. All of these things have traditionally been sold by individual NLP models and so it looks a little bit like your kitchen. 
If you think about the individual models like utensils that you use in your kitchen, they all have a very specific task that they do very well. But when along came BERT, it was sort of the be-all end-all of kitchen utensils. It was the one kitchen utensil that does ten-plus or eleven natural language processing solutions really, really well after it's fine tuned. This is a really exciting differentiation in the space. That's why people got really excited about it, because no longer do they have all these one-off things. They can use BERT to solve for all of this stuff, which makes sense in that Google would incorporate it into their algorithm. Super, super exciting. 
Where is BERT going?
Where is this heading? Where is this going? Allyson had said, 
"I think we'll be heading on the same trajectory for a while building bigger and better variants of BERT that are stronger in the ways that BERT is strong and probably with the same fundamental limitations."
There are already tons of different versions of BERT out there and we are going to continue to see more and more of that. It will be interesting to see where this space is heading.
How did BERT get so smart?
How about we take a look at a very oversimplified view of how BERT got so smart? I find this stuff fascinating. It is quite amazing that Google was able to do this. Google took Wikipedia text and a lot of money for computational power TPUs in which they put together in a V3 pod, so huge computer system that can power these models. And they used an unsupervised neural network. What's interesting about how it learns and how it gets smarter is it takes any arbitrary length of text, which is good because language is quite arbitrary in the way that we speak, in the length of texts, and it transcribes it into a vector. It will take a length of text and code it into a vector, which is a fixed string of numbers to help sort of translate it to the machine. This happens in a really wild and dimensional space that we can't even really imagine. But what it does is it puts context and different things within our language in the same areas together. Similar to Word2vec, it uses this trick called masking. 
So it will take different sentences that it's training on and it will mask a word. It uses this bi-directional model to look at the words before and after it to predict what the masked word is. It does this over and over and over again until it's extremely powerful. And then it can further be fine-tuned to do all of these natural language processing tasks. Really, really exciting and a fun time to be in this space.
In a nutshell, BERT is the first deeply bi-directional. All that means is it's just looking at the words before and after entities and context, unsupervised language representation, pre-trained on Wikipedia. So it's this really beautiful pre-trained model that can be used in all sorts of ways. 
What are some things BERT cannot do? 
Allyson Ettinger wrote this really great research paper called What BERT Can't Do. There is a Bitly link that you can use to go directly to that. The most surprising takeaway from her research was this area of negation diagnostics, meaning that BERT isn't very good at understanding negation. 
For example, when inputted with a Robin is a… It predicted bird, which is right, that's great. But when entered a Robin is not a… It also predicted bird. So in cases where BERT hasn't seen negation examples or context, it will still have a hard time understanding that. There are a ton more really interesting takeaways. I highly suggest you check that out, really good stuff.
How do you optimize for BERT? (You can't!)
Finally, how do you optimize for BERT? Again, you can't. The only way to improve your website with this update is to write really great content for your users and fulfill the intent that they are seeking. And so you can't, but one thing I just have to mention because I honestly cannot get this out of my head, is there is a YouTube video where Jeff Dean, we will link to it, it's a keynote by Jeff Dean where he speaking about BERT and he goes into natural questions and natural question understanding. The big takeaway for me was this example around, okay, let's say someone asked the question, can you make and receive calls in airplane mode? The block of text in which Google's natural language translation layer is trying to understand all this text. It's a ton of words. It's kind of very technical, hard to understand.
With these layers, leveraging things like BERT, they were able to just answer no out of all of this very complex, long, confusing language. It's really, really powerful in our space. Consider things like featured snippets; consider things like just general SERP features. I mean, this can start to have a huge impact in our space. So I think it's important to sort of have a pulse on where it's all heading and what's going on in this field. 
I really hope you enjoyed this version of Whiteboard Friday. Please let me know if you have any questions or comments down below and I look forward to seeing you all again next time. Thanks so much.
Video transcription by Speechpad.com
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
gamebazu · 4 years
Text
What Is BERT? - Whiteboard Friday
Posted by BritneyMuller
There's a lot of hype and misinformation about the new Google algorithm update. What actually is BERT, how does it work, and why does it matter to our work as SEOs? Join our own machine learning and natural language processing expert Britney Muller as she breaks down exactly what BERT is and what it means for the search industry.
Click on the whiteboard image above to open a high-resolution version in a new tab!
Video Transcription
Hey, Moz fans. Welcome to another edition of Whiteboard Friday. Today we are talking about all things BERT and I'm super excited to attempt to really break this down for everyone. I don't claim to be a BERT expert. I have just done lots and lots of research. I've been able to interview some experts in the field and my goal is to try to be a catalyst for this information to be a little bit easier to understand. 
There is a ton of commotion going on right now in the industry about you can't optimize for BERT. While that is absolutely true, you cannot, you just need to be writing really good content for your users, I still think many of us got into this space because we are curious by nature. If you are curious to learn a little bit more about BERT and be able to explain it a little bit better to clients or have better conversations around the context of BERT, then I hope you enjoy this video. If not, and this isn't for you, that's fine too.
Word of caution: Don't over-hype BERT!
I’m so excited to jump right in. The first thing I do want to mention is I was able to sit down with Allyson Ettinger, who is a Natural Language Processing researcher. She is a professor at the University of Chicago. When I got to speak with her, the main takeaway was that it's very, very important to not over-hype BERT. There is a lot of commotion going on right now, but it's still far away from understanding language and context in the same way that we humans can understand it. So I think that's important to keep in mind that we are not overemphasizing what this model can do, but it's still really exciting and it's a pretty monumental moment in NLP and machine learning. Without further ado, let's jump right in.
Where did BERT come from?
I wanted to give everyone a wider context to where BERT came from and where it's going. I think a lot of times these announcements are kind of bombs dropped on the industry and it's essentially a still frame in a series of a movie and we don't get the full before and after movie bits. We just get this one still frame. So we get this BERT announcement, but let's go back in time a little bit. 
Natural language processing
Traditionally computers have had an impossible time understanding language. They can store text, we can enter text, but understanding language has always been incredibly difficult for computers. So along comes natural language processing (NLP), the field in which researchers were developing specific models to solve for various types of language understanding. A couple of examples are named entity recognition, classification. We see sentiment, question answering. All of these things have traditionally been sold by individual NLP models and so it looks a little bit like your kitchen. 
If you think about the individual models like utensils that you use in your kitchen, they all have a very specific task that they do very well. But when along came BERT, it was sort of the be-all end-all of kitchen utensils. It was the one kitchen utensil that does ten-plus or eleven natural language processing solutions really, really well after it's fine tuned. This is a really exciting differentiation in the space. That's why people got really excited about it, because no longer do they have all these one-off things. They can use BERT to solve for all of this stuff, which makes sense in that Google would incorporate it into their algorithm. Super, super exciting. 
Where is BERT going?
Where is this heading? Where is this going? Allyson had said, 
"I think we'll be heading on the same trajectory for a while building bigger and better variants of BERT that are stronger in the ways that BERT is strong and probably with the same fundamental limitations."
There are already tons of different versions of BERT out there and we are going to continue to see more and more of that. It will be interesting to see where this space is heading.
How did BERT get so smart?
How about we take a look at a very oversimplified view of how BERT got so smart? I find this stuff fascinating. It is quite amazing that Google was able to do this. Google took Wikipedia text and a lot of money for computational power TPUs in which they put together in a V3 pod, so huge computer system that can power these models. And they used an unsupervised neural network. What's interesting about how it learns and how it gets smarter is it takes any arbitrary length of text, which is good because language is quite arbitrary in the way that we speak, in the length of texts, and it transcribes it into a vector. It will take a length of text and code it into a vector, which is a fixed string of numbers to help sort of translate it to the machine. This happens in a really wild and dimensional space that we can't even really imagine. But what it does is it puts context and different things within our language in the same areas together. Similar to Word2vec, it uses this trick called masking. 
So it will take different sentences that it's training on and it will mask a word. It uses this bi-directional model to look at the words before and after it to predict what the masked word is. It does this over and over and over again until it's extremely powerful. And then it can further be fine-tuned to do all of these natural language processing tasks. Really, really exciting and a fun time to be in this space.
In a nutshell, BERT is the first deeply bi-directional. All that means is it's just looking at the words before and after entities and context, unsupervised language representation, pre-trained on Wikipedia. So it's this really beautiful pre-trained model that can be used in all sorts of ways. 
What are some things BERT cannot do? 
Allyson Ettinger wrote this really great research paper called What BERT Can't Do. There is a Bitly link that you can use to go directly to that. The most surprising takeaway from her research was this area of negation diagnostics, meaning that BERT isn't very good at understanding negation. 
For example, when inputted with a Robin is a… It predicted bird, which is right, that's great. But when entered a Robin is not a… It also predicted bird. So in cases where BERT hasn't seen negation examples or context, it will still have a hard time understanding that. There are a ton more really interesting takeaways. I highly suggest you check that out, really good stuff.
How do you optimize for BERT? (You can't!)
Finally, how do you optimize for BERT? Again, you can't. The only way to improve your website with this update is to write really great content for your users and fulfill the intent that they are seeking. And so you can't, but one thing I just have to mention because I honestly cannot get this out of my head, is there is a YouTube video where Jeff Dean, we will link to it, it's a keynote by Jeff Dean where he speaking about BERT and he goes into natural questions and natural question understanding. The big takeaway for me was this example around, okay, let's say someone asked the question, can you make and receive calls in airplane mode? The block of text in which Google's natural language translation layer is trying to understand all this text. It's a ton of words. It's kind of very technical, hard to understand.
With these layers, leveraging things like BERT, they were able to just answer no out of all of this very complex, long, confusing language. It's really, really powerful in our space. Consider things like featured snippets; consider things like just general SERP features. I mean, this can start to have a huge impact in our space. So I think it's important to sort of have a pulse on where it's all heading and what's going on in this field. 
I really hope you enjoyed this version of Whiteboard Friday. Please let me know if you have any questions or comments down below and I look forward to seeing you all again next time. Thanks so much.
Video transcription by Speechpad.com
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
isearchgoood · 4 years
Text
What Is BERT? - Whiteboard Friday
Posted by BritneyMuller
There's a lot of hype and misinformation about the new Google algorithm update. What actually is BERT, how does it work, and why does it matter to our work as SEOs? Join our own machine learning and natural language processing expert Britney Muller as she breaks down exactly what BERT is and what it means for the search industry.
Click on the whiteboard image above to open a high-resolution version in a new tab!
Video Transcription
Hey, Moz fans. Welcome to another edition of Whiteboard Friday. Today we are talking about all things BERT and I'm super excited to attempt to really break this down for everyone. I don't claim to be a BERT expert. I have just done lots and lots of research. I've been able to interview some experts in the field and my goal is to try to be a catalyst for this information to be a little bit easier to understand. 
There is a ton of commotion going on right now in the industry about you can't optimize for BERT. While that is absolutely true, you cannot, you just need to be writing really good content for your users, I still think many of us got into this space because we are curious by nature. If you are curious to learn a little bit more about BERT and be able to explain it a little bit better to clients or have better conversations around the context of BERT, then I hope you enjoy this video. If not, and this isn't for you, that's fine too.
Word of caution: Don't over-hype BERT!
I’m so excited to jump right in. The first thing I do want to mention is I was able to sit down with Allyson Ettinger, who is a Natural Language Processing researcher. She is a professor at the University of Chicago. When I got to speak with her, the main takeaway was that it's very, very important to not over-hype BERT. There is a lot of commotion going on right now, but it's still far away from understanding language and context in the same way that we humans can understand it. So I think that's important to keep in mind that we are not overemphasizing what this model can do, but it's still really exciting and it's a pretty monumental moment in NLP and machine learning. Without further ado, let's jump right in.
Where did BERT come from?
I wanted to give everyone a wider context to where BERT came from and where it's going. I think a lot of times these announcements are kind of bombs dropped on the industry and it's essentially a still frame in a series of a movie and we don't get the full before and after movie bits. We just get this one still frame. So we get this BERT announcement, but let's go back in time a little bit. 
Natural language processing
Traditionally computers have had an impossible time understanding language. They can store text, we can enter text, but understanding language has always been incredibly difficult for computers. So along comes natural language processing (NLP), the field in which researchers were developing specific models to solve for various types of language understanding. A couple of examples are named entity recognition, classification. We see sentiment, question answering. All of these things have traditionally been sold by individual NLP models and so it looks a little bit like your kitchen. 
If you think about the individual models like utensils that you use in your kitchen, they all have a very specific task that they do very well. But when along came BERT, it was sort of the be-all end-all of kitchen utensils. It was the one kitchen utensil that does ten-plus or eleven natural language processing solutions really, really well after it's fine tuned. This is a really exciting differentiation in the space. That's why people got really excited about it, because no longer do they have all these one-off things. They can use BERT to solve for all of this stuff, which makes sense in that Google would incorporate it into their algorithm. Super, super exciting. 
Where is BERT going?
Where is this heading? Where is this going? Allyson had said, 
"I think we'll be heading on the same trajectory for a while building bigger and better variants of BERT that are stronger in the ways that BERT is strong and probably with the same fundamental limitations."
There are already tons of different versions of BERT out there and we are going to continue to see more and more of that. It will be interesting to see where this space is heading.
How did BERT get so smart?
How about we take a look at a very oversimplified view of how BERT got so smart? I find this stuff fascinating. It is quite amazing that Google was able to do this. Google took Wikipedia text and a lot of money for computational power TPUs in which they put together in a V3 pod, so huge computer system that can power these models. And they used an unsupervised neural network. What's interesting about how it learns and how it gets smarter is it takes any arbitrary length of text, which is good because language is quite arbitrary in the way that we speak, in the length of texts, and it transcribes it into a vector. It will take a length of text and code it into a vector, which is a fixed string of numbers to help sort of translate it to the machine. This happens in a really wild and dimensional space that we can't even really imagine. But what it does is it puts context and different things within our language in the same areas together. Similar to Word2vec, it uses this trick called masking. 
So it will take different sentences that it's training on and it will mask a word. It uses this bi-directional model to look at the words before and after it to predict what the masked word is. It does this over and over and over again until it's extremely powerful. And then it can further be fine-tuned to do all of these natural language processing tasks. Really, really exciting and a fun time to be in this space.
In a nutshell, BERT is the first deeply bi-directional. All that means is it's just looking at the words before and after entities and context, unsupervised language representation, pre-trained on Wikipedia. So it's this really beautiful pre-trained model that can be used in all sorts of ways. 
What are some things BERT cannot do? 
Allyson Ettinger wrote this really great research paper called What BERT Can't Do. There is a Bitly link that you can use to go directly to that. The most surprising takeaway from her research was this area of negation diagnostics, meaning that BERT isn't very good at understanding negation. 
For example, when inputted with a Robin is a… It predicted bird, which is right, that's great. But when entered a Robin is not a… It also predicted bird. So in cases where BERT hasn't seen negation examples or context, it will still have a hard time understanding that. There are a ton more really interesting takeaways. I highly suggest you check that out, really good stuff.
How do you optimize for BERT? (You can't!)
Finally, how do you optimize for BERT? Again, you can't. The only way to improve your website with this update is to write really great content for your users and fulfill the intent that they are seeking. And so you can't, but one thing I just have to mention because I honestly cannot get this out of my head, is there is a YouTube video where Jeff Dean, we will link to it, it's a keynote by Jeff Dean where he speaking about BERT and he goes into natural questions and natural question understanding. The big takeaway for me was this example around, okay, let's say someone asked the question, can you make and receive calls in airplane mode? The block of text in which Google's natural language translation layer is trying to understand all this text. It's a ton of words. It's kind of very technical, hard to understand.
With these layers, leveraging things like BERT, they were able to just answer no out of all of this very complex, long, confusing language. It's really, really powerful in our space. Consider things like featured snippets; consider things like just general SERP features. I mean, this can start to have a huge impact in our space. So I think it's important to sort of have a pulse on where it's all heading and what's going on in this field. 
I really hope you enjoyed this version of Whiteboard Friday. Please let me know if you have any questions or comments down below and I look forward to seeing you all again next time. Thanks so much.
Video transcription by Speechpad.com
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
via Blogger https://ift.tt/32vhQiG #blogger #bloggingtips #bloggerlife #bloggersgetsocial #ontheblog #writersofinstagram #writingprompt #instapoetry #writerscommunity #writersofig #writersblock #writerlife #writtenword #instawriters #spilledink #wordgasm #creativewriting #poetsofinstagram #blackoutpoetry #poetsofig
0 notes
bfxenon · 4 years
Text
What Is BERT? - Whiteboard Friday
Posted by BritneyMuller
There's a lot of hype and misinformation about the new Google algorithm update. What actually is BERT, how does it work, and why does it matter to our work as SEOs? Join our own machine learning and natural language processing expert Britney Muller as she breaks down exactly what BERT is and what it means for the search industry.
Click on the whiteboard image above to open a high-resolution version in a new tab!
Video Transcription
Hey, Moz fans. Welcome to another edition of Whiteboard Friday. Today we are talking about all things BERT and I'm super excited to attempt to really break this down for everyone. I don't claim to be a BERT expert. I have just done lots and lots of research. I've been able to interview some experts in the field and my goal is to try to be a catalyst for this information to be a little bit easier to understand. 
There is a ton of commotion going on right now in the industry about you can't optimize for BERT. While that is absolutely true, you cannot, you just need to be writing really good content for your users, I still think many of us got into this space because we are curious by nature. If you are curious to learn a little bit more about BERT and be able to explain it a little bit better to clients or have better conversations around the context of BERT, then I hope you enjoy this video. If not, and this isn't for you, that's fine too.
Word of caution: Don't over-hype BERT!
I’m so excited to jump right in. The first thing I do want to mention is I was able to sit down with Allyson Ettinger, who is a Natural Language Processing researcher. She is a professor at the University of Chicago. When I got to speak with her, the main takeaway was that it's very, very important to not over-hype BERT. There is a lot of commotion going on right now, but it's still far away from understanding language and context in the same way that we humans can understand it. So I think that's important to keep in mind that we are not overemphasizing what this model can do, but it's still really exciting and it's a pretty monumental moment in NLP and machine learning. Without further ado, let's jump right in.
Where did BERT come from?
I wanted to give everyone a wider context to where BERT came from and where it's going. I think a lot of times these announcements are kind of bombs dropped on the industry and it's essentially a still frame in a series of a movie and we don't get the full before and after movie bits. We just get this one still frame. So we get this BERT announcement, but let's go back in time a little bit. 
Natural language processing
Traditionally computers have had an impossible time understanding language. They can store text, we can enter text, but understanding language has always been incredibly difficult for computers. So along comes natural language processing (NLP), the field in which researchers were developing specific models to solve for various types of language understanding. A couple of examples are named entity recognition, classification. We see sentiment, question answering. All of these things have traditionally been sold by individual NLP models and so it looks a little bit like your kitchen. 
If you think about the individual models like utensils that you use in your kitchen, they all have a very specific task that they do very well. But when along came BERT, it was sort of the be-all end-all of kitchen utensils. It was the one kitchen utensil that does ten-plus or eleven natural language processing solutions really, really well after it's fine tuned. This is a really exciting differentiation in the space. That's why people got really excited about it, because no longer do they have all these one-off things. They can use BERT to solve for all of this stuff, which makes sense in that Google would incorporate it into their algorithm. Super, super exciting. 
Where is BERT going?
Where is this heading? Where is this going? Allyson had said, 
"I think we'll be heading on the same trajectory for a while building bigger and better variants of BERT that are stronger in the ways that BERT is strong and probably with the same fundamental limitations."
There are already tons of different versions of BERT out there and we are going to continue to see more and more of that. It will be interesting to see where this space is heading.
How did BERT get so smart?
How about we take a look at a very oversimplified view of how BERT got so smart? I find this stuff fascinating. It is quite amazing that Google was able to do this. Google took Wikipedia text and a lot of money for computational power TPUs in which they put together in a V3 pod, so huge computer system that can power these models. And they used an unsupervised neural network. What's interesting about how it learns and how it gets smarter is it takes any arbitrary length of text, which is good because language is quite arbitrary in the way that we speak, in the length of texts, and it transcribes it into a vector. It will take a length of text and code it into a vector, which is a fixed string of numbers to help sort of translate it to the machine. This happens in a really wild and dimensional space that we can't even really imagine. But what it does is it puts context and different things within our language in the same areas together. Similar to Word2vec, it uses this trick called masking. 
So it will take different sentences that it's training on and it will mask a word. It uses this bi-directional model to look at the words before and after it to predict what the masked word is. It does this over and over and over again until it's extremely powerful. And then it can further be fine-tuned to do all of these natural language processing tasks. Really, really exciting and a fun time to be in this space.
In a nutshell, BERT is the first deeply bi-directional. All that means is it's just looking at the words before and after entities and context, unsupervised language representation, pre-trained on Wikipedia. So it's this really beautiful pre-trained model that can be used in all sorts of ways. 
What are some things BERT cannot do? 
Allyson Ettinger wrote this really great research paper called What BERT Can't Do. There is a Bitly link that you can use to go directly to that. The most surprising takeaway from her research was this area of negation diagnostics, meaning that BERT isn't very good at understanding negation. 
For example, when inputted with a Robin is a… It predicted bird, which is right, that's great. But when entered a Robin is not a… It also predicted bird. So in cases where BERT hasn't seen negation examples or context, it will still have a hard time understanding that. There are a ton more really interesting takeaways. I highly suggest you check that out, really good stuff.
How do you optimize for BERT? (You can't!)
Finally, how do you optimize for BERT? Again, you can't. The only way to improve your website with this update is to write really great content for your users and fulfill the intent that they are seeking. And so you can't, but one thing I just have to mention because I honestly cannot get this out of my head, is there is a YouTube video where Jeff Dean, we will link to it, it's a keynote by Jeff Dean where he speaking about BERT and he goes into natural questions and natural question understanding. The big takeaway for me was this example around, okay, let's say someone asked the question, can you make and receive calls in airplane mode? The block of text in which Google's natural language translation layer is trying to understand all this text. It's a ton of words. It's kind of very technical, hard to understand.
With these layers, leveraging things like BERT, they were able to just answer no out of all of this very complex, long, confusing language. It's really, really powerful in our space. Consider things like featured snippets; consider things like just general SERP features. I mean, this can start to have a huge impact in our space. So I think it's important to sort of have a pulse on where it's all heading and what's going on in this field. 
I really hope you enjoyed this version of Whiteboard Friday. Please let me know if you have any questions or comments down below and I look forward to seeing you all again next time. Thanks so much.
Video transcription by Speechpad.com
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!