#will do everything in my power to ensure my next job is in retail rather than food service :-/ its sort of beyond my control but god i neve
steelycunt · 2 years
JUST FINISHED MY LAST SHIFT AT MY JOB EVERRRRR MOODBOARD!!!!!!!! LOVE AND LIGHT TO EVERYBODY!!!
voltagesmutter · 4 years
Satan x MC - Public.
Pairing: Satan x MC (F).
Fandom: Obey Me.
Prompt:  Asphyxiation || Orgasm Denial || Public
Warning: Public sex, mutual pining, vaginal sex, hand-job, fingering, talks of masturbation, angry Satan.
Day 2 of Kinktober event/prompt list.
Amazing thank you to @crystal13unny for being my beta 💛
Mid-terms were looming and overstressed didn’t come close to explaining how everyone felt. Whilst life was normally chaotic amongst demons, this was beyond words due to how dysfunctional life had become. Mammon had been “retail therapy” shopping so much that Goldie broke, Asmo repainted his nails at least four times a day, while Leviathan's beloved console was becoming dusty with neglect. All because Lucifer was bringing forth tenfold his normal strictness to ensure revisions happened, causing tension to run through the household.
Beel was stress eating, which meant consuming triple what he normally ate, hardly leaving anything in the fridge. To say Lucifer lost his shit would be an understatement. He was livid, causing sparks of arguments to fly across the House of Lamentation. Of course, as soon as he started, Satan retaliated as he always did.
“Will you be considerate for just once and not eat us out of food!” Slamming the fridge shut, his red eyes pulsing with anger, “Shouldn’t you be studying rather than indulging in your gluttonous ‘habits’,”. It was asked as a question but meant clearly as a statement.
“Lay off him Lucifer! You know how he gets around exams,” The blonde rolled his eyes at Lucifer's comment, the others around the table all going silent, eyes falling upon him before darting back to Lucifer for his response.
“Don’t you start getting involved Satan, keep your nose out where it doesn’t belong,”.
“Doesn’t belong? You're openly yelling at Beel for something he can’t help, just back off Lucifer, I am sick of you breathing down everyone’s necks!” The sound of Satan's fist slamming down on the oak table rang clearly through the air. The tension was thick in the room; one wrong move and it would set alight, causing a blaze to rupture in the household. All it took was a tut and a snide remark from Lucifer before Satan dashed across the room and pinned him against the kitchen counter, fists balled up in a firm grasp of the material of the eldest's shirt.
“Another word Lucifer and I swear,” Satan hissed, his normal soft demeanour now lost in a sea of wrath. Green eyes normally the colour of soft apple candy now shone dark with tinges of yellow, the ice-cold anger behind them enough to make any human freeze and shiver with fear. Lucifer was no human and went unfazed at the younger's outburst, simply pushing him off with a, “If you cared this much about your midterms rather than your dopey siblings you could actually pass them with a decent grade,”.
It took the strength of Leviathan and Mammon, who quickly ran from the table to grip Satan, to stop him from swinging at Lucifer. A simple tut, a wave of his hand and a “childish,” was all he left them with before exiting.
“I’ll kill him, I’ll actually fucking kill him,” Satan screamed, chest heaving up and down as the others tried to calm him down. 
“Who we killing?” A young girl came strolling into the kitchen, yawning as she did, stretching her arms above her head causing her shirt to slide up and expose her stomach slightly. 
“Morning cutie!” Asmo rushed to her side, wrapping an arm around her waist and pulling her with him back to the table. 
“Lucifer,” Belphegor huffed, head resting upon the table with his eyes shut.
“Oh I’m in,” Reaching for a slice of toast only to put it back when she saw a scorpion tail sticking out of the crust; demon food was just not her thing. “He’s been on my case for the last few weeks, I’ve hardly had any time to myself because of the schedule he’s made for me,” her eyes fell to Satan who was still heaving in the corner, before swapping a confused expression with Levi. Levi touched the tip of his nose twice, a signal between the pair to indicate an argument had broken out, or was about to.
“You okay there?” She asked, standing up and walking a few paces forward to lean against the counter, meeting Satan’s eyes.
“Yeah just old bossy arse ruining everything as usual,” He murmured, breath finally calm. 
“Don’t let him wind you up, you know he gets a kick out of seeing you so angry,” She leant over and gave his hand a gentle squeeze. A look was shared amongst the other brothers, and just as Mammon was about to protest the gesture, Leviathan stamped on his foot to keep him quiet.
There was something between the two; they were the best of friends, anyone and everyone could see that. The sexual tension that lingered between them was also apparent, slowly building up, waiting to overspill. While they loved each other as friends, certain feelings had grown between them. He found himself rather embarrassed at fisting his cock to the thought of her nearly every night, whilst she let his name slip between her lips when her vibrator brought her to orgasm. Neither knowing nor revealing how they felt for the sake of their friendship. As if too blind to recognise their feelings for each other.
“So what was he going off about this morning?” She asked the room, going into the fridge and opening a carton of juice.
“Beel’s eating,” Leviathan answered, watching as Beel was still moping in the corner of the room, kicking fallen debris on the floor with a huff.
“Oh Beelze! Are you stress eating again?” She asked, feeling sympathy for him; she did for all of them. Whilst they were demons, they had no control over their powers, each of them having their own individual quirks tied to their demonic power.
“Yeah-” sulked from the red-head in the corner, scuffing his shoes along the floor. 
“Oh honey I’m sorry,” She took a sip out of the juice she’d poured into a clean cup, “I mean, he cancelled my date for me the other day, deemed it not a suitable time given my studying,”.
The word ‘date’ made Satan’s ears prick up, he repeated the word out loud causing everyone to turn to him. 
“Hmm, with one of them cutie players from Beel’s team I do believe,” Asmo chimed in, causing a blush to grace the girl’s cheeks, “Muscular, tall, handsome, a little dim-witted but still!”.
A string of curses falls from Satan’s mouth, pushing his brothers off of him, a ‘this day keeps getting fucking better’ is all he says before storming out of the room. The slam of the door is enough to signal his departure.
“What's wrong with him?” She questioned, watching how the brothers all turned to each other before her slowly.
“Isn’t it obvious? He l-” Mammon rolls his eyes with a tut but is cut off by Asmo jumping in, “He’s just stressed sweetie, he’s probably gone up to the library, so why don’t you go check on him for us,”. Asmo doesn’t wait a second for her response, taking the cup out of her hand and putting it down, practically shoving her out of the room. “But Asmo, I-!” But her protests are cut short as he shuts the door behind her.
“You!” He hisses to Mammon who blinks in confusion.
“The fuck did I do? I can’t help that Satan is too much of a pussy to admit his feelings, The Great Mammon would never chicken out like that,”. 
“Shut up Mammon, you big idiot, let Satan tell her himself,” Levi added, the group of remaining brothers now sitting around the table. 
“You think she likes him back?” Beel asked, causing a snort from Asmo.
“Please! Have you seen the way she stares at him? I also know about her little nightly activities,” He sings in a sickly-sweet tone, wiggling his eyebrows.
“Asmo the fuck that’s gross. I didn’t need to know they were fucking, no wonder he was upset she had a date,” Belphegor sighed, throwing a book at him.
“Oh sweetie no, they’ve not done it yet, but the way she says his name at night, she definitely wants to,”.
“How do you even know that?”.
“I’m the avatar of lust sweetie,” Asmo rolls his eyes and flicks his wrist with annoyance at even having to state it, “Plus her room is next to mine and the other night there was definitely a ‘Satan’ slipped between those plump lips after a good ten minutes of buzzing,”.
“Stupid human, should be thinking of me instead of him,” Mammon pouted, crossing his arms in front of him as he sulked like a child.
“Let’s just hope the pair admit how they feel,” Levi stated, each of the brothers secretly praying for their brother.
-
Asmo’s hunch was right; the young girl found Satan angrily moping on the first floor of the library within the house. He sat on the floor with his back against the wooden bookcase, book in lap but little attention being paid to it.
“Hey,” She smiled, taking a few steps forward waiting for his response. He grunted, trying to bury himself in the book without looking up at her. 
“Can I sit?”.
“Free place isn’t it,” he responded, flicking the page as she sat beside him, outstretched legs touching.
They sat for a few moments in silence, Satan still pretending to study his book, until he finally broke it.
“So your date,” Finally closing the book and turning to her, “You never said anything,”.
“I,” Her face was blushing slightly, “I wasn’t really into him but I just, I felt too nice to say no,”. Her confession made them both laugh, finally breaking the tension between them. “I’m sorry I didn’t tell you, I didn’t know how you would react,”.
“Me? What does it matter what I think?”.
“You’re my best friend, your opinion matters the most,” She whispers, reaching between them and finding his hand, their fingers intertwining like they so often did, especially when they sat in bed watching movies on a Friday night.
It took a few minutes again before Satan spoke up, “I’m glad you didn’t go,”. 
“Oh?” He squeezed her hand. “I didn’t want to go anyway, I think he only asked because he overheard Beel teasing me about my lack of sex life,” she snorted a little with laughter, “I haven’t had sex since living in the human world. Do you know how long ago that was?” Her following words sent the demon’s cheeks a light pink, “I mean girls have needs too, but I think Lucifer kind of got a whiff of why he wanted to take me out and well…”.
He knew exactly how many months, to the exact day, it had been since she’d entered the House of Lamentation, the shining ball of light that was her weaving her way into all of their lives. How long it had been since he’d even felt the touch of another person, because the minute she walked into his life she lit a carnal desire which only she could quench.
“I mean it’s been the same for me,” He laughs.
“Yeah, why is that? I always see the girls throwing themselves at you, you don’t seem that fazed though?” She asked.
“I have my eye on someone else,” He rubbed his thumb over hers in a soft manner, “Someone important,”.
They shared a glance, the avatar of wrath melting into her eyes as if trying to convey the words he couldn't say out loud. Hers mirroring as she softly whispered his name.
“Satan, I-...,” She started, a waver in her voice, the feeling clearly mutual. Both of them leaning in until they were an eyelash apart, her eyes slowly closing as Satan swallowed with an anxious breath.
It was unclear who started it, but they kissed with a passion, tongues meeting in a sloppy exchange as he pulled her into his lap to straddle him. Both of them indulged in the feeling that this was right, that it was meant to be. Her hands fisted into his hair, both of them groaning as he held her waist whilst she subtly began to grind on him.
“Is this okay?” He groaned against her lips as he unbuttoned her shirt slowly, just enough to cup her breast beneath it, thumbs tugging down the material of her bra to let her perky breasts stand free. His fingers rolling her nipples in his hands causing her to grind harder against him, heat pooling inside her stomach as her panties grew wetter by the second. 
“Yes, fuck Satan, yes don’t stop,” She replied, inhaling sharply as he pressed kisses to her neck, a trail leading down until his mouth latched over her breast.
Neither of them wanted to stop, losing themselves in the embrace of one another. When his hand skirted up her thigh beneath her skirt to rub against the wet patch of her panties she moaned his name, causing him to still for a second before continuing, the breathless way she called it made his cock stand solid. A moan he’d imagined thousands of times in his head, but none of it compared to the real thing. 
She popped the buttons on his jeans, palming his erection beneath the fabric before tugging them down just enough with his boxers for his cock to spring free. Her eyes widened as she marvelled at it, delicately taking it in the palm of her hand, his leaking head oozing wetness onto her skin.
“L-Like that,” Satan encouraged her after a few strokes, his mind fogging over at her movements. He let out a low groan against her neck, almost forgetting his brothers were only a few doors down from them and that they risked any one of them walking in.
“You’re so wet,” His hand snaked beneath her panties to rub his index finger against her wet folds, her hips bucking into his touch.
“Only for you Satan,” She whimpered, back arching as he pushed a finger inside her tight, wet heat. A noise like no other left his lips as her walls gripped around his digit, pulling out to thrust back in until the grip around him loosened and he sped up his movements.
“Satan, I- more please,” Mouth formed in a delicate ‘o’ as two of his fingers worked inside her, knuckle deep and dripping with arousal. Her hand tightened around his cock, jerking it slowly as they both moaned into each other’s mouths. Both their eyes brimmed with lust, neither of them wanting to part from one another for even a second.
“I can’t, I haven’t got a condom,” He swallowed, he may have one in his room but the thought of stopping now to search for one was too much. Even if she came around his fingers whilst he came on her hand, it would at least be enough to satisfy the need until they could be protected.
“I’m on,” Pressing her lips to his, pulling his hand away from her as she lined herself over his cock, “Birth control”. 
“Are you sure- fuck,” He groaned so loud the bookcase behind him trembled, the warmth of her tight walls stretching over him sending his head flying backwards. It was unknown who said it, but a breathless ‘fuck’ was whispered as she lowered slowly over him. Satan pressed gentle kisses to her lips as she rose up before lowering back down, inching him further into her with each movement. She took a few moments to roll her hips slowly, Satan’s hands cupping her ass to help her down until he was fully hilted inside her.
“You’re perfect, so perfect,” Satan repeated over and over in a loving tone as her forehead pressed against his, slow grinding movements of her hips in tight circles.
Small rolls of her hips as his hands delicately ran from the curve of her ass to the smooth of her waist beneath her skirt, the sheer wetness dripping from her coating his balls. The pit of her stomach dropped, mouth opening as her eyes widened, the head of his cock rolling perfectly against her g-spot with each movement, causing her to rock directly against that spot. His lips moved from hers, to her neck, moving down slightly to capture a nipple in his mouth as she moaned louder. He had to bite back the chuckle inside him to think his brothers would surely hear them, but in this moment all he could focus on was the tight heat enveloping his cock in gyrating movements. That he was claiming her, that she was his human. Heat pulsed throughout her body, feeling no shame at how suddenly her orgasm was building.
“I’m gonna cum, Satan, I’m gonna-” Fingernails digging into the soft green of his sweater, lips parted with her head tossed back, cheeks dusted a heavy pink as the tension in her stomach rose higher and higher. Satan’s hands worked her hips, letting her move at her own leisure as she brought herself to the edge of climax, ready to be pushed off it. His vision was lost in her beauty, admiring each and every thing about her, how she was physically and mentally taking the air out of his lungs as her walls tightened over him. She came crying out his name, unable to hold back, an orgasm more intense than any she had experienced before. Drool pooled out the corner of her mouth, hands grasped onto him to keep her grounded as her vision flooded white, Satan’s hands pulling her down to be fully impaled by him as her walls flexed over his cock. Her sweating forehead pressed against his, rapid breaths falling from her mouth as she came down from her high, Satan groaning as her walls slowly stopped fluttering around him.
“Satan I lo-,” And just as she opened her eyes, they heard it from outside, Asmo screaming out.
“Lucifer don’t-!”. 
“I’m sorry,” Satan whispered, cupping her face as he pushed her off of him. The whimper she made broke his heart but he threw her behind the bookshelf, hissing at the loss of her warmth over him and the tightness of his jeans over his still solid cock as he tucked himself back in just in time for the door to burst open.
“Where is she!” Lucifer raged across the room in demon form, black onyx horns pulsing, and Satan swore he could see steam coming out of his nose. “Where the fuck is she, Satan!”.
“I don’t know what you mean, I’m simply revising as you told me to,” Satan peers up from the book he had tugged into his lap, although his sweating face, ruffled hair and panting breath were enough to signal otherwise. The heavy smell of sex dominated the aroma of the dusty library.
“Don’t play coy. I heard you down the corridor! You vile creature, tainting her like that, you should be ashamed Sata-” Lucifer barks but is stopped. Satan growled, anger controlling him as his demon form takes over, standing and pinning Lucifer against the wall as if he were merely as light as a book.
“I might not be good enough for her,” He hisses, tinged eyes pulsing with anger, “But I fucking love her, you have no right Lucifer, no right. Just because you can’t have her doesn’t mean you can control her,”. He gives a dark chuckle, smirking to see Lucifer's face fluster, “Yeah that’s what I thought, that's why you want us all to study, so you can keep her all to yourself but guess what Lucifer,”. Satan peered into his eyes before moving his hold to choke Lucifer, leaning to the side of him as he whispered in his ear, “It was my cock she was just coming over,” The whimper from Lucifer made Satan smug with pride, “So I suggest you leave, unless you want to watch me fuck her... Oh how glorious that would be seeing you wishing, no, lusting to be me as I drive my cock into her tight walls, for only me to ever do so. She’s my human Lucifer, mine,”. He lets out a cheshire grin before releasing his hold on Lucifer, watching him choke a few breaths before turning on his heels and leaving, his pride purely kicked to the curb. 
“Are you okay?” Satan turned back around, running to the bookcase where he had pushed the girl, watching her nod from behind where she was hiding. “I’m so sorry you had to see that,” He whispered, cradling her in his arms, the moments of darkness passing as fast as they came, her presence bringing out the best and the light from inside him.
“It’s okay, really I’m just glad he came in then and not when- you know, I guess we were a little loud,” She smiled with a light blush, her askew clothes now fixed, “Did you mean what you said… about Lucifer liking me…?”. 
“You can’t tell? That’s why he’s kept us all ‘studying’ to keep us away from you,” Satan tucking a strand of stray hair behind her ear.
“I didn’t realise, I mean- I didn’t realise you liked me either so,” A nervous laughter followed.
“All my brothers like you,” Cupping her face in his palm and stroking her lip with his thumb, “But I-... I love you,”. 
“I love you too Satan,” She whispered, wrapping her arms around his neck and pulling him down to kiss her. The kiss quickly deepened, the couple picking up the rhythm they’d had before being interrupted, until she pulled away. Satan was frightened he had scared her with his actions towards Lucifer.
“Do you-” Twirling her index finger along his chest, hiding her face in a bashful manner, “Do you think that… we could finish this in your room?”.
“Of course, we don’t have to continue if you're uncomfortable,” Pressing a kiss to her forehead, a softer side of Satan that only she would ever get to see.
“No no!” She quickly added, “It’s just- Lucifer's room is next door and I- well, you know, I feel we owe him a little revenge for the past few weeks… and as well,”. Her hand moved down to cup his still hard cock through his jeans, “You didn’t get to finish,”.
“You are perfect, you know that?” Satan laughed, cupping her face again and kissing her nose.
“Um as well,”.
“Yes?”.
“Do you think the whole wall pinning and choking… we could-maybe-try it?” Her face burned red as Satan felt his cock twitch, the horns on the side of his head that had subsided as his anger calmed now reappearing, but this time from lust, not anger.
“Fuck me you’re so fucking perfect,” He mumbled once more, picking her up in his arms with her legs around his waist as he kissed her deeply, carrying her from the library to his room. 
Kinktober masterlist here.
andytfish · 4 years
FREELANCE GUiDANCE: A 10 PART SERIES - #10 Make Excuses or Make Progress
We come to the end-- I hope the advice has given you some insight and guidance into the rigors of freelancing.
I'm not going to recap what I've already written; you can do that on your own. Rather, I'd like to take a look at the obstacles you might be facing in your drive to succeed; the biggest one might be YOU.
Are you the kind of person who often blames misfortune on someone else?  Are you the kind of person who feels the world is against them?  Are you the kind of person who looks at the success of others and feels a sense of jealousy or do you develop an attitude that they "just got the breaks"?
Successful people don't waste their time with blame.  They don't waste their time with jealousy.   Instead, they set goals, some realistic, some seemingly impossible, and they do everything in their power to achieve those goals.
Mistakes will happen.  Misfortunes will come.  Setbacks will occur.  As Richard Nixon said in his autobiography, only when we've been to the lowest valley can we truly appreciate the view from the highest mountain.
You either make excuses or you make progress but you can't do both.   It's like attempting to win the Indianapolis 500 with your foot on the brake.  Not going to happen, not a chance.
If you fall into the category of the excuse maker...STOP.
Today.  This minute.  Make a pledge and stick with it.   Change the outlook and you'll change the outcome.  Do you know what your goals are?  If not, spend a minute to think about them and then set course to achieve.   Here are a few basics to help you get started:
GOALS:  Write them down.  I am not one who believes in making a big deal about my birthday.   When asked my age I often have to stammer for a few minutes because it's not something I ever think about- why? 
BECAUSE ORSON WELLES WROTE AND DIRECTED CITIZEN KANE WHEN HE WAS 25 YEARS OLD.
That right there is enough to make me hang up my creative tools and go back into the fine world of retail, where you work a LOT of hours but if you get into management you make real money (unless you're working at Wal*Mart-- NO one should work there).
So I don't think about it.   I don't set goals like "by thirty five I need a million dollars in the bank"-- rather I do it by more generic thought.   "in the next ten years I want to do XX" << maybe that's all psychological but it works.
Write down your goals and then create a road map of how to get there.  LINKEDIN is great for this, because through two connections on my own account you can be in touch with Jack Welch.  Amazing what technology does.
MENTOR:  Find someone who's done what you want to do and try to learn from them.  Most highly successful people are willing to share insight and even time with those lower down on the ladder of success.  Most are willing to give a helping hand where they can, but they won't do the heavy lifting and it's easy to spot someone looking for an easy ride rather than someone who is willing to put in the effort.  If you're sincere it will come through.
My favorite gallery of all time was SPACE 242 in Boston.  I liked it better than many of the galleries I know in New York City and in Portland Oregon.  I liked the vibe.  I liked the themes.  I liked that it was near the MCA so I could double up my culture.   I liked that it was in SoBo which was experiencing a renewed art vibe.  Most of all I liked their Halloween Show-- because it was boss.
We attended probably two years' worth of shows before we introduced ourselves to Gallery Director Ami Bennet, who looked at our work and soon gave us a very successful solo show at the gallery.
NETWORK: Get thine ass out and mingle with real people.  It will benefit you both socially and professionally.   If you're an artist, get to openings and bring business cards with you which lead prospective gallery owners and other artists to your website.
This is separate from getting out and hanging with friends-- that's important too-- but networking is getting together with your peers and people with similar interests. 
TIME:  Time is our enemy if we don't plan for it.  Ignoring a deadline or a difficult task only puts it off until it's too late to effectively do something about it.  Time with planning is our biggest ally.
I had a routine that I stuck with pretty much Monday - Friday.  I would give a neighbor a ride to their job first thing in the morning, my dog in tow and we'd hit Elm Park for either a long walk or even a run around the park each and every morning.  Then I'd walk the two blocks over to Dunkin Donuts on Chandler Street for a coffee and one of their lower calorie breakfast sandwiches.  I was such a regular that the women who work there would know my coffee order as soon as I walked in.
I'd eat my breakfast sandwich and drink my coffee on my way back to the car and then be back in the studio by 10am with breakfast eaten, the dog walked and ready to work.  I liked this routine because it ensured that I would be up early (as a freelancer controlling my own work schedule early rising can be an issue) and it also got me out of the studio for two hours everyday.
The downside was I was spending $7 on every visit. 
The neighbor retired, and I kept up the routine for a few weeks but eventually I opted to make my own coffee and breakfast at home and walk my dog at the huge park near my home that was less than a half block away.  While Elm Park was full of people, my park was deserted save for the occasional other dog walker or the old guy who would hit golfballs into the woods.  It didn't have the same vibe, but it accomplished what I was looking for.
What I noticed was my spending money increased dramatically.  I never really put it together that the $7 I was spending daily (it's only $7 after all) really added up.  I decided every day I'd drop the same $7 in a Lost In Space Cookie Jar I have in my studio.  Just like Dunkins.
I kept doing it and paid it little attention.  Spring came and went-- summer did too-- and before I knew it we were on the cusp of Halloween.
I decided to count the money in the Robot-- $1120.  ONE THOUSAND ONE HUNDRED AND TWENTY DOLLARS.
ONE THOUSAND ONE HUNDRED AND TWENTY DOLLARS on a breakfast and coffee I barely even liked???
Holy hannah.
It's time you see?  Time.
This works for projects, goals, money, relationships-- you name it.  Time used wisely is your best friend.

That's it, my friends: re-read the posts, make some notes and put some of this advice to work.  But most of all, stop making, and stop accepting, excuses.
You can do this.
Andy Fish is a freelance artist and writer who has been living the lifestyle longer than there has been an iPhone on this planet.  The advice given has worked for him; it might work for you, and he hopes it does.  But like all advice, take it with your own situation in mind.  If you want to contact him shoot him an email [email protected]
luked4nuke · 4 years
If, I were President of the United States. (I just wanna state I’m not a democrat or republican)
First I’d enforce Quarantine and extend it. I’d also attend to the poorest families or individuals first and provide them with the financial assistance they need. People are struggling hard enough as it is living paycheck to paycheck.
Second I’d shut down the schools, as I believe safety is more important, especially for the future kids who will rule this place. I also don’t like how schools give so much homework and stress. They just condition kids into believing working 40 hours a week is normal and that you should be lucky to have weekends. Staying in classes all day then returning home only to be forced to complete more homework that takes up time and robs them of social interactions. These schools don’t even test knowledge. They test obedience and reward them for being quiet little slaves that will slowly become a “regular worker.” They really don’t care about how smart you are, they test memory over all else; when they study a subject and pass the test they move on quickly to the next one, stressing them out. If they failed the test, too bad, they’re still moving on with you. (Sorry this got way off topic. I just hate how schools operate and also how low they pay the teachers)
Third I would dismantle the police force and create a new one. A better one that focuses on real problems like sex trafficking and drugs. All the horrible crimes that are allowed to fly under the radar. Any excessive use of force would be heavily punished. Fired, fined and jail time. No shooting at peaceful protesters, seriously dafaq is wrong with them unleashing hell upon unarmed civilians and sneaking in rioters to escalate it to justify the force.
Fourth, gold is a finite resource. Pretty much all the money you’ve ever spent is fake, all digital, backed by nothing. Personally I hate it, but you’ve all become accustomed to it, so I would attempt to fix the economy so people can afford essential things, like homes and food. Instead of kicking out homeless people I’d build shelters. They make it too easy to fall down into poverty and nearly impossible to climb back up. Once you’ve been arrested, once you’ve been homeless, you understand the struggle of trying to reintegrate with society. The easiest path becomes the dark one. I would attempt to control the population; America is a gigantic habitat and likewise it has a carrying capacity. If you’re gonna argue people have to pay unreasonable amounts of money for food, you’re crazy.
Immigrants are definitely allowed as long as they follow the rules and don’t commit crimes. America was literally founded on immigrants. America stole land from the natives violently and even managed to capture Hawaii, which was its own nation. They taxed us and recognized us as a small power. Iolani Palace had electricity, flush toilets and even phones before the White House did. Queen Lili’uokalani signed under duress. It’s horrible, and she forfeited her whole kingdom in exchange for the people, as a leader should. The people make a country; the government should put the people first. Without all the hardworking Americans working, there is no country.
We don’t serve the government. As a government worker we serve the people. It’s our duty to ensure everybody is treated fairly. To make sure everybody that we oversee has the essentials for life, a home and food.
And for LGBT rights: I personally don’t care what the heck they do. Love is love, let it be. They can choose to identify as whoever they want and pursue relationships with whoever. You can’t force things onto people. America is supposed to be freedom personified; we can choose to do as we please as long as we don’t bring harm to others. Those camps are wrong. America is also religion free, you can be whatever you want, Christian, Muslim, literally anything. Being a satanist is totally legal as long as you don’t hurt anything. Believe in what you want and don’t force it on others. Gay people are amazing! We all are, we’re all human and we can change and create change. We are all human at the core and we always have been. We have a right to love, and to be loved by all around us. Love is love, let it be; there’s always been love. I can identify as a man or woman, and I can damn well love either as I please as long as it’s reciprocated. I’d always rather say I love you too much than not enough.
Climate change is real. The pollution of those stupidly large companies is also VERY real. As an individual you contribute less than a percent of the actual pollution; it’s literally the big corporations. That needs to stop. I’m not exactly sure how, but I AM GOING to start a wave of change that will benefit the world’s health. We all live here. This is not political, I don’t have time for games; scientists that have studied their whole lives are begging for us to change. We can all have solar electricity farms and then it’d be FREE. “But you can’t charge people for that, you can’t make money.” I’m NOT TRYING TO MAKE MONEY. I DO NOT CARE ABOUT MONEY. I’M AIMING FOR SOMETHING BIGGER THAN GREED: THE BETTERMENT OF HUMANITY. I don’t care about ruining electric companies and other random fossil fuel bullshit that will run out, I want the future to be bright!
Screw it, I’m going off the rails: schools’ main courses should focus on stuff like self-sustainment, like farming and wilderness survival. Creativity, because that’s the most human thing about us! Empathy, basic psychology. Kids can get mad; they should learn and understand why. Understand why they feel the feelings they feel, giving them all better emotional control. EMPATHY. They need to learn things like taxes since they’re such a big part. Also, why the heck are taxes so complicated? It’s just targeting the illiterate foreigners and immigrants who struggle and try to understand it, and I believe that’s horrible. Make it easier to become a part of America, the land of freedom and the getaway from the crueler areas of earth. Maybe just limit the population. Also, seriously fuck off with taxes! Why the hell are you charging and taxing 14-year-olds that aren’t allowed to vote? That’s taxation without representation.
Taxes should be like Mario Kart and Ancient Greece. A quote from something I googled:
“The philosopher Aristotle developed the theme. His "magnificent man" gave vast sums to the community. But poor men could never be "magnificent" because they did not have the financial means. True wealth consists in doing good, Aristotle argued in the Art of Rhetoric: in handing out money and gifts, and helping others to maintain an existence.”
The idea is simple: the higher up you are on the financial ladder, the more you have to pay taxes and contribute to society. The large taxes from the rich help fund financial aid for the poor and stuff. The rich did not earn that money; they climbed to the top on a mountain of millions of shortcuts and underpaid workers. It should be an honor to be taxed and help the poor people survive. Like in Mario Kart, the higher you’re placed the harder it is to maintain it, and the last place people always get the better power-ups, giving them a constant fighting chance. At most I believe wealth should be hoarded to sustain like one generation of kids, two at the most. Maybe three, but there’s no reason anybody should have all that money that you’re never going to spend, or all that money that becomes worthless once a war breaks out or aliens attack or something. Life is more important than money. Something simple everyone should consider.
I think everybody should be able to pursue a career and each career should be sustainable. Enjoyment in a job of your choosing without worrying about financial burden. Jobs would be divided into smaller simple groups and the pay would be based on their contribution to society. Like doctors getting paid more and teachers getting paid more, but small retailers wouldn’t get paid as much, though they could survive without living paycheck to paycheck. The motivation is everybody should be free to pursue the hobby they love without being punished. Maybe little Timmy doesn’t want to be a firefighter, maybe he desires a simple fun life selling flowers. That’s fine! Maybe they don’t wanna become the hero, but it’ll still be an honor to society. As long as you have a job that contributes to society you can live for free. If everybody is constantly trying to make the most profit, then we all become a bucket of crabs dragging each other down. I can’t sell my $10 good that cost me $2 to make. Also the whole buy-back thing irritates me: I spent $60 on this goddamn game and GameStop can only give me like $10 in store credit or $5 in real life? That isn’t fair, and that applies to pretty much everything. That $1,000 phone you bought is barely worth $357 right now. I’m pretty sure it didn’t cost that much to make these things, but like DAMN. Capitalism sucks.
In summary, I don’t know much about politics but I would be the human party. I don’t care about left or right. I’m the one that doesn’t care about money. I care more about life and creativity. People’s right to enjoyment and living a happy life with others regardless of gender. Survival of the human race and advancement into a future where more things are free and we can constantly focus on creating an even BETTER one. We can’t go anywhere without each other, especially if we’re all just a bucket of crabs: too greedy and self-destructive, constantly looking out only for themselves. Seriously, get your act together, humans, before you kickstart your own downfall. If we’re all trying to make a profit, nobody does. The best things in life are free. You can pursue wealth for your future or you can focus on living and enjoying and loving the now. Mario Kart style, where we’re all in this race for life and we all deserve a winning chance.
johnmauldin · 4 years
The Post-Virus New Normal
I don’t think anyone believes we will go back to anything like January 2020 normalcy anytime soon. We have no idea, even if restaurants and everything open, what shopping patterns will look like.
Are we learning to live on less in our isolation? Seeing your 401(k) become a 201(k) may postpone a car-buying decision or two.
My daughter (see below) works for a cheerleading gymnastics company. Nationwide this is a multi-hundred-million-dollar industry. Will they just open back up and expect all the girls go back on day one? Will their parents be able to afford it? We’re talking many tens of thousands of jobs. Personal trainers? Many jobs will be under pressure.
There are 47,000 retail stores just in the U.S. We already knew there were too many as closings were becoming more frequent. My friend Professor Michael Pettis in Beijing (who has lived there for 20+ years) has been documenting the return of life in Beijing. He sees people on the streets but not many in the shops, except where the young go to hang out rather than buy. Will that be the case in America and Europe?
Speaking of stores, many have already stopped paying their rent. Cheesecake Factory for one. Ryanair, EasyJet and British Airways have stopped paying most rents and vendors. Group 1 Automotive reports a 50% to 70% decline in March sales across its 428 dealerships in the U.S. and the UK. The company has laid off 3,000 U.S. employees and 2,800 UK employees.
How fast do we start traveling and vacationing again? That matters to hotels, airlines and their employees. I believe this experience will emotionally scar a generation. It is going to make the political divide even worse, especially along wealth and income lines.
The U.S. and other governments can artificially prop up GDP, but for how long? At some point, it really does start affecting the currency’s buying power. We just don’t know what that point is in the developed world.
The Federal Reserve has properly opened swap lines with many emerging markets, as they need dollars in order to pay bills and buy necessary supplies. But many of their citizens are going to want a “safer” fiat currency.
I think many emerging markets will enact capital controls sooner rather than later. That will really screw with their markets. But what else can they do?
Next month’s earnings season will be truly abysmal. Close to half of S&P 500 profits come from outside the U.S. Every business is going to have a new valuation.
It is just my guess, but I doubt we have seen the stock market bottom yet.
Let me close with some good news.
Even though the number of deaths is rising in the U.S. and the developed markets every day, the rate of increase is slowing in many places. I think we will see a giant collective sigh of relief when the infection and death rates are not only dropping but the drop is accelerating.
That will be the time to think about gradually getting back into the markets. Meanwhile, maintain your watchlist of things you want to buy at cheaper prices.
Let me leave you with this link to a letter F. Scott Fitzgerald wrote while quarantined during the Spanish flu. It is a short read but poignant, except for these humorous few sentences:
“The officials have alerted us to ensure we have a month’s worth of necessities. Zelda and I have stocked up on red wine, whiskey, rum, vermouth, absinthe, white wine, sherry, gin, and lord, if we need it, brandy. Please pray for us.”
This quarantine is so very real, and some are rightfully very worried for their future. Will their jobs even exist? Others are seeing their work hours rise due to the “essential” nature of their jobs, but how long will that last?
I do know that we will adjust and that we will all Muddle Through. I will be here with you.
We are all being forced to adjust to a New Normal. And one day -- although it seems far away -- it will become a Post-Virus New Normal. It will be one where friends and family will still be as important as ever.
The Great Reset: The Collapse of the Biggest Bubble in History
New York Times best seller and renowned financial expert John Mauldin predicts an unprecedented financial crisis that could be triggered in the next five years. Most investors seem completely unaware of the relentless pressure that’s building right now. Learn more here.
epackingvietnam · 4 years
10 Steps to Blend STAT Ranking Data with Site Performance Metrics
Posted by AndrewMiller
Too often, we assume that SEO best practices will work in any industry against any competitive set. But most best practices go untested and may not be “best” in every situation.
We all know that tactics that worked in 2020 won’t necessarily move the needle in 2021 as Core Web Vitals (CWV) and other signals shuffle to the front. We have to do better for our businesses and our clients.
I’m a data nerd at heart with lots of battle scars from 15 years in SEO. The idea of analyzing thousands of local SERPs sounded like too much fun to pass up. I found some surprising correlations, and just as importantly, built a methodology and data set that can be updated quarterly to show changes over time.
I analyzed 50,000+ SERPs in the retail banking sector so I could make sense of the massive shifts in rankings and search behaviors during the lockdown period. We have a lot of historical data for bank websites, so comparing pre/post COVID data would be easier than starting from scratch.
I’ll share how I did it below. But first, I want to share WHY I think sharing this type of research is so important for the SEO community.
Why validate SEO best practices with data?
It’s a great time to be an SEO. We have amazing tools and can gather more data than ever. We have thriving communities and excellent basic training materials.
Yet, we often see our craft distilled into overly-simplified “best practices” that are assumed to be universally true. But if there’s one universal truth in SEO, it’s that there are no universal truths. A best practice can be misinterpreted or outdated, leading to missed opportunities or outright harm to a business.
Using the increasing importance of CWV as an example, SEOs have an opportunity (and obligation) to separate fact from fiction. We need to know if, and by how much, CWV will impact rankings over time so we can prioritize our efforts.
We can elevate our SEO game individually and collectively by testing and validating best practices with research. It just takes a curious mind, the right tools, and a willingness to accept the results rather than force a narrative.
Failing to validate best practices is a liability for SEO practitioners and shows an unwillingness to challenge assumptions. In my experience, a lack of data can lead to a senior stakeholders’ opinions carrying more weight than an SEO expert’s recommendations.
Start by asking the right questions
Real insight comes from combining data from multiple sources to answer critical questions and ensure your strategies are backed by valid data. In my analysis of local banks, I started by listing the questions I wanted to know the answers to:
What characteristics are shared by top-ranking local bank websites?
Who are banks actually competing against in the SERPs? Is it primarily other banks?
How do competitive SERPS change based on when/where/how users search?
How can smaller, local businesses gain an edge over larger competitors from outside their region?
How does SERP composition affect a bank’s ability to rank well for targeted keywords?
How important are Core Web Vitals (CWV) for rankings? How does this change over time?
You could run this same analysis by replacing “banks” with other local business categories. The list of potential questions is endless so you can adjust them based on your needs.
Here’s an important reminder - be prepared to accept the answers even if they are inconclusive or contradictory to your assumptions. Data-driven SEOs have to avoid confirmation bias if we’re going to remain objective.
Here’s how I analyzed 50,000 search results in a few hours
I combined three of my favorite tools to analyze SERPs at scale and gather the data needed to answer my questions:
STAT to generated ranking reports for select keywords
Screaming Frog to crawl websites and gather technical SEO data
Power BI to analyze the large data sets and create simple visualizations
Step 1: Determine your data needs
I used US Census Bureau data to identify all cities with populations over 100,000, because I wanted a representation of local bank SERPs across the country. My list ended up including 314 separate cities, but you could customize your list to suit your needs.
I also wanted to gather data for desktop and mobile searches to compare SERP differences between the device types.
Step 2: Identify your keywords
I chose “banks near me” and “banks in {city, st}” based on their strong local intent and high search volumes, compared to more specific keywords for banking services.
Step 3: Generate a STAT import file in .csv format
Once you have your keywords and market list, it’s time to prepare the bulk upload for STAT. Use the template provided in the link to create a .csv file with the following fields:
Project: The name of the new STAT project, or an existing project.
Folder: The name of the new folder, or an existing folder. (This is an optional column that you can leave blank.)
Site: The domain name for the site you want to track. Note, for our purposes you can enter any URL you want to track here. The Top 20 Report will include all ranking URLs for the target keywords even if they aren’t listed in your “Site” column.
Keyword: The search query you’re adding.
Tags: Enter as many keyword tags as you want, separated by commas. I used “city” and “near me” as tags to distinguish between the query types. (This is an optional column that you can leave blank.)
Market: Specify the market (country and language) in which you would like to track the keyword. I used “US-en” for US English.
Location: If you want to track the keyword in a specific location, specify the city, state, province, ZIP code, and/or postal code. I used the city and state list in “city, st” format.
Device: Select whether you would like Desktop or Smartphone results. I selected both.
Each market, location, and device type will multiply the number of keywords you must track. I ended up with 1,256 keywords (314 markets X 2 keywords X 2 devices) in my import file.
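If you'd rather script the bulk-upload file than build the 1,256 rows by hand, a short Python sketch like the one below can generate them. The city list, project name, and tracked domain here are placeholders, not values from this analysis:

```python
import csv
from itertools import product

# Placeholder inputs -- swap in your own 314-city list, project name, and tracked domain
cities = ["Richmond, VA", "Norfolk, VA", "Chesapeake, VA"]
keyword_templates = ["banks near me", "banks in {city}"]
devices = ["Desktop", "Smartphone"]

with open("stat_bulk_import.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # Header mirrors the STAT bulk-upload template fields listed above
    writer.writerow(["Project", "Folder", "Site", "Keyword", "Tags", "Market", "Location", "Device"])
    for city, template, device in product(cities, keyword_templates, devices):
        keyword = template.replace("{city}", city.lower())
        tag = "near me" if "near me" in template else "city"
        writer.writerow(["Local Banks", "", "examplebank.com", keyword, tag, "US-en", city, device])
```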
Once your file is complete, you can import to STAT and begin tracking.
Step 4: Run a Top 20 Report in STAT for all keywords
STAT’s built-in Google SERP Top 20 Comparison report captures the top 20 organic results from each SERP at different intervals (daily, weekly, monthly, etc.) to look at changes over time. I did not need daily data so I simply let it run on two consecutive days and removed the data I did not need. I re-run the same report quarterly to track changes over time.
My 1,256 keywords generated over 25,000 rows of data per day. Each row is a different organic listing and includes the keyword, monthly search volume, rank (includes the local pack), base rank (does not include the local pack), https/http protocol of the ranking URL, the ranking URL, and your tags.
Here’s an example of the raw output in CSV format:
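The export itself isn't reproduced here, but structurally each row looks something like this (the keywords, volumes, ranks, and URLs below are made up for illustration):

```
Keyword,Search Volume,Rank,Base Rank,Protocol,Ranking URL,Tags
"banks in richmond, va",1300,1,1,https,www.examplebank.com/locations/richmond,city
"banks in richmond, va",1300,2,2,https,www.yelp.com/search?find_desc=banks&find_loc=richmond+va,city
banks near me,135000,1,1,https,www.anotherbank.com/branch-locator,near me
```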
It’s easy to see how useful this data is by itself but it becomes even more powerful when we clean it up and start crawling the ranking URLs.
Step 5: Clean up and normalize your STAT URLs data
At this point you may have invested 1-2 hours in gathering the initial data. This step is a bit more time consuming, but data cleansing allows you to run more advanced analysis and uncover more useful insights in Screaming Frog.
Here are the changes I made to the STAT rankings data to prepare for the next steps in Screaming Frog and Power BI. You’ll end up with multiple columns of URLs. Each serves a purpose later.
Duplicate the Ranking URL column to a new column called Normalized URL.
Remove URL parameters from the Normalized URL fields by using Excel’s text to columns tool and separating by “?”. I deleted the new columns(s) containing the URL parameters because they were not helpful in my analysis.
Duplicate the new, clean Normalized URL column to a new column called TLD. Use the text to columns tool on the TLD column and separate by “/” to remove everything except the domain name and subdomains. Delete the new columns. I chose to keep the subdomains in my TLD column but you can remove them if it helps your analysis.
Finally, create one more column called Full URL that will eventually become the list of URLs that you’ll crawl in Screaming Frog. To generate the Full URL, simply use Excel’s concatenate function to combine the Protocol and Normalized URL columns. Your formula will look something like this: =concatenate(A1, “://”, C1) to include the “://” in a valid URL string.
The 25,000+ rows in my data set are well within Excel’s limitations, so I am able to manipulate the data easily in one place. You may need to use a database (I like BigQuery) as your data sets grow.
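If Excel starts to strain, the same cleanup translates into a few lines of pandas. This is only a sketch, and it assumes the column names match the STAT export described above:

```python
import pandas as pd

df = pd.read_csv("stat_top20_report.csv")

# Normalized URL: ranking URL with any query parameters stripped
df["Normalized URL"] = df["Ranking URL"].str.split("?").str[0]

# TLD: domain plus subdomains, i.e. everything before the first "/"
df["TLD"] = df["Normalized URL"].str.split("/").str[0]

# Full URL: protocol + "://" + normalized URL, ready for Screaming Frog's list mode
df["Full URL"] = df["Protocol"] + "://" + df["Normalized URL"]

df.to_csv("stat_rankings_clean.csv", index=False)
```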
Step 6: Categorize your SERP results by website type
Skimming through the SERP results, it’s easy to see that banks are not the only type of website that rank for keywords with local search intent. Since one of my initial questions was SERP composition, I had to identify all of the different types of websites and label each one for further analysis.
This step is by far the most time consuming and insightful. I spent 3 hours categorizing the initial batch of 25,000+ URLs into one of the following categories:
Institution (banks and credit union websites)
Directory (aggregators, local business directories, etc.)
Reviews (local and national sites like Yelp.com)
Education (content about banks on .edu domains)
Government (content about banks on .gov domains and municipal sites)
Jobs (careers sites and job aggregators)
News (local and national news sites with banking content)
Food Banks (yes, plenty of food banks rank for “banks near me” keywords)
Real Estate (commercial and residential real estate listings)
Search Engines (ranked content belonging to a search engine)
Social Media (ranked content on social media sites)
Other (completely random results not related to any of the above)
Your local SERPs will likely contain many of these website types and other unrelated categories such as food banks. Speed up the process by sorting and filtering your TLD and Normalized URL columns to categorize multiple rows simultaneously. For example, all the yelp.com rankings can be categorized as “Reviews” with a quick copy/paste.
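You can also script a first pass at this labeling before the manual cleanup. The pattern-to-category rules below are examples only; you'll build out your own list as you skim your SERPs:

```python
import pandas as pd

# Example pattern-to-category rules -- extend these as you review your own results
CATEGORY_RULES = {
    "yelp.com": "Reviews",
    "yellowpages.com": "Directory",
    "indeed.com": "Jobs",
    ".edu": "Education",
    ".gov": "Government",
    "facebook.com": "Social Media",
}

def categorize(tld: str) -> str:
    for pattern, category in CATEGORY_RULES.items():
        if pattern in tld:
            return category
    return "Needs manual review"  # banks, credit unions, news, etc. still get labeled by hand

df = pd.read_csv("stat_rankings_clean.csv")
df["Website Type"] = df["TLD"].apply(categorize)
df.to_csv("stat_rankings_categorized.csv", index=False)
```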
At this point, your rankings data set is complete and you are ready to begin crawling the top-ranking sites in your industry to see what they have in common.
Step 7: Crawl your target websites with Screaming Frog
My initial STAT data identified over 6,600 unique pages from local bank websites that ranked in the top 20 organic search results. This is far too many pages to evaluate manually. Enter Screaming Frog, a crawler that mimics Google’s web crawler and extracts tons of SEO data from websites.
I configured Screaming Frog to crawl each of the 6,600 ranking pages for a larger analysis of characteristics shared by top-ranking bank websites. Don’t just let SF loose though. Be sure to configure it properly to save time and avoid crawling unnecessary pages.
These settings ensure we’ll get all the info we need to answer our questions in one crawl:
List Mode: Paste in a de-duplicated Full URL list from your STAT data. In my case, this was 6,600+ URLs.
Database Storage Mode: It may be a bit slower than Memory (RAM) Storage, but saving your crawl results on your hard disk ensures you won’t lose your results if you make a mistake (like I have many times) and close your report before you finish analyzing the data.
Limit Crawl Depth: Set this to 0 (zero) so the spider will only crawl the URLs on your list without following internal links to other pages on those domains.
APIs: I highly recommend using the Pagespeed Insights Integration to pull Lighthouse speed metrics directly into your crawl data. If you have a Moz account with API access, you can also pull link and domain data from the Moz API with the built-in integration.
Once you have configured the spider, let it rip! It could take several minutes to several hours depending on how many URLs you’re crawling and your computer’s speed and memory constraints. Just be patient! You might try running larger crawls overnight or on an extra computer to avoid bogging your primary machine down.
Step 8: Export your Screaming Frog crawl data to Excel
Dumping your crawl data into Excel is remarkably easy.
Step 9: Join your data sets in Power BI
At this point, you should have two data sources in Excel: one for your STAT rankings data and another for your Screaming Frog crawl data. Our goal is to combine the two data sources to see how organic search rank may be influenced by on-page SEO elements and site performance. To do this, we must first merge the data.
If you have access to a Windows PC, the free version of Power BI is powerful enough to get you started. Begin by loading your two data sources into a new project using the Get Data wizard.
Once your data sets are loaded, it’s time to make the magic happen by creating relationships in your data to unlock correlations between rankings and site characteristics. To combine your data in Power BI, create a many-to-many relationship between your STAT Full URL and Screaming Frog Original URL fields. 
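If you want to sanity-check the join outside of Power BI first, an equivalent left join in pandas looks roughly like this. The column names are assumptions; adjust them to whatever your STAT and Screaming Frog exports actually use:

```python
import pandas as pd

rankings = pd.read_csv("stat_rankings_categorized.csv")  # STAT data with the Full URL column
crawl = pd.read_csv("screaming_frog_export.csv")         # Screaming Frog crawl export

merged = rankings.merge(
    crawl,
    how="left",
    left_on="Full URL",
    right_on="Original URL",  # rename to match your crawl export's URL column if it differs
)

merged.to_csv("rankings_with_crawl_data.csv", index=False)
```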
If you are new to BI tools and data visualization, don’t worry! There are lots of helpful tutorials and videos just a quick search away. At this point, it’s really hard to break anything and you can experiment with lots of ways to analyze your data and share insights with many types of charts and graphs.
I should note that Power BI is my preferred data visualization tool, but you may be able to use Tableau or something equally powerful. Google Data Studio was not an option for this analysis because it only allows for left outer joins of the multiple data sources and does not support “many-to-many” relationships. It’s a technical way of saying Data Studio isn’t flexible enough to create the data relationships that we need.
Step 10: Analyze and visualize!
Power BI’s built-in visualizations allow you to quickly summarize and present data. This is where we can start analyzing the data to answer the questions we came up with earlier.
Results — what did we learn?
Here are a couple of examples of the insights gleaned from merging our rankings and crawl data. Spoiler alert: CWV doesn’t strongly impact organic rankings... yet!
Who are banks actually competing against in the SERPs? Is it primarily other banks?
On desktops, about 67% of organic search results belong to financial institutions (banks and credit unions) with heavy competition from reviews sites (7%) and online directories (22%). This information helps shape our SEO strategies for banks by exposing opportunities to monitor and maintain listings in relevant directories and reviews sites.
Okay, now let’s mash up our data sources to see how the distribution of website categories varies by rank on desktop devices. Suddenly, we can see that financial institutions actually occupy the majority of the top 3 results while reviews sites and directories are more prevalent in positions 4-10.
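That distribution is also easy to sanity-check in a script once the data sets are merged. Here is a hedged sketch, assuming the merged table carries the STAT base rank and the Category labels you assigned in Step 6 (the column names are assumptions to adapt to your own sheet):

import pandas as pd

merged = pd.read_csv("merged_rankings_crawl.csv")   # hypothetical merged file from the join above

# "Base Rank" and "Category" are assumed column names; match them to your data.
# Bucket the organic positions, then compute each category's share of results per bucket.
merged["Position Bucket"] = pd.cut(merged["Base Rank"], bins=[0, 3, 10, 20],
                                   labels=["1-3", "4-10", "11-20"])
composition = (pd.crosstab(merged["Position Bucket"], merged["Category"],
                           normalize="index") * 100).round(1)
print(composition)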
How important are Core Web Vitals (CWV) for rankings? How does this change over time?
Site performance and site speed are hot topics in SEO and will only become more important as CWV becomes a ranking signal in May this year. We can begin to understand the relationships between site speed and rankings by comparing STAT rankings and Pagespeed Insights data from Screaming Frog reports.
As of January 2021, sites with higher Lighthouse Performance Scores (i.e. they load faster) tend to rank better than sites with lower scores. This could help justify investments in site speed and site performance.
Some CWV elements correlate more closely with better rankings and others are more scattered. This isn’t to say CWV aren’t important or meaningful, but rather it’s a starting point for further analysis after May.
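One way to quantify "more scattered" is a rank correlation for each metric. The sketch below assumes the PageSpeed Insights columns pulled in during Step 7; the exact column names vary by Screaming Frog version, so treat these as examples to adapt rather than canonical field names.

import pandas as pd

merged = pd.read_csv("merged_rankings_crawl.csv")   # hypothetical merged data set

# Spearman correlation between position and each speed metric. Base Rank: lower is better,
# so a positive rho for LCP/CLS/TBT (and a negative rho for Performance Score) would point
# in the "faster ranks better" direction.
cwv_columns = ["Performance Score", "Largest Contentful Paint Time (ms)",
               "Cumulative Layout Shift", "Total Blocking Time (ms)"]   # assumed names
for col in cwv_columns:
    if col in merged.columns:
        rho = merged["Base Rank"].corr(merged[col], method="spearman")
        print(f"{col}: Spearman rho vs. Base Rank = {rho:.2f}")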
So what? What can we learn from this type of analysis?
Separately, STAT and Screaming Frog are incredibly powerful SEO tools. The data they provide are useful if you happen to be an SEO, but the ability to merge data and extract relationships will multiply your value in any organization that values data and acts on insights.
Besides validating some generally accepted SEO knowledge with data (“faster sites are rewarded with better rankings”), better use of relational data can also help us avoid spending valuable time on less important tactics (“improve Cumulative Layout Shift at all costs!”).
Of course, correlation does not imply causation, and aggregated data does not guarantee an outcome for individual sites. But if you’re a bank marketing professional responsible for customer acquisition from organic channels, you’ll need to bring this type of data to your stakeholders to justify increased investments in SEO.
By sharing the tools and methodology, I hope others will take it further by building and contributing their additional findings to the SEO community. What other datasets can we combine to deepen our understanding of SERPs on a larger scale? Let me know your thoughts in the comments!
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
#túi_giấy_epacking_việt_nam #túi_giấy_epacking #in_túi_giấy_giá_rẻ #in_túi_giấy #epackingvietnam #tuigiayepacking
0 notes
bfxenon · 4 years
Text
10 Steps to Blend STAT Ranking Data with Site Performance Metrics
Posted by AndrewMiller
Too often, we assume that SEO best practices will work in any industry against any competitive set. But most best practices go untested and may not be “best” in every situation.
We all know that tactics that worked in 2020 won’t necessarily move the needle in 2021 as Core Web Vitals (CWV) and other signals shuffle to the front. We have to do better for our businesses and our clients.
I’m a data nerd at heart with lots of battle scars from 15 years in SEO. The idea of analyzing thousands of local SERPs sounded like too much fun to pass up. I found some surprising correlations, and just as importantly, built a methodology and data set that can be updated quarterly to show changes over time.
I analyzed 50,000+ SERPs in the retail banking sector so I could make sense of the massive shifts in rankings and search behaviors during the lockdown period. We have a lot of historical data for bank websites, so comparing pre/post COVID data would be easier than starting from scratch.
I’ll share how I did it below. But first, I want to share WHY I think sharing this type of research is so important for the SEO community.
Why validate SEO best practices with data?
It’s a great time to be an SEO. We have amazing tools and can gather more data than ever. We have thriving communities and excellent basic training materials.
Yet, we often see our craft distilled into overly-simplified “best practices” that are assumed to be universally true. But if there’s one universal truth in SEO, it’s that there are no universal truths. A best practice can be misinterpreted or outdated, leading to missed opportunities or outright harm to a business.
Using the increasing importance of CWV as an example, SEOs have an opportunity (and obligation) to separate fact from fiction. We need to know if, and by how much, CWV will impact rankings over time so we can prioritize our efforts.
We can elevate our SEO game individually and collectively by testing and validating best practices with research. It just takes a curious mind, the right tools, and a willingness to accept the results rather than force a narrative.
Failing to validate best practices is a liability for SEO practitioners and shows an unwillingness to challenge assumptions. In my experience, a lack of data can lead to a senior stakeholders’ opinions carrying more weight than an SEO expert’s recommendations.
Start by asking the right questions
Real insight comes from combining data from multiple sources to answer critical questions and ensure your strategies are backed by valid data. In my analysis of local banks, I started by listing the questions I wanted to know the answers to:
What characteristics are shared by top-ranking local bank websites?
Who are banks actually competing against in the SERPs? Is it primarily other banks?
How do competitive SERPS change based on when/where/how users search?
How can smaller, local businesses gain an edge over larger competitors from outside their region?
How does SERP composition affect a bank’s ability to rank well for targeted keywords?
How important are Core Web Vitals (CWV) for rankings? How does this change over time?
You could run this same analysis by replacing “banks” with other local business categories. The list of potential questions is endless so you can adjust them based on your needs.
Here’s an important reminder - be prepared to accept the answers even if they are inconclusive or contradictory to your assumptions. Data-driven SEOs have to avoid confirmation bias if we’re going to remain objective.
Here’s how I analyzed 50,000 search results in a few hours
I combined three of my favorite tools to analyze SERPs at scale and gather the data needed to answer my questions:
STAT to generated ranking reports for select keywords
Screaming Frog to crawl websites and gather technical SEO data
Power BI to analyze the large data sets and create simple visualizations
Step 1: Determine your data needs
I used US Census Bureau data to identify all cities with populations over 100,000, because I wanted a representation of local bank SERPs across the country. My list ended up including 314 separate cities, but you could customize your list to suit your needs.
I also wanted to gather data for desktop and mobile searches to compare SERP differences between the device types.
Step 2: Identify your keywords
I chose “banks near me” and “banks in {city, st}” based on their strong local intent and high search volumes, compared to more specific keywords for banking services.
Step 3: Generate a STAT import file in .csv format
Once you have your keywords and market list, it’s time to prepare the bulk upload for STAT. Use the template provided in the link to create a .csv file with the following fields:
Project: The name of the new STAT project, or an existing project.
Folder: The name of the new folder, or an existing folder. (This is an optional column that you can leave blank.)
Site: The domain name for the site you want to track. Note, for our purposes you can enter any URL you want to track here. The Top 20 Report will include all ranking URLs for the target keywords even if they aren’t listed in your “Site” column.
Keyword: The search query you’re adding.
Tags: Enter as many keyword tags as you want, separated by commas. I used “city” and “near me” as tags to distinguish between the query types. (This is an optional column that you can leave blank.)
Market: Specify the market (country and language) in which you would like to track the keyword. I used “US-en” for US English.
Location: If you want to track the keyword in a specific location, specify the city, state, province, ZIP code, and/or postal code. I used the city and state list in “city, st” format.
Device: Select whether you would like Desktop or Smartphone results. I selected both.
Each market, location, and device type will multiply the number of keywords you must track. I ended up with 1,256 keywords (314 markets X 2 keywords X 2 devices) in my import file.
Once your file is complete, you can import to STAT and begin tracking.
Step 4: Run a Top 20 Report in STAT for all keywords
STAT’s built-in Google SERP Top 20 Comparison report captures the top 20 organic results from each SERP at different intervals (daily, weekly, monthly, etc.) to look at changes over time. I did not need daily data so I simply let it run on two consecutive days and removed the data I did not need. I re-run the same report quarterly to track changes over time.
Watch the video below to learn how to set up this report! 
My 1,256 keywords generated over 25,000 rows of data per day. Each row is a different organic listing and includes the keyword, monthly search volume, rank (includes the local pack), base rank (does not include the local pack), https/http protocol of the ranking URL, the ranking URL, and your tags.
Here’s an example of the raw output in CSV format:
It’s easy to see how useful this data is by itself but it becomes even more powerful when we clean it up and start crawling the ranking URLs.
Step 5: Clean up and normalize your STAT URLs data
At this point you may have invested 1-2 hours in gathering the initial data. This step is a bit more time consuming, but data cleansing allows you to run more advanced analysis and uncover more useful insights in Screaming Frog.
Here are the changes I made to the STAT rankings data to prepare for the next steps in Screaming Frog and Power BI. You’ll end up with multiple columns of URLs. Each serves a purpose later.
Duplicate the Ranking URL column to a new column called Normalized URL.
Remove URL parameters from the Normalized URL fields by using Excel’s text to columns tool and separating by “?”. I deleted the new columns(s) containing the URL parameters because they were not helpful in my analysis.
Duplicate the new, clean Normalized URL column to a new column called TLD. Use the text to columns tool on the TLD column and separate by “/” to remove everything except the domain name and subdomains. Delete the new columns. I chose to keep the subdomains in my TLD column but you can remove them if it helps your analysis.
Finally, create one more column called Full URL that will eventually become the list of URLs that you’ll crawl in Screaming Frog. To generate the Full URL, simply use Excel’s concatenate function to combine the Protocol and Normalized URL columns. Your formula will look something like this: =concatenate(A1, “://”, C1) to include the “://” in a valid URL string.
The 25,000+ rows in my data set are well within Excel’s limitations, so I am able to manipulate the data easily in one place. You may need to use a database (I like BigQuery) as your data sets grow.
Step 6: Categorize your SERP results by website type
Skimming through the SERP results, it’s easy to see that banks are not the only type of website that rank for keywords with local search intent. Since one of my initial questions was SERP composition, I had to identify all of the different types of websites and label each one for further analysis.
This step is by far the most time consuming and insightful. I spent 3 hours categorizing the initial batch of 25,000+ URLs into one of the following categories:
Institution (banks and credit union websites)
Directory (aggregators, local business directories, etc.)
Reviews (local and national sites like Yelp.com)
Education (content about banks on .edu domains)
Government (content about banks on .gov domains and municipal sites)
Jobs (careers sites and job aggregators)
News (local and national news sites with banking content)
Food Banks (yes, plenty of food banks rank for “banks near me” keywords)
Real Estate (commercial and residential real estate listings)
Search Engines (ranked content belonging to a search engine)
Social Media (ranked content on social media sites)
Other (completely random results not related to any of the above)
Your local SERPs will likely contain many of these website types and other unrelated categories such as food banks. Speed up the process by sorting and filtering your TLD and Normalized URL columns to categorize multiple rows simultaneously. For example, all the yelp.com rankings can be categorized as “Reviews” with a quick copy/paste.
At this point, your rankings data set is complete and you are ready to begin crawling the top-ranking sites in your industry to see what they have in common.
Step 7: Crawl your target websites with Screaming Frog
My initial STAT data identified over 6,600 unique pages from local bank websites that ranked in the top 20 organic search results. This is far too many pages to evaluate manually. Enter Screaming Frog, a crawler that mimics Google’s web crawler and extracts tons of SEO data from websites.
I configured Screaming Frog to crawl each of the 6,600 ranking pages for a larger analysis of characteristics shared by top-ranking bank websites. Don’t just let SF loose though. Be sure to configure it properly to save time and avoid crawling unnecessary pages.
These settings ensure we’ll get all the info we need to answer our questions in one crawl:
List Mode: Paste in a de-duplicated Full URL list from your STAT data. In my case, this was 6,600+ URLs.
Database Storage Mode: It may be a bit slower than Memory (RAM) Storage, but saving your crawl results on your hard disk ensures you won’t lose your results if you make a mistake (like I have many times) and close your report before you finish analyzing the data.
Limit Crawl Depth: Set this to 0 (zero) so the spider will only crawl the URLs on your list without following internal links to other pages on those domains.
APIs: I highly recommend using the Pagespeed Insights Integration to pull Lighthouse speed metrics directly into your crawl data. If you have a Moz account with API access, you can also pull link and domain data from the Moz API with the built-in integration.
Once you have configured the spider, let it rip! It could take several minutes to several hours depending on how many URLs you’re crawling and your computer’s speed and memory constraints. Just be patient! You might try running larger crawls overnight or on an extra computer to avoid bogging your primary machine down.
Step 8: Export your Screaming Frog crawl data to Excel
Dumping your crawl data into Excel is remarkably easy.
Step 9: Join your data sets in Power BI
At this point, you should have two data sources in Excel: one for your STAT rankings data and another for your Screaming Frog crawl data. Our goal is to combine the two data sources to see how organic search rank may be influenced by on-page SEO elements and site performance. To do this, we must first merge the data.
If you have access to a Windows PC, the free version of Power BI is powerful enough to get you started. Begin by loading your two data sources into a new project using the Get Data wizard.
Once your data sets are loaded, it’s time to make the magic happen by creating relationships in your data to unlock correlations between rankings and site characteristics. To combine your data in Power BI, create a many-to-many relationship between your STAT Full URL and Screaming Frog Original URL fields. 
If you are new to BI tools and data visualization, don’t worry! There are lots of helpful tutorials and videos just a quick search away. At this point, it’s really hard to break anything and you can experiment with lots of ways to analyze your data and share insights with many types of charts and graphs.
I should note that Power BI is my preferred data visualization tool but you may be able to use Tableau or some equally powerful. Google Data Studio was not an option for this analysis because it only allows for left outer joins of the multiple data sources and does not support “many-to-many” relationships. It’s a technical way of saying Data Studio isn’t flexible enough to create the data relationships that we need.
Step 10: Analyze and visualize!
Power BI’s built-in visualizations allow you to quickly summarize and present data. This is where we can start analyzing the data to answer the questions we came up with earlier.
Results — what did we learn?
Here are a couple examples of the insights gleaned from merging our rankings and crawl data. Spoiler alert — CWV doesn’t strongly impact organic rankings….yet!
Who are banks actually competing against in the SERPs? Is it primarily other banks?
On desktops, about 67% of organic search results belong to financial institutions (banks and credit unions) with heavy competition from reviews sites (7%) and online directories (22%). This information helps shape our SEO strategies for banks by exposing opportunities to monitor and maintain listings in relevant directories and reviews sites.
Okay, now let’s mash up our data sources to see how the distribution of website categories varies by rank on desktop devices. Suddenly, we can see that financial institutions actually occupy the majority of the top 3 results while reviews sites and directories are more prevalent in positions 4-10.
How important are Core Web Vitals (CWV) for rankings? How does this change over time?
Site performance and site speed are hot topics in SEO and will only become more important as CWV becomes a ranking signal in May this year. We can begin to understand the relationships between site speed and rankings by comparing STAT rankings and Pagespeed Insights data from Screaming Frog reports.
As of January 2021, sites with higher Lighthouse Performance Scores (i.e. they load faster) tend to rank better than sites with lower scores. This could help justify investments in site speed and site performance.
Some CWV elements correlate more closely with better rankings and others are more scattered. This isn’t to say CWV aren’t important or meaningful, but rather it’s a starting point for further analysis after May.
So what? What can we learn from this type of analysis?
Separately, STAT and Screaming Frog are incredibly powerful SEO tools. The data they provide are useful if you happen to be an SEO but the ability to merge data and extract relationships will multiply your value in any organization that values data, and acts on insights.
Besides validating some generally accepted SEO knowledge with data (“faster sites are rewarded with better rankings”), better use of relational data can also help us avoid spending valuable time on less important tactics (“improve Cumulative Layout Shift at all costs!”).
Of course, correlation does not imply causation, and aggregated data does not guarantee an outcome for individual sites. But if you’re a bank marketing professional responsible for customer acquisition from organic channels, you’ll need to bring this type of data to your stakeholders to justify increased investments in SEO.
By sharing the tools and methodology, I hope others will take it further by building and contributing their additional findings to the SEO community. What other datasets can we combine to deepen our understanding of SERPs on a larger scale? Let me know your thoughts in the comments!
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
nutrifami · 4 years
Text
10 Steps to Blend STAT Ranking Data with Site Performance Metrics
Posted by AndrewMiller
Too often, we assume that SEO best practices will work in any industry against any competitive set. But most best practices go untested and may not be “best” in every situation.
We all know that tactics that worked in 2020 won’t necessarily move the needle in 2021 as Core Web Vitals (CWV) and other signals shuffle to the front. We have to do better for our businesses and our clients.
I’m a data nerd at heart with lots of battle scars from 15 years in SEO. The idea of analyzing thousands of local SERPs sounded like too much fun to pass up. I found some surprising correlations, and just as importantly, built a methodology and data set that can be updated quarterly to show changes over time.
I analyzed 50,000+ SERPs in the retail banking sector so I could make sense of the massive shifts in rankings and search behaviors during the lockdown period. We have a lot of historical data for bank websites, so comparing pre/post COVID data would be easier than starting from scratch.
I’ll share how I did it below. But first, I want to share WHY I think sharing this type of research is so important for the SEO community.
Why validate SEO best practices with data?
It’s a great time to be an SEO. We have amazing tools and can gather more data than ever. We have thriving communities and excellent basic training materials.
Yet, we often see our craft distilled into overly-simplified “best practices” that are assumed to be universally true. But if there’s one universal truth in SEO, it’s that there are no universal truths. A best practice can be misinterpreted or outdated, leading to missed opportunities or outright harm to a business.
Using the increasing importance of CWV as an example, SEOs have an opportunity (and obligation) to separate fact from fiction. We need to know if, and by how much, CWV will impact rankings over time so we can prioritize our efforts.
We can elevate our SEO game individually and collectively by testing and validating best practices with research. It just takes a curious mind, the right tools, and a willingness to accept the results rather than force a narrative.
Failing to validate best practices is a liability for SEO practitioners and shows an unwillingness to challenge assumptions. In my experience, a lack of data can lead to a senior stakeholders’ opinions carrying more weight than an SEO expert’s recommendations.
Start by asking the right questions
Real insight comes from combining data from multiple sources to answer critical questions and ensure your strategies are backed by valid data. In my analysis of local banks, I started by listing the questions I wanted to know the answers to:
What characteristics are shared by top-ranking local bank websites?
Who are banks actually competing against in the SERPs? Is it primarily other banks?
How do competitive SERPS change based on when/where/how users search?
How can smaller, local businesses gain an edge over larger competitors from outside their region?
How does SERP composition affect a bank’s ability to rank well for targeted keywords?
How important are Core Web Vitals (CWV) for rankings? How does this change over time?
You could run this same analysis by replacing “banks” with other local business categories. The list of potential questions is endless so you can adjust them based on your needs.
Here’s an important reminder - be prepared to accept the answers even if they are inconclusive or contradictory to your assumptions. Data-driven SEOs have to avoid confirmation bias if we’re going to remain objective.
Here’s how I analyzed 50,000 search results in a few hours
I combined three of my favorite tools to analyze SERPs at scale and gather the data needed to answer my questions:
STAT to generated ranking reports for select keywords
Screaming Frog to crawl websites and gather technical SEO data
Power BI to analyze the large data sets and create simple visualizations
Step 1: Determine your data needs
I used US Census Bureau data to identify all cities with populations over 100,000, because I wanted a representation of local bank SERPs across the country. My list ended up including 314 separate cities, but you could customize your list to suit your needs.
I also wanted to gather data for desktop and mobile searches to compare SERP differences between the device types.
Step 2: Identify your keywords
I chose “banks near me” and “banks in {city, st}” based on their strong local intent and high search volumes, compared to more specific keywords for banking services.
Step 3: Generate a STAT import file in .csv format
Once you have your keywords and market list, it’s time to prepare the bulk upload for STAT. Use the template provided in the link to create a .csv file with the following fields:
Project: The name of the new STAT project, or an existing project.
Folder: The name of the new folder, or an existing folder. (This is an optional column that you can leave blank.)
Site: The domain name for the site you want to track. Note, for our purposes you can enter any URL you want to track here. The Top 20 Report will include all ranking URLs for the target keywords even if they aren’t listed in your “Site” column.
Keyword: The search query you’re adding.
Tags: Enter as many keyword tags as you want, separated by commas. I used “city” and “near me” as tags to distinguish between the query types. (This is an optional column that you can leave blank.)
Market: Specify the market (country and language) in which you would like to track the keyword. I used “US-en” for US English.
Location: If you want to track the keyword in a specific location, specify the city, state, province, ZIP code, and/or postal code. I used the city and state list in “city, st” format.
Device: Select whether you would like Desktop or Smartphone results. I selected both.
Each market, location, and device type will multiply the number of keywords you must track. I ended up with 1,256 keywords (314 markets X 2 keywords X 2 devices) in my import file.
Once your file is complete, you can import to STAT and begin tracking.
Step 4: Run a Top 20 Report in STAT for all keywords
STAT’s built-in Google SERP Top 20 Comparison report captures the top 20 organic results from each SERP at different intervals (daily, weekly, monthly, etc.) to look at changes over time. I did not need daily data so I simply let it run on two consecutive days and removed the data I did not need. I re-run the same report quarterly to track changes over time.
Watch the video below to learn how to set up this report! 
My 1,256 keywords generated over 25,000 rows of data per day. Each row is a different organic listing and includes the keyword, monthly search volume, rank (includes the local pack), base rank (does not include the local pack), https/http protocol of the ranking URL, the ranking URL, and your tags.
Here’s an example of the raw output in CSV format:
It’s easy to see how useful this data is by itself but it becomes even more powerful when we clean it up and start crawling the ranking URLs.
Step 5: Clean up and normalize your STAT URLs data
At this point you may have invested 1-2 hours in gathering the initial data. This step is a bit more time consuming, but data cleansing allows you to run more advanced analysis and uncover more useful insights in Screaming Frog.
Here are the changes I made to the STAT rankings data to prepare for the next steps in Screaming Frog and Power BI. You’ll end up with multiple columns of URLs. Each serves a purpose later.
Duplicate the Ranking URL column to a new column called Normalized URL.
Remove URL parameters from the Normalized URL fields by using Excel’s text to columns tool and separating by “?”. I deleted the new columns(s) containing the URL parameters because they were not helpful in my analysis.
Duplicate the new, clean Normalized URL column to a new column called TLD. Use the text to columns tool on the TLD column and separate by “/” to remove everything except the domain name and subdomains. Delete the new columns. I chose to keep the subdomains in my TLD column but you can remove them if it helps your analysis.
Finally, create one more column called Full URL that will eventually become the list of URLs that you’ll crawl in Screaming Frog. To generate the Full URL, simply use Excel’s concatenate function to combine the Protocol and Normalized URL columns. Your formula will look something like this: =concatenate(A1, “://”, C1) to include the “://” in a valid URL string.
The 25,000+ rows in my data set are well within Excel’s limitations, so I am able to manipulate the data easily in one place. You may need to use a database (I like BigQuery) as your data sets grow.
Step 6: Categorize your SERP results by website type
Skimming through the SERP results, it’s easy to see that banks are not the only type of website that rank for keywords with local search intent. Since one of my initial questions was SERP composition, I had to identify all of the different types of websites and label each one for further analysis.
This step is by far the most time consuming and insightful. I spent 3 hours categorizing the initial batch of 25,000+ URLs into one of the following categories:
Institution (banks and credit union websites)
Directory (aggregators, local business directories, etc.)
Reviews (local and national sites like Yelp.com)
Education (content about banks on .edu domains)
Government (content about banks on .gov domains and municipal sites)
Jobs (careers sites and job aggregators)
News (local and national news sites with banking content)
Food Banks (yes, plenty of food banks rank for “banks near me” keywords)
Real Estate (commercial and residential real estate listings)
Search Engines (ranked content belonging to a search engine)
Social Media (ranked content on social media sites)
Other (completely random results not related to any of the above)
Your local SERPs will likely contain many of these website types and other unrelated categories such as food banks. Speed up the process by sorting and filtering your TLD and Normalized URL columns to categorize multiple rows simultaneously. For example, all the yelp.com rankings can be categorized as “Reviews” with a quick copy/paste.
At this point, your rankings data set is complete and you are ready to begin crawling the top-ranking sites in your industry to see what they have in common.
Step 7: Crawl your target websites with Screaming Frog
My initial STAT data identified over 6,600 unique pages from local bank websites that ranked in the top 20 organic search results. This is far too many pages to evaluate manually. Enter Screaming Frog, a crawler that mimics Google’s web crawler and extracts tons of SEO data from websites.
I configured Screaming Frog to crawl each of the 6,600 ranking pages for a larger analysis of characteristics shared by top-ranking bank websites. Don’t just let SF loose though. Be sure to configure it properly to save time and avoid crawling unnecessary pages.
These settings ensure we’ll get all the info we need to answer our questions in one crawl:
List Mode: Paste in a de-duplicated Full URL list from your STAT data. In my case, this was 6,600+ URLs.
Database Storage Mode: It may be a bit slower than Memory (RAM) Storage, but saving your crawl results on your hard disk ensures you won’t lose your results if you make a mistake (like I have many times) and close your report before you finish analyzing the data.
Limit Crawl Depth: Set this to 0 (zero) so the spider will only crawl the URLs on your list without following internal links to other pages on those domains.
APIs: I highly recommend using the Pagespeed Insights Integration to pull Lighthouse speed metrics directly into your crawl data. If you have a Moz account with API access, you can also pull link and domain data from the Moz API with the built-in integration.
Once you have configured the spider, let it rip! It could take several minutes to several hours depending on how many URLs you’re crawling and your computer’s speed and memory constraints. Just be patient! You might try running larger crawls overnight or on an extra computer to avoid bogging your primary machine down.
Step 8: Export your Screaming Frog crawl data to Excel
Dumping your crawl data into Excel is remarkably easy.
Step 9: Join your data sets in Power BI
At this point, you should have two data sources in Excel: one for your STAT rankings data and another for your Screaming Frog crawl data. Our goal is to combine the two data sources to see how organic search rank may be influenced by on-page SEO elements and site performance. To do this, we must first merge the data.
If you have access to a Windows PC, the free version of Power BI is powerful enough to get you started. Begin by loading your two data sources into a new project using the Get Data wizard.
Once your data sets are loaded, it’s time to make the magic happen by creating relationships in your data to unlock correlations between rankings and site characteristics. To combine your data in Power BI, create a many-to-many relationship between your STAT Full URL and Screaming Frog Original URL fields. 
If you are new to BI tools and data visualization, don’t worry! There are lots of helpful tutorials and videos just a quick search away. At this point, it’s really hard to break anything and you can experiment with lots of ways to analyze your data and share insights with many types of charts and graphs.
I should note that Power BI is my preferred data visualization tool but you may be able to use Tableau or some equally powerful. Google Data Studio was not an option for this analysis because it only allows for left outer joins of the multiple data sources and does not support “many-to-many” relationships. It’s a technical way of saying Data Studio isn’t flexible enough to create the data relationships that we need.
Step 10: Analyze and visualize!
Power BI’s built-in visualizations allow you to quickly summarize and present data. This is where we can start analyzing the data to answer the questions we came up with earlier.
Results — what did we learn?
Here are a couple examples of the insights gleaned from merging our rankings and crawl data. Spoiler alert — CWV doesn’t strongly impact organic rankings….yet!
Who are banks actually competing against in the SERPs? Is it primarily other banks?
On desktops, about 67% of organic search results belong to financial institutions (banks and credit unions) with heavy competition from reviews sites (7%) and online directories (22%). This information helps shape our SEO strategies for banks by exposing opportunities to monitor and maintain listings in relevant directories and reviews sites.
Okay, now let’s mash up our data sources to see how the distribution of website categories varies by rank on desktop devices. Suddenly, we can see that financial institutions actually occupy the majority of the top 3 results while reviews sites and directories are more prevalent in positions 4-10.
How important are Core Web Vitals (CWV) for rankings? How does this change over time?
Site performance and site speed are hot topics in SEO and will only become more important as CWV becomes a ranking signal in May this year. We can begin to understand the relationships between site speed and rankings by comparing STAT rankings and Pagespeed Insights data from Screaming Frog reports.
As of January 2021, sites with higher Lighthouse Performance Scores (i.e. they load faster) tend to rank better than sites with lower scores. This could help justify investments in site speed and site performance.
Some CWV elements correlate more closely with better rankings and others are more scattered. This isn’t to say CWV aren’t important or meaningful, but rather it’s a starting point for further analysis after May.
So what? What can we learn from this type of analysis?
Separately, STAT and Screaming Frog are incredibly powerful SEO tools. The data they provide are useful if you happen to be an SEO but the ability to merge data and extract relationships will multiply your value in any organization that values data, and acts on insights.
Besides validating some generally accepted SEO knowledge with data (“faster sites are rewarded with better rankings”), better use of relational data can also help us avoid spending valuable time on less important tactics (“improve Cumulative Layout Shift at all costs!”).
Of course, correlation does not imply causation, and aggregated data does not guarantee an outcome for individual sites. But if you’re a bank marketing professional responsible for customer acquisition from organic channels, you’ll need to bring this type of data to your stakeholders to justify increased investments in SEO.
By sharing the tools and methodology, I hope others will take it further by building and contributing their additional findings to the SEO community. What other datasets can we combine to deepen our understanding of SERPs on a larger scale? Let me know your thoughts in the comments!
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
xaydungtruonggia · 4 years
Text
10 Steps to Blend STAT Ranking Data with Site Performance Metrics
Posted by AndrewMiller
Too often, we assume that SEO best practices will work in any industry against any competitive set. But most best practices go untested and may not be “best” in every situation.
We all know that tactics that worked in 2020 won’t necessarily move the needle in 2021 as Core Web Vitals (CWV) and other signals shuffle to the front. We have to do better for our businesses and our clients.
I’m a data nerd at heart with lots of battle scars from 15 years in SEO. The idea of analyzing thousands of local SERPs sounded like too much fun to pass up. I found some surprising correlations, and just as importantly, built a methodology and data set that can be updated quarterly to show changes over time.
I analyzed 50,000+ SERPs in the retail banking sector so I could make sense of the massive shifts in rankings and search behaviors during the lockdown period. We have a lot of historical data for bank websites, so comparing pre/post COVID data would be easier than starting from scratch.
I’ll share how I did it below. But first, I want to share WHY I think sharing this type of research is so important for the SEO community.
Why validate SEO best practices with data?
It’s a great time to be an SEO. We have amazing tools and can gather more data than ever. We have thriving communities and excellent basic training materials.
Yet, we often see our craft distilled into overly-simplified “best practices” that are assumed to be universally true. But if there’s one universal truth in SEO, it’s that there are no universal truths. A best practice can be misinterpreted or outdated, leading to missed opportunities or outright harm to a business.
Using the increasing importance of CWV as an example, SEOs have an opportunity (and obligation) to separate fact from fiction. We need to know if, and by how much, CWV will impact rankings over time so we can prioritize our efforts.
We can elevate our SEO game individually and collectively by testing and validating best practices with research. It just takes a curious mind, the right tools, and a willingness to accept the results rather than force a narrative.
Failing to validate best practices is a liability for SEO practitioners and shows an unwillingness to challenge assumptions. In my experience, a lack of data can lead to a senior stakeholders’ opinions carrying more weight than an SEO expert’s recommendations.
Start by asking the right questions
Real insight comes from combining data from multiple sources to answer critical questions and ensure your strategies are backed by valid data. In my analysis of local banks, I started by listing the questions I wanted to know the answers to:
What characteristics are shared by top-ranking local bank websites?
Who are banks actually competing against in the SERPs? Is it primarily other banks?
How do competitive SERPS change based on when/where/how users search?
How can smaller, local businesses gain an edge over larger competitors from outside their region?
How does SERP composition affect a bank’s ability to rank well for targeted keywords?
How important are Core Web Vitals (CWV) for rankings? How does this change over time?
You could run this same analysis by replacing “banks” with other local business categories. The list of potential questions is endless so you can adjust them based on your needs.
Here’s an important reminder - be prepared to accept the answers even if they are inconclusive or contradictory to your assumptions. Data-driven SEOs have to avoid confirmation bias if we’re going to remain objective.
Here’s how I analyzed 50,000 search results in a few hours
I combined three of my favorite tools to analyze SERPs at scale and gather the data needed to answer my questions:
STAT to generated ranking reports for select keywords
Screaming Frog to crawl websites and gather technical SEO data
Power BI to analyze the large data sets and create simple visualizations
Step 1: Determine your data needs
I used US Census Bureau data to identify all cities with populations over 100,000, because I wanted a representation of local bank SERPs across the country. My list ended up including 314 separate cities, but you could customize your list to suit your needs.
I also wanted to gather data for desktop and mobile searches to compare SERP differences between the device types.
Step 2: Identify your keywords
I chose “banks near me” and “banks in {city, st}” based on their strong local intent and high search volumes, compared to more specific keywords for banking services.
Step 3: Generate a STAT import file in .csv format
Once you have your keywords and market list, it’s time to prepare the bulk upload for STAT. Use the template provided in the link to create a .csv file with the following fields:
Project: The name of the new STAT project, or an existing project.
Folder: The name of the new folder, or an existing folder. (This is an optional column that you can leave blank.)
Site: The domain name for the site you want to track. Note, for our purposes you can enter any URL you want to track here. The Top 20 Report will include all ranking URLs for the target keywords even if they aren’t listed in your “Site” column.
Keyword: The search query you’re adding.
Tags: Enter as many keyword tags as you want, separated by commas. I used “city” and “near me” as tags to distinguish between the query types. (This is an optional column that you can leave blank.)
Market: Specify the market (country and language) in which you would like to track the keyword. I used “US-en” for US English.
Location: If you want to track the keyword in a specific location, specify the city, state, province, ZIP code, and/or postal code. I used the city and state list in “city, st” format.
Device: Select whether you would like Desktop or Smartphone results. I selected both.
Each market, location, and device type will multiply the number of keywords you must track. I ended up with 1,256 keywords (314 markets X 2 keywords X 2 devices) in my import file.
Once your file is complete, you can import to STAT and begin tracking.
Step 4: Run a Top 20 Report in STAT for all keywords
STAT’s built-in Google SERP Top 20 Comparison report captures the top 20 organic results from each SERP at different intervals (daily, weekly, monthly, etc.) to look at changes over time. I did not need daily data so I simply let it run on two consecutive days and removed the data I did not need. I re-run the same report quarterly to track changes over time.
Watch the video below to learn how to set up this report! 
My 1,256 keywords generated over 25,000 rows of data per day. Each row is a different organic listing and includes the keyword, monthly search volume, rank (includes the local pack), base rank (does not include the local pack), https/http protocol of the ranking URL, the ranking URL, and your tags.
Here’s an example of the raw output in CSV format:
It’s easy to see how useful this data is by itself but it becomes even more powerful when we clean it up and start crawling the ranking URLs.
Step 5: Clean up and normalize your STAT URLs data
At this point you may have invested 1-2 hours in gathering the initial data. This step is a bit more time consuming, but data cleansing allows you to run more advanced analysis and uncover more useful insights in Screaming Frog.
Here are the changes I made to the STAT rankings data to prepare for the next steps in Screaming Frog and Power BI. You’ll end up with multiple columns of URLs. Each serves a purpose later.
Duplicate the Ranking URL column to a new column called Normalized URL.
Remove URL parameters from the Normalized URL fields by using Excel’s text to columns tool and separating by “?”. I deleted the new columns(s) containing the URL parameters because they were not helpful in my analysis.
Duplicate the new, clean Normalized URL column to a new column called TLD. Use the text to columns tool on the TLD column and separate by “/” to remove everything except the domain name and subdomains. Delete the new columns. I chose to keep the subdomains in my TLD column but you can remove them if it helps your analysis.
Finally, create one more column called Full URL that will eventually become the list of URLs that you’ll crawl in Screaming Frog. To generate the Full URL, simply use Excel’s concatenate function to combine the Protocol and Normalized URL columns. Your formula will look something like this: =concatenate(A1, “://”, C1) to include the “://” in a valid URL string.
The 25,000+ rows in my data set are well within Excel’s limitations, so I am able to manipulate the data easily in one place. You may need to use a database (I like BigQuery) as your data sets grow.
Step 6: Categorize your SERP results by website type
Skimming through the SERP results, it’s easy to see that banks are not the only type of website that rank for keywords with local search intent. Since one of my initial questions was SERP composition, I had to identify all of the different types of websites and label each one for further analysis.
This step is by far the most time consuming and insightful. I spent 3 hours categorizing the initial batch of 25,000+ URLs into one of the following categories:
Institution (banks and credit union websites)
Directory (aggregators, local business directories, etc.)
Reviews (local and national sites like Yelp.com)
Education (content about banks on .edu domains)
Government (content about banks on .gov domains and municipal sites)
Jobs (careers sites and job aggregators)
News (local and national news sites with banking content)
Food Banks (yes, plenty of food banks rank for “banks near me” keywords)
Real Estate (commercial and residential real estate listings)
Search Engines (ranked content belonging to a search engine)
Social Media (ranked content on social media sites)
Other (completely random results not related to any of the above)
Your local SERPs will likely contain many of these website types and other unrelated categories such as food banks. Speed up the process by sorting and filtering your TLD and Normalized URL columns to categorize multiple rows simultaneously. For example, all the yelp.com rankings can be categorized as “Reviews” with a quick copy/paste.
At this point, your rankings data set is complete and you are ready to begin crawling the top-ranking sites in your industry to see what they have in common.
Step 7: Crawl your target websites with Screaming Frog
My initial STAT data identified over 6,600 unique pages from local bank websites that ranked in the top 20 organic search results. This is far too many pages to evaluate manually. Enter Screaming Frog, a crawler that mimics Google’s web crawler and extracts tons of SEO data from websites.
I configured Screaming Frog to crawl each of the 6,600 ranking pages for a larger analysis of characteristics shared by top-ranking bank websites. Don’t just let SF loose though. Be sure to configure it properly to save time and avoid crawling unnecessary pages.
These settings ensure we’ll get all the info we need to answer our questions in one crawl:
List Mode: Paste in a de-duplicated Full URL list from your STAT data. In my case, this was 6,600+ URLs.
Database Storage Mode: It may be a bit slower than Memory (RAM) Storage, but saving your crawl results on your hard disk ensures you won’t lose your results if you make a mistake (like I have many times) and close your report before you finish analyzing the data.
Limit Crawl Depth: Set this to 0 (zero) so the spider will only crawl the URLs on your list without following internal links to other pages on those domains.
APIs: I highly recommend using the Pagespeed Insights Integration to pull Lighthouse speed metrics directly into your crawl data. If you have a Moz account with API access, you can also pull link and domain data from the Moz API with the built-in integration.
Once you have configured the spider, let it rip! It could take several minutes to several hours depending on how many URLs you’re crawling and your computer’s speed and memory constraints. Just be patient! You might try running larger crawls overnight or on an extra computer to avoid bogging your primary machine down.
Step 8: Export your Screaming Frog crawl data to Excel
Dumping your crawl data into Excel is remarkably easy.
Step 9: Join your data sets in Power BI
At this point, you should have two data sources in Excel: one for your STAT rankings data and another for your Screaming Frog crawl data. Our goal is to combine the two data sources to see how organic search rank may be influenced by on-page SEO elements and site performance. To do this, we must first merge the data.
If you have access to a Windows PC, the free version of Power BI is powerful enough to get you started. Begin by loading your two data sources into a new project using the Get Data wizard.
Once your data sets are loaded, it’s time to make the magic happen by creating relationships in your data to unlock correlations between rankings and site characteristics. To combine your data in Power BI, create a many-to-many relationship between your STAT Full URL and Screaming Frog Original URL fields. 
If you are new to BI tools and data visualization, don’t worry! There are lots of helpful tutorials and videos just a quick search away. At this point, it’s really hard to break anything and you can experiment with lots of ways to analyze your data and share insights with many types of charts and graphs.
I should note that Power BI is my preferred data visualization tool but you may be able to use Tableau or some equally powerful. Google Data Studio was not an option for this analysis because it only allows for left outer joins of the multiple data sources and does not support “many-to-many” relationships. It’s a technical way of saying Data Studio isn’t flexible enough to create the data relationships that we need.
Step 10: Analyze and visualize!
Power BI’s built-in visualizations allow you to quickly summarize and present data. This is where we can start analyzing the data to answer the questions we came up with earlier.
Results — what did we learn?
Here are a couple examples of the insights gleaned from merging our rankings and crawl data. Spoiler alert — CWV doesn’t strongly impact organic rankings….yet!
Who are banks actually competing against in the SERPs? Is it primarily other banks?
On desktops, about 67% of organic search results belong to financial institutions (banks and credit unions) with heavy competition from reviews sites (7%) and online directories (22%). This information helps shape our SEO strategies for banks by exposing opportunities to monitor and maintain listings in relevant directories and reviews sites.
Okay, now let’s mash up our data sources to see how the distribution of website categories varies by rank on desktop devices. Suddenly, we can see that financial institutions actually occupy the majority of the top 3 results while reviews sites and directories are more prevalent in positions 4-10.
How important are Core Web Vitals (CWV) for rankings? How does this change over time?
Site performance and site speed are hot topics in SEO and will only become more important as CWV becomes a ranking signal in May this year. We can begin to understand the relationships between site speed and rankings by comparing STAT rankings and Pagespeed Insights data from Screaming Frog reports.
As of January 2021, sites with higher Lighthouse Performance Scores (i.e. they load faster) tend to rank better than sites with lower scores. This could help justify investments in site speed and site performance.
Some CWV elements correlate more closely with better rankings and others are more scattered. This isn’t to say CWV aren’t important or meaningful, but rather it’s a starting point for further analysis after May.
So what? What can we learn from this type of analysis?
Separately, STAT and Screaming Frog are incredibly powerful SEO tools. The data they provide are useful if you happen to be an SEO but the ability to merge data and extract relationships will multiply your value in any organization that values data, and acts on insights.
Besides validating some generally accepted SEO knowledge with data (“faster sites are rewarded with better rankings”), better use of relational data can also help us avoid spending valuable time on less important tactics (“improve Cumulative Layout Shift at all costs!”).
Of course, correlation does not imply causation, and aggregated data does not guarantee an outcome for individual sites. But if you’re a bank marketing professional responsible for customer acquisition from organic channels, you’ll need to bring this type of data to your stakeholders to justify increased investments in SEO.
By sharing the tools and methodology, I hope others will take it further by building and contributing their additional findings to the SEO community. What other datasets can we combine to deepen our understanding of SERPs on a larger scale? Let me know your thoughts in the comments!
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
kjt-lawyers · 4 years
Text
10 Steps to Blend STAT Ranking Data with Site Performance Metrics
Posted by AndrewMiller
Too often, we assume that SEO best practices will work in any industry against any competitive set. But most best practices go untested and may not be “best” in every situation.
We all know that tactics that worked in 2020 won’t necessarily move the needle in 2021 as Core Web Vitals (CWV) and other signals shuffle to the front. We have to do better for our businesses and our clients.
I’m a data nerd at heart with lots of battle scars from 15 years in SEO. The idea of analyzing thousands of local SERPs sounded like too much fun to pass up. I found some surprising correlations, and just as importantly, built a methodology and data set that can be updated quarterly to show changes over time.
I analyzed 50,000+ SERPs in the retail banking sector so I could make sense of the massive shifts in rankings and search behaviors during the lockdown period. We have a lot of historical data for bank websites, so comparing pre/post COVID data would be easier than starting from scratch.
I’ll share how I did it below. But first, I want to share WHY I think sharing this type of research is so important for the SEO community.
Why validate SEO best practices with data?
It’s a great time to be an SEO. We have amazing tools and can gather more data than ever. We have thriving communities and excellent basic training materials.
Yet, we often see our craft distilled into overly-simplified “best practices” that are assumed to be universally true. But if there’s one universal truth in SEO, it’s that there are no universal truths. A best practice can be misinterpreted or outdated, leading to missed opportunities or outright harm to a business.
Using the increasing importance of CWV as an example, SEOs have an opportunity (and obligation) to separate fact from fiction. We need to know if, and by how much, CWV will impact rankings over time so we can prioritize our efforts.
We can elevate our SEO game individually and collectively by testing and validating best practices with research. It just takes a curious mind, the right tools, and a willingness to accept the results rather than force a narrative.
Failing to validate best practices is a liability for SEO practitioners and shows an unwillingness to challenge assumptions. In my experience, a lack of data can lead to a senior stakeholders’ opinions carrying more weight than an SEO expert’s recommendations.
Start by asking the right questions
Real insight comes from combining data from multiple sources to answer critical questions and ensure your strategies are backed by valid data. In my analysis of local banks, I started by listing the questions I wanted to know the answers to:
What characteristics are shared by top-ranking local bank websites?
Who are banks actually competing against in the SERPs? Is it primarily other banks?
How do competitive SERPS change based on when/where/how users search?
How can smaller, local businesses gain an edge over larger competitors from outside their region?
How does SERP composition affect a bank’s ability to rank well for targeted keywords?
How important are Core Web Vitals (CWV) for rankings? How does this change over time?
You could run this same analysis by replacing “banks” with other local business categories. The list of potential questions is endless so you can adjust them based on your needs.
Here’s an important reminder - be prepared to accept the answers even if they are inconclusive or contradictory to your assumptions. Data-driven SEOs have to avoid confirmation bias if we’re going to remain objective.
Here’s how I analyzed 50,000 search results in a few hours
I combined three of my favorite tools to analyze SERPs at scale and gather the data needed to answer my questions:
STAT to generated ranking reports for select keywords
Screaming Frog to crawl websites and gather technical SEO data
Power BI to analyze the large data sets and create simple visualizations
Step 1: Determine your data needs
I used US Census Bureau data to identify all cities with populations over 100,000, because I wanted a representation of local bank SERPs across the country. My list ended up including 314 separate cities, but you could customize your list to suit your needs.
I also wanted to gather data for desktop and mobile searches to compare SERP differences between the device types.
Step 2: Identify your keywords
I chose “banks near me” and “banks in {city, st}” based on their strong local intent and high search volumes, compared to more specific keywords for banking services.
Step 3: Generate a STAT import file in .csv format
Once you have your keywords and market list, it’s time to prepare the bulk upload for STAT. Use the template provided in the link to create a .csv file with the following fields:
Project: The name of the new STAT project, or an existing project.
Folder: The name of the new folder, or an existing folder. (This is an optional column that you can leave blank.)
Site: The domain name for the site you want to track. Note, for our purposes you can enter any URL you want to track here. The Top 20 Report will include all ranking URLs for the target keywords even if they aren’t listed in your “Site” column.
Keyword: The search query you’re adding.
Tags: Enter as many keyword tags as you want, separated by commas. I used “city” and “near me” as tags to distinguish between the query types. (This is an optional column that you can leave blank.)
Market: Specify the market (country and language) in which you would like to track the keyword. I used “US-en” for US English.
Location: If you want to track the keyword in a specific location, specify the city, state, province, ZIP code, and/or postal code. I used the city and state list in “city, st” format.
Device: Select whether you would like Desktop or Smartphone results. I selected both.
Each market, location, and device type will multiply the number of keywords you must track. I ended up with 1,256 keywords (314 markets X 2 keywords X 2 devices) in my import file.
Once your file is complete, you can import to STAT and begin tracking.
Step 4: Run a Top 20 Report in STAT for all keywords
STAT’s built-in Google SERP Top 20 Comparison report captures the top 20 organic results from each SERP at different intervals (daily, weekly, monthly, etc.) to look at changes over time. I didn’t need daily data, so I simply let it run on two consecutive days and removed the extra day. I re-run the same report quarterly to track changes over time.
My 1,256 keywords generated over 25,000 rows of data per day. Each row is a different organic listing and includes the keyword, monthly search volume, rank (includes the local pack), base rank (does not include the local pack), https/http protocol of the ranking URL, the ranking URL, and your tags.
Here’s an example of the raw output in CSV format:
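(The screenshot from the original post isn’t reproduced here. Purely to illustrate the column layout described above, a few rows with entirely made-up values might look like the excerpt below; the header names are my paraphrase of the fields, not necessarily STAT’s exact export labels.)

    Keyword,Search Volume,Rank,Base Rank,Protocol,Ranking URL,Tags
    banks in austin tx,2400,1,1,https,www.examplebank.com/locations/austin-tx,city
    banks in austin tx,2400,2,2,https,www.reviews-site.example/austin/banks,city
    banks near me,135000,5,3,https,www.directory.example/tx/austin/banks,near me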
It’s easy to see how useful this data is by itself but it becomes even more powerful when we clean it up and start crawling the ranking URLs.
Step 5: Clean up and normalize your STAT URLs data
At this point you may have invested 1-2 hours in gathering the initial data. This step is a bit more time consuming, but data cleansing allows you to run more advanced analysis and uncover more useful insights in Screaming Frog.
Here are the changes I made to the STAT rankings data to prepare for the next steps in Screaming Frog and Power BI. You’ll end up with multiple columns of URLs. Each serves a purpose later.
Duplicate the Ranking URL column to a new column called Normalized URL.
Remove URL parameters from the Normalized URL fields by using Excel’s text to columns tool and separating by “?”. I deleted the new column(s) containing the URL parameters because they were not helpful in my analysis.
Duplicate the new, clean Normalized URL column to a new column called TLD. Use the text to columns tool on the TLD column and separate by “/” to remove everything except the domain name and subdomains. Delete the new columns. I chose to keep the subdomains in my TLD column but you can remove them if it helps your analysis.
Finally, create one more column called Full URL that will eventually become the list of URLs that you’ll crawl in Screaming Frog. To generate the Full URL, simply use Excel’s CONCATENATE function to combine the Protocol and Normalized URL columns. Your formula will look something like this: =CONCATENATE(A1, "://", C1) to include the "://" in a valid URL string.
The 25,000+ rows in my data set are well within Excel’s limitations, so I am able to manipulate the data easily in one place. You may need to use a database (I like BigQuery) as your data sets grow.
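If Excel starts to feel tedious, the same cleanup can be scripted. Below is a rough pandas equivalent of the steps above; the file and column names are assumptions based on my descriptions rather than STAT’s exact export headers, so adjust them to match your data. It also writes out the de-duplicated URL list you’ll feed Screaming Frog in Step 7.

    import pandas as pd

    # Assumed file and column names; rename to match your actual STAT export.
    df = pd.read_csv("stat_top20_report.csv")

    # Normalized URL: drop anything after "?" (query-string parameters).
    df["Normalized URL"] = df["Ranking URL"].str.split("?").str[0]

    # TLD: keep only the host (domain plus any subdomains).
    df["TLD"] = df["Normalized URL"].str.split("/").str[0]

    # Full URL: recombine the protocol and normalized URL for crawling.
    df["Full URL"] = df["Protocol"] + "://" + df["Normalized URL"]

    df.to_csv("stat_rankings_clean.csv", index=False)

    # De-duplicated crawl list for Screaming Frog's List Mode (Step 7).
    df["Full URL"].drop_duplicates().to_csv("full_urls.txt", index=False, header=False)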
Step 6: Categorize your SERP results by website type
Skimming through the SERP results, it’s easy to see that banks are not the only type of website that ranks for keywords with local search intent. Since one of my initial questions was about SERP composition, I had to identify all of the different types of websites and label each one for further analysis.
This step is by far the most time consuming and insightful. I spent 3 hours categorizing the initial batch of 25,000+ URLs into one of the following categories:
Institution (banks and credit union websites)
Directory (aggregators, local business directories, etc.)
Reviews (local and national sites like Yelp.com)
Education (content about banks on .edu domains)
Government (content about banks on .gov domains and municipal sites)
Jobs (careers sites and job aggregators)
News (local and national news sites with banking content)
Food Banks (yes, plenty of food banks rank for “banks near me” keywords)
Real Estate (commercial and residential real estate listings)
Search Engines (ranked content belonging to a search engine)
Social Media (ranked content on social media sites)
Other (completely random results not related to any of the above)
Your local SERPs will likely contain many of these website types and other unrelated categories such as food banks. Speed up the process by sorting and filtering your TLD and Normalized URL columns to categorize multiple rows simultaneously. For example, all the yelp.com rankings can be categorized as “Reviews” with a quick copy/paste.
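A hybrid approach can speed this up further: label the obvious domains and TLD patterns programmatically, then hand-categorize whatever is left over. Here is a minimal sketch of that idea (the rules shown are only illustrative starting points, and file names carry over from the earlier sketch):

    import pandas as pd

    df = pd.read_csv("stat_rankings_clean.csv")

    def categorize(tld: str) -> str:
        # Obvious patterns first; everything else gets reviewed by hand.
        if tld.endswith(".gov"):
            return "Government"
        if tld.endswith(".edu"):
            return "Education"
        if "yelp.com" in tld:
            return "Reviews"
        return "Uncategorized"

    df["Category"] = df["TLD"].apply(categorize)
    df.to_csv("stat_rankings_categorized.csv", index=False)
    print(df["Category"].value_counts())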
At this point, your rankings data set is complete and you are ready to begin crawling the top-ranking sites in your industry to see what they have in common.
Step 7: Crawl your target websites with Screaming Frog
My initial STAT data identified over 6,600 unique pages from local bank websites that ranked in the top 20 organic search results. This is far too many pages to evaluate manually. Enter Screaming Frog, a crawler that mimics Google’s web crawler and extracts tons of SEO data from websites.
I configured Screaming Frog to crawl each of the 6,600 ranking pages for a larger analysis of characteristics shared by top-ranking bank websites. Don’t just let SF loose though. Be sure to configure it properly to save time and avoid crawling unnecessary pages.
These settings ensure we’ll get all the info we need to answer our questions in one crawl:
List Mode: Paste in a de-duplicated Full URL list from your STAT data. In my case, this was 6,600+ URLs.
Database Storage Mode: It may be a bit slower than Memory (RAM) Storage, but saving your crawl results on your hard disk ensures you won’t lose your results if you make a mistake (like I have many times) and close your report before you finish analyzing the data.
Limit Crawl Depth: Set this to 0 (zero) so the spider will only crawl the URLs on your list without following internal links to other pages on those domains.
APIs: I highly recommend using the Pagespeed Insights Integration to pull Lighthouse speed metrics directly into your crawl data. If you have a Moz account with API access, you can also pull link and domain data from the Moz API with the built-in integration.
Once you have configured the spider, let it rip! It could take several minutes to several hours depending on how many URLs you’re crawling and your computer’s speed and memory constraints. Just be patient! You might try running larger crawls overnight or on an extra computer to avoid bogging your primary machine down.
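If you would rather schedule the crawl (overnight, for example) than babysit the GUI, recent Screaming Frog releases also include a headless command-line mode. The sketch below is only an outline: the flag names reflect my reading of the CLI documentation, and the config file is one you would save from the GUI with the settings above, so verify both against your installed version before relying on it.

    import subprocess

    # All paths, and the saved .seospiderconfig file, are placeholders.
    subprocess.run(
        [
            "screamingfrogseospider",         # executable name differs by OS/install
            "--headless",
            "--crawl-list", "full_urls.txt",  # de-duplicated Full URL list from Step 5
            "--config", "bank_crawl.seospiderconfig",
            "--output-folder", "crawl_output",
            "--export-tabs", "Internal:All",
            "--save-crawl",
        ],
        check=True,
    )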
Step 8: Export your Screaming Frog crawl data to Excel
Dumping your crawl data into Excel is remarkably easy.
Step 9: Join your data sets in Power BI
At this point, you should have two data sources in Excel: one for your STAT rankings data and another for your Screaming Frog crawl data. Our goal is to combine the two data sources to see how organic search rank may be influenced by on-page SEO elements and site performance. To do this, we must first merge the data.
If you have access to a Windows PC, the free version of Power BI is powerful enough to get you started. Begin by loading your two data sources into a new project using the Get Data wizard.
Once your data sets are loaded, it’s time to make the magic happen by creating relationships in your data to unlock correlations between rankings and site characteristics. To combine your data in Power BI, create a many-to-many relationship between your STAT Full URL and Screaming Frog Original URL fields. 
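If you want to sanity-check the join outside of Power BI (or you don’t have a Windows machine handy), the equivalent merge is a few lines of pandas. File names carry over from the earlier sketches and are assumptions; the crawl export’s URL column may be labeled differently (for example "Address") depending on which report you exported.

    import pandas as pd

    rankings = pd.read_csv("stat_rankings_categorized.csv")
    crawl = pd.read_excel("screaming_frog_internal_all.xlsx")  # requires openpyxl for .xlsx

    # Same keys as the Power BI relationship; duplicate keys on either side
    # produce the many-to-many behavior automatically.
    merged = rankings.merge(
        crawl,
        how="left",
        left_on="Full URL",
        right_on="Original URL",  # rename if your export calls this column something else
    )
    merged.to_csv("rankings_with_crawl_data.csv", index=False)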
If you are new to BI tools and data visualization, don’t worry! There are lots of helpful tutorials and videos just a quick search away. At this point, it’s really hard to break anything and you can experiment with lots of ways to analyze your data and share insights with many types of charts and graphs.
I should note that Power BI is my preferred data visualization tool, but you could use Tableau or another equally powerful tool. Google Data Studio was not an option for this analysis because it only allows left outer joins of the multiple data sources and does not support “many-to-many” relationships. That’s a technical way of saying Data Studio isn’t flexible enough to create the data relationships that we need.
Step 10: Analyze and visualize!
Power BI’s built-in visualizations allow you to quickly summarize and present data. This is where we can start analyzing the data to answer the questions we came up with earlier.
Results — what did we learn?
Here are a couple of examples of the insights gleaned from merging our rankings and crawl data. Spoiler alert: CWV doesn’t strongly impact organic rankings... yet!
Who are banks actually competing against in the SERPs? Is it primarily other banks?
On desktops, about 67% of organic search results belong to financial institutions (banks and credit unions) with heavy competition from reviews sites (7%) and online directories (22%). This information helps shape our SEO strategies for banks by exposing opportunities to monitor and maintain listings in relevant directories and reviews sites.
Okay, now let’s mash up our data sources to see how the distribution of website categories varies by rank on desktop devices. Suddenly, we can see that financial institutions actually occupy the majority of the top 3 results while reviews sites and directories are more prevalent in positions 4-10.
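That desktop chart is essentially a crosstab of category share by position, which is easy to reproduce from the merged table (column names again carry over from the earlier sketches; how your export flags device may differ):

    import pandas as pd

    merged = pd.read_csv("rankings_with_crawl_data.csv")
    desktop = merged[merged["Device"] == "Desktop"]  # assumes a Device column exists

    # Percentage share of each website category at each base rank position.
    share_by_rank = (
        pd.crosstab(desktop["Base Rank"], desktop["Category"], normalize="index") * 100
    ).round(1)
    print(share_by_rank.head(10))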
How important are Core Web Vitals (CWV) for rankings? How does this change over time?
Site performance and site speed are hot topics in SEO and will only become more important as CWV becomes a ranking signal in May 2021. We can begin to understand the relationships between site speed and rankings by comparing STAT rankings and Pagespeed Insights data from Screaming Frog reports.
As of January 2021, sites with higher Lighthouse Performance Scores (i.e. they load faster) tend to rank better than sites with lower scores. This could help justify investments in site speed and site performance.
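A quick way to eyeball that relationship yourself is to average the Lighthouse score by position, or compute a rank correlation. The "Performance Score" column name assumes the PageSpeed Insights integration populated your crawl export; check the actual header in your data.

    import pandas as pd

    merged = pd.read_csv("rankings_with_crawl_data.csv")

    # Average Lighthouse performance score at each base rank position.
    print(merged.groupby("Base Rank")["Performance Score"].mean().round(1))

    # Spearman correlation: a negative value means faster pages tend to
    # occupy better (numerically lower) positions.
    print(merged["Base Rank"].corr(merged["Performance Score"], method="spearman"))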
Some CWV elements correlate more closely with better rankings and others are more scattered. This isn’t to say CWV aren’t important or meaningful, but rather it’s a starting point for further analysis after May.
So what? What can we learn from this type of analysis?
Separately, STAT and Screaming Frog are incredibly powerful SEO tools. The data they provide is useful on its own if you happen to be an SEO, but the ability to merge data sets and extract relationships will multiply your value in any organization that values data and acts on insights.
Besides validating some generally accepted SEO knowledge with data (“faster sites are rewarded with better rankings”), better use of relational data can also help us avoid spending valuable time on less important tactics (“improve Cumulative Layout Shift at all costs!”).
Of course, correlation does not imply causation, and aggregated data does not guarantee an outcome for individual sites. But if you’re a bank marketing professional responsible for customer acquisition from organic channels, you’ll need to bring this type of data to your stakeholders to justify increased investments in SEO.
By sharing the tools and methodology, I hope others will take it further by building and contributing their additional findings to the SEO community. What other datasets can we combine to deepen our understanding of SERPs on a larger scale? Let me know your thoughts in the comments!
0 notes
noithatotoaz · 4 years
Text
10 Steps to Blend STAT Ranking Data with Site Performance Metrics
Posted by AndrewMiller
Too often, we assume that SEO best practices will work in any industry against any competitive set. But most best practices go untested and may not be “best” in every situation.
We all know that tactics that worked in 2020 won’t necessarily move the needle in 2021 as Core Web Vitals (CWV) and other signals shuffle to the front. We have to do better for our businesses and our clients.
I’m a data nerd at heart with lots of battle scars from 15 years in SEO. The idea of analyzing thousands of local SERPs sounded like too much fun to pass up. I found some surprising correlations, and just as importantly, built a methodology and data set that can be updated quarterly to show changes over time.
I analyzed 50,000+ SERPs in the retail banking sector so I could make sense of the massive shifts in rankings and search behaviors during the lockdown period. We have a lot of historical data for bank websites, so comparing pre/post COVID data would be easier than starting from scratch.
I’ll share how I did it below. But first, I want to share WHY I think sharing this type of research is so important for the SEO community.
Why validate SEO best practices with data?
It’s a great time to be an SEO. We have amazing tools and can gather more data than ever. We have thriving communities and excellent basic training materials.
Yet, we often see our craft distilled into overly-simplified “best practices” that are assumed to be universally true. But if there’s one universal truth in SEO, it’s that there are no universal truths. A best practice can be misinterpreted or outdated, leading to missed opportunities or outright harm to a business.
Using the increasing importance of CWV as an example, SEOs have an opportunity (and obligation) to separate fact from fiction. We need to know if, and by how much, CWV will impact rankings over time so we can prioritize our efforts.
We can elevate our SEO game individually and collectively by testing and validating best practices with research. It just takes a curious mind, the right tools, and a willingness to accept the results rather than force a narrative.
Failing to validate best practices is a liability for SEO practitioners and shows an unwillingness to challenge assumptions. In my experience, a lack of data can lead to a senior stakeholders’ opinions carrying more weight than an SEO expert’s recommendations.
Start by asking the right questions
Real insight comes from combining data from multiple sources to answer critical questions and ensure your strategies are backed by valid data. In my analysis of local banks, I started by listing the questions I wanted to know the answers to:
What characteristics are shared by top-ranking local bank websites?
Who are banks actually competing against in the SERPs? Is it primarily other banks?
How do competitive SERPS change based on when/where/how users search?
How can smaller, local businesses gain an edge over larger competitors from outside their region?
How does SERP composition affect a bank’s ability to rank well for targeted keywords?
How important are Core Web Vitals (CWV) for rankings? How does this change over time?
You could run this same analysis by replacing “banks” with other local business categories. The list of potential questions is endless so you can adjust them based on your needs.
Here’s an important reminder - be prepared to accept the answers even if they are inconclusive or contradictory to your assumptions. Data-driven SEOs have to avoid confirmation bias if we’re going to remain objective.
Here’s how I analyzed 50,000 search results in a few hours
I combined three of my favorite tools to analyze SERPs at scale and gather the data needed to answer my questions:
STAT to generated ranking reports for select keywords
Screaming Frog to crawl websites and gather technical SEO data
Power BI to analyze the large data sets and create simple visualizations
Step 1: Determine your data needs
I used US Census Bureau data to identify all cities with populations over 100,000, because I wanted a representation of local bank SERPs across the country. My list ended up including 314 separate cities, but you could customize your list to suit your needs.
I also wanted to gather data for desktop and mobile searches to compare SERP differences between the device types.
Step 2: Identify your keywords
I chose “banks near me” and “banks in {city, st}” based on their strong local intent and high search volumes, compared to more specific keywords for banking services.
Step 3: Generate a STAT import file in .csv format
Once you have your keywords and market list, it’s time to prepare the bulk upload for STAT. Use the template provided in the link to create a .csv file with the following fields:
Project: The name of the new STAT project, or an existing project.
Folder: The name of the new folder, or an existing folder. (This is an optional column that you can leave blank.)
Site: The domain name for the site you want to track. Note, for our purposes you can enter any URL you want to track here. The Top 20 Report will include all ranking URLs for the target keywords even if they aren’t listed in your “Site” column.
Keyword: The search query you’re adding.
Tags: Enter as many keyword tags as you want, separated by commas. I used “city” and “near me” as tags to distinguish between the query types. (This is an optional column that you can leave blank.)
Market: Specify the market (country and language) in which you would like to track the keyword. I used “US-en” for US English.
Location: If you want to track the keyword in a specific location, specify the city, state, province, ZIP code, and/or postal code. I used the city and state list in “city, st” format.
Device: Select whether you would like Desktop or Smartphone results. I selected both.
Each market, location, and device type will multiply the number of keywords you must track. I ended up with 1,256 keywords (314 markets X 2 keywords X 2 devices) in my import file.
Once your file is complete, you can import to STAT and begin tracking.
Step 4: Run a Top 20 Report in STAT for all keywords
STAT’s built-in Google SERP Top 20 Comparison report captures the top 20 organic results from each SERP at different intervals (daily, weekly, monthly, etc.) to look at changes over time. I did not need daily data so I simply let it run on two consecutive days and removed the data I did not need. I re-run the same report quarterly to track changes over time.
Watch the video below to learn how to set up this report! 
My 1,256 keywords generated over 25,000 rows of data per day. Each row is a different organic listing and includes the keyword, monthly search volume, rank (includes the local pack), base rank (does not include the local pack), https/http protocol of the ranking URL, the ranking URL, and your tags.
Here’s an example of the raw output in CSV format:
It’s easy to see how useful this data is by itself but it becomes even more powerful when we clean it up and start crawling the ranking URLs.
Step 5: Clean up and normalize your STAT URLs data
At this point you may have invested 1-2 hours in gathering the initial data. This step is a bit more time consuming, but data cleansing allows you to run more advanced analysis and uncover more useful insights in Screaming Frog.
Here are the changes I made to the STAT rankings data to prepare for the next steps in Screaming Frog and Power BI. You’ll end up with multiple columns of URLs. Each serves a purpose later.
Duplicate the Ranking URL column to a new column called Normalized URL.
Remove URL parameters from the Normalized URL fields by using Excel’s text to columns tool and separating by “?”. I deleted the new columns(s) containing the URL parameters because they were not helpful in my analysis.
Duplicate the new, clean Normalized URL column to a new column called TLD. Use the text to columns tool on the TLD column and separate by “/” to remove everything except the domain name and subdomains. Delete the new columns. I chose to keep the subdomains in my TLD column but you can remove them if it helps your analysis.
Finally, create one more column called Full URL that will eventually become the list of URLs that you’ll crawl in Screaming Frog. To generate the Full URL, simply use Excel’s concatenate function to combine the Protocol and Normalized URL columns. Your formula will look something like this: =concatenate(A1, “://”, C1) to include the “://” in a valid URL string.
The 25,000+ rows in my data set are well within Excel’s limitations, so I am able to manipulate the data easily in one place. You may need to use a database (I like BigQuery) as your data sets grow.
Step 6: Categorize your SERP results by website type
Skimming through the SERP results, it’s easy to see that banks are not the only type of website that rank for keywords with local search intent. Since one of my initial questions was SERP composition, I had to identify all of the different types of websites and label each one for further analysis.
This step is by far the most time consuming and insightful. I spent 3 hours categorizing the initial batch of 25,000+ URLs into one of the following categories:
Institution (banks and credit union websites)
Directory (aggregators, local business directories, etc.)
Reviews (local and national sites like Yelp.com)
Education (content about banks on .edu domains)
Government (content about banks on .gov domains and municipal sites)
Jobs (careers sites and job aggregators)
News (local and national news sites with banking content)
Food Banks (yes, plenty of food banks rank for “banks near me” keywords)
Real Estate (commercial and residential real estate listings)
Search Engines (ranked content belonging to a search engine)
Social Media (ranked content on social media sites)
Other (completely random results not related to any of the above)
Your local SERPs will likely contain many of these website types and other unrelated categories such as food banks. Speed up the process by sorting and filtering your TLD and Normalized URL columns to categorize multiple rows simultaneously. For example, all the yelp.com rankings can be categorized as “Reviews” with a quick copy/paste.
At this point, your rankings data set is complete and you are ready to begin crawling the top-ranking sites in your industry to see what they have in common.
Step 7: Crawl your target websites with Screaming Frog
My initial STAT data identified over 6,600 unique pages from local bank websites that ranked in the top 20 organic search results. This is far too many pages to evaluate manually. Enter Screaming Frog, a crawler that mimics Google’s web crawler and extracts tons of SEO data from websites.
I configured Screaming Frog to crawl each of the 6,600 ranking pages for a larger analysis of characteristics shared by top-ranking bank websites. Don’t just let SF loose though. Be sure to configure it properly to save time and avoid crawling unnecessary pages.
These settings ensure we’ll get all the info we need to answer our questions in one crawl:
List Mode: Paste in a de-duplicated Full URL list from your STAT data. In my case, this was 6,600+ URLs.
Database Storage Mode: It may be a bit slower than Memory (RAM) Storage, but saving your crawl results on your hard disk ensures you won’t lose your results if you make a mistake (like I have many times) and close your report before you finish analyzing the data.
Limit Crawl Depth: Set this to 0 (zero) so the spider will only crawl the URLs on your list without following internal links to other pages on those domains.
APIs: I highly recommend using the Pagespeed Insights Integration to pull Lighthouse speed metrics directly into your crawl data. If you have a Moz account with API access, you can also pull link and domain data from the Moz API with the built-in integration.
Once you have configured the spider, let it rip! It could take several minutes to several hours depending on how many URLs you’re crawling and your computer’s speed and memory constraints. Just be patient! You might try running larger crawls overnight or on an extra computer to avoid bogging your primary machine down.
Step 8: Export your Screaming Frog crawl data to Excel
Dumping your crawl data into Excel is remarkably easy.
Step 9: Join your data sets in Power BI
At this point, you should have two data sources in Excel: one for your STAT rankings data and another for your Screaming Frog crawl data. Our goal is to combine the two data sources to see how organic search rank may be influenced by on-page SEO elements and site performance. To do this, we must first merge the data.
If you have access to a Windows PC, the free version of Power BI is powerful enough to get you started. Begin by loading your two data sources into a new project using the Get Data wizard.
Once your data sets are loaded, it’s time to make the magic happen by creating relationships in your data to unlock correlations between rankings and site characteristics. To combine your data in Power BI, create a many-to-many relationship between your STAT Full URL and Screaming Frog Original URL fields. 
If you are new to BI tools and data visualization, don’t worry! There are lots of helpful tutorials and videos just a quick search away. At this point, it’s really hard to break anything and you can experiment with lots of ways to analyze your data and share insights with many types of charts and graphs.
I should note that Power BI is my preferred data visualization tool but you may be able to use Tableau or some equally powerful. Google Data Studio was not an option for this analysis because it only allows for left outer joins of the multiple data sources and does not support “many-to-many” relationships. It’s a technical way of saying Data Studio isn’t flexible enough to create the data relationships that we need.
Step 10: Analyze and visualize!
Power BI’s built-in visualizations allow you to quickly summarize and present data. This is where we can start analyzing the data to answer the questions we came up with earlier.
Results — what did we learn?
Here are a couple examples of the insights gleaned from merging our rankings and crawl data. Spoiler alert — CWV doesn’t strongly impact organic rankings….yet!
Who are banks actually competing against in the SERPs? Is it primarily other banks?
On desktops, about 67% of organic search results belong to financial institutions (banks and credit unions) with heavy competition from reviews sites (7%) and online directories (22%). This information helps shape our SEO strategies for banks by exposing opportunities to monitor and maintain listings in relevant directories and reviews sites.
Okay, now let’s mash up our data sources to see how the distribution of website categories varies by rank on desktop devices. Suddenly, we can see that financial institutions actually occupy the majority of the top 3 results while reviews sites and directories are more prevalent in positions 4-10.
How important are Core Web Vitals (CWV) for rankings? How does this change over time?
Site performance and site speed are hot topics in SEO and will only become more important as CWV becomes a ranking signal in May this year. We can begin to understand the relationships between site speed and rankings by comparing STAT rankings and Pagespeed Insights data from Screaming Frog reports.
As of January 2021, sites with higher Lighthouse Performance Scores (i.e. they load faster) tend to rank better than sites with lower scores. This could help justify investments in site speed and site performance.
Some CWV elements correlate more closely with better rankings and others are more scattered. This isn’t to say CWV aren’t important or meaningful, but rather it’s a starting point for further analysis after May.
So what? What can we learn from this type of analysis?
Separately, STAT and Screaming Frog are incredibly powerful SEO tools. The data they provide are useful if you happen to be an SEO but the ability to merge data and extract relationships will multiply your value in any organization that values data, and acts on insights.
Besides validating some generally accepted SEO knowledge with data (“faster sites are rewarded with better rankings”), better use of relational data can also help us avoid spending valuable time on less important tactics (“improve Cumulative Layout Shift at all costs!”).
Of course, correlation does not imply causation, and aggregated data does not guarantee an outcome for individual sites. But if you’re a bank marketing professional responsible for customer acquisition from organic channels, you’ll need to bring this type of data to your stakeholders to justify increased investments in SEO.
By sharing the tools and methodology, I hope others will take it further by building and contributing their additional findings to the SEO community. What other datasets can we combine to deepen our understanding of SERPs on a larger scale? Let me know your thoughts in the comments!
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
thanhtuandoan89 · 4 years
Text
10 Steps to Blend STAT Ranking Data with Site Performance Metrics
Posted by AndrewMiller
Too often, we assume that SEO best practices will work in any industry against any competitive set. But most best practices go untested and may not be “best” in every situation.
We all know that tactics that worked in 2020 won’t necessarily move the needle in 2021 as Core Web Vitals (CWV) and other signals shuffle to the front. We have to do better for our businesses and our clients.
I’m a data nerd at heart with lots of battle scars from 15 years in SEO. The idea of analyzing thousands of local SERPs sounded like too much fun to pass up. I found some surprising correlations, and just as importantly, built a methodology and data set that can be updated quarterly to show changes over time.
I analyzed 50,000+ SERPs in the retail banking sector so I could make sense of the massive shifts in rankings and search behaviors during the lockdown period. We have a lot of historical data for bank websites, so comparing pre/post COVID data would be easier than starting from scratch.
I’ll share how I did it below. But first, I want to share WHY I think sharing this type of research is so important for the SEO community.
Why validate SEO best practices with data?
It’s a great time to be an SEO. We have amazing tools and can gather more data than ever. We have thriving communities and excellent basic training materials.
Yet, we often see our craft distilled into overly-simplified “best practices” that are assumed to be universally true. But if there’s one universal truth in SEO, it’s that there are no universal truths. A best practice can be misinterpreted or outdated, leading to missed opportunities or outright harm to a business.
Using the increasing importance of CWV as an example, SEOs have an opportunity (and obligation) to separate fact from fiction. We need to know if, and by how much, CWV will impact rankings over time so we can prioritize our efforts.
We can elevate our SEO game individually and collectively by testing and validating best practices with research. It just takes a curious mind, the right tools, and a willingness to accept the results rather than force a narrative.
Failing to validate best practices is a liability for SEO practitioners and shows an unwillingness to challenge assumptions. In my experience, a lack of data can lead to a senior stakeholders’ opinions carrying more weight than an SEO expert’s recommendations.
Start by asking the right questions
Real insight comes from combining data from multiple sources to answer critical questions and ensure your strategies are backed by valid data. In my analysis of local banks, I started by listing the questions I wanted to know the answers to:
What characteristics are shared by top-ranking local bank websites?
Who are banks actually competing against in the SERPs? Is it primarily other banks?
How do competitive SERPS change based on when/where/how users search?
How can smaller, local businesses gain an edge over larger competitors from outside their region?
How does SERP composition affect a bank’s ability to rank well for targeted keywords?
How important are Core Web Vitals (CWV) for rankings? How does this change over time?
You could run this same analysis by replacing “banks” with other local business categories. The list of potential questions is endless so you can adjust them based on your needs.
Here’s an important reminder - be prepared to accept the answers even if they are inconclusive or contradictory to your assumptions. Data-driven SEOs have to avoid confirmation bias if we’re going to remain objective.
Here’s how I analyzed 50,000 search results in a few hours
I combined three of my favorite tools to analyze SERPs at scale and gather the data needed to answer my questions:
STAT to generated ranking reports for select keywords
Screaming Frog to crawl websites and gather technical SEO data
Power BI to analyze the large data sets and create simple visualizations
Step 1: Determine your data needs
I used US Census Bureau data to identify all cities with populations over 100,000, because I wanted a representation of local bank SERPs across the country. My list ended up including 314 separate cities, but you could customize your list to suit your needs.
I also wanted to gather data for desktop and mobile searches to compare SERP differences between the device types.
Step 2: Identify your keywords
I chose “banks near me” and “banks in {city, st}” based on their strong local intent and high search volumes, compared to more specific keywords for banking services.
Step 3: Generate a STAT import file in .csv format
Once you have your keywords and market list, it’s time to prepare the bulk upload for STAT. Use the template provided in the link to create a .csv file with the following fields:
Project: The name of the new STAT project, or an existing project.
Folder: The name of the new folder, or an existing folder. (This is an optional column that you can leave blank.)
Site: The domain name for the site you want to track. Note, for our purposes you can enter any URL you want to track here. The Top 20 Report will include all ranking URLs for the target keywords even if they aren’t listed in your “Site” column.
Keyword: The search query you’re adding.
Tags: Enter as many keyword tags as you want, separated by commas. I used “city” and “near me” as tags to distinguish between the query types. (This is an optional column that you can leave blank.)
Market: Specify the market (country and language) in which you would like to track the keyword. I used “US-en” for US English.
Location: If you want to track the keyword in a specific location, specify the city, state, province, ZIP code, and/or postal code. I used the city and state list in “city, st” format.
Device: Select whether you would like Desktop or Smartphone results. I selected both.
Each market, location, and device type will multiply the number of keywords you must track. I ended up with 1,256 keywords (314 markets X 2 keywords X 2 devices) in my import file.
Once your file is complete, you can import to STAT and begin tracking.
Step 4: Run a Top 20 Report in STAT for all keywords
STAT’s built-in Google SERP Top 20 Comparison report captures the top 20 organic results from each SERP at different intervals (daily, weekly, monthly, etc.) to look at changes over time. I did not need daily data so I simply let it run on two consecutive days and removed the data I did not need. I re-run the same report quarterly to track changes over time.
Watch the video below to learn how to set up this report! 
My 1,256 keywords generated over 25,000 rows of data per day. Each row is a different organic listing and includes the keyword, monthly search volume, rank (includes the local pack), base rank (does not include the local pack), https/http protocol of the ranking URL, the ranking URL, and your tags.
Here’s an example of the raw output in CSV format:
It’s easy to see how useful this data is by itself but it becomes even more powerful when we clean it up and start crawling the ranking URLs.
Step 5: Clean up and normalize your STAT URLs data
At this point you may have invested 1-2 hours in gathering the initial data. This step is a bit more time consuming, but data cleansing allows you to run more advanced analysis and uncover more useful insights in Screaming Frog.
Here are the changes I made to the STAT rankings data to prepare for the next steps in Screaming Frog and Power BI. You’ll end up with multiple columns of URLs. Each serves a purpose later.
Duplicate the Ranking URL column to a new column called Normalized URL.
Remove URL parameters from the Normalized URL fields by using Excel’s text to columns tool and separating by “?”. I deleted the new columns(s) containing the URL parameters because they were not helpful in my analysis.
Duplicate the new, clean Normalized URL column to a new column called TLD. Use the text to columns tool on the TLD column and separate by “/” to remove everything except the domain name and subdomains. Delete the new columns. I chose to keep the subdomains in my TLD column but you can remove them if it helps your analysis.
Finally, create one more column called Full URL that will eventually become the list of URLs that you’ll crawl in Screaming Frog. To generate the Full URL, simply use Excel’s concatenate function to combine the Protocol and Normalized URL columns. Your formula will look something like this: =concatenate(A1, “://”, C1) to include the “://” in a valid URL string.
The 25,000+ rows in my data set are well within Excel’s limitations, so I am able to manipulate the data easily in one place. You may need to use a database (I like BigQuery) as your data sets grow.
Step 6: Categorize your SERP results by website type
Skimming through the SERP results, it’s easy to see that banks are not the only type of website that rank for keywords with local search intent. Since one of my initial questions was SERP composition, I had to identify all of the different types of websites and label each one for further analysis.
This step is by far the most time consuming and insightful. I spent 3 hours categorizing the initial batch of 25,000+ URLs into one of the following categories:
Institution (banks and credit union websites)
Directory (aggregators, local business directories, etc.)
Reviews (local and national sites like Yelp.com)
Education (content about banks on .edu domains)
Government (content about banks on .gov domains and municipal sites)
Jobs (careers sites and job aggregators)
News (local and national news sites with banking content)
Food Banks (yes, plenty of food banks rank for “banks near me” keywords)
Real Estate (commercial and residential real estate listings)
Search Engines (ranked content belonging to a search engine)
Social Media (ranked content on social media sites)
Other (completely random results not related to any of the above)
Your local SERPs will likely contain many of these website types and other unrelated categories such as food banks. Speed up the process by sorting and filtering your TLD and Normalized URL columns to categorize multiple rows simultaneously. For example, all the yelp.com rankings can be categorized as “Reviews” with a quick copy/paste.
At this point, your rankings data set is complete and you are ready to begin crawling the top-ranking sites in your industry to see what they have in common.
Step 7: Crawl your target websites with Screaming Frog
My initial STAT data identified over 6,600 unique pages from local bank websites that ranked in the top 20 organic search results. This is far too many pages to evaluate manually. Enter Screaming Frog, a crawler that mimics Google’s web crawler and extracts tons of SEO data from websites.
I configured Screaming Frog to crawl each of the 6,600 ranking pages for a larger analysis of characteristics shared by top-ranking bank websites. Don’t just let SF loose though. Be sure to configure it properly to save time and avoid crawling unnecessary pages.
These settings ensure we’ll get all the info we need to answer our questions in one crawl:
List Mode: Paste in a de-duplicated Full URL list from your STAT data. In my case, this was 6,600+ URLs.
Database Storage Mode: It may be a bit slower than Memory (RAM) Storage, but saving your crawl results on your hard disk ensures you won’t lose your results if you make a mistake (like I have many times) and close your report before you finish analyzing the data.
Limit Crawl Depth: Set this to 0 (zero) so the spider will only crawl the URLs on your list without following internal links to other pages on those domains.
APIs: I highly recommend using the Pagespeed Insights Integration to pull Lighthouse speed metrics directly into your crawl data. If you have a Moz account with API access, you can also pull link and domain data from the Moz API with the built-in integration.
Once you have configured the spider, let it rip! It could take several minutes to several hours depending on how many URLs you’re crawling and your computer’s speed and memory constraints. Just be patient! You might try running larger crawls overnight or on an extra computer to avoid bogging your primary machine down.
Step 8: Export your Screaming Frog crawl data to Excel
Dumping your crawl data into Excel is remarkably easy.
Step 9: Join your data sets in Power BI
At this point, you should have two data sources in Excel: one for your STAT rankings data and another for your Screaming Frog crawl data. Our goal is to combine the two data sources to see how organic search rank may be influenced by on-page SEO elements and site performance. To do this, we must first merge the data.
If you have access to a Windows PC, the free version of Power BI is powerful enough to get you started. Begin by loading your two data sources into a new project using the Get Data wizard.
Once your data sets are loaded, it’s time to make the magic happen by creating relationships in your data to unlock correlations between rankings and site characteristics. To combine your data in Power BI, create a many-to-many relationship between your STAT Full URL and Screaming Frog Original URL fields. 
If you are new to BI tools and data visualization, don’t worry! There are lots of helpful tutorials and videos just a quick search away. At this point, it’s really hard to break anything and you can experiment with lots of ways to analyze your data and share insights with many types of charts and graphs.
I should note that Power BI is my preferred data visualization tool but you may be able to use Tableau or some equally powerful. Google Data Studio was not an option for this analysis because it only allows for left outer joins of the multiple data sources and does not support “many-to-many” relationships. It’s a technical way of saying Data Studio isn’t flexible enough to create the data relationships that we need.
Step 10: Analyze and visualize!
Power BI’s built-in visualizations allow you to quickly summarize and present data. This is where we can start analyzing the data to answer the questions we came up with earlier.
Results — what did we learn?
Here are a couple examples of the insights gleaned from merging our rankings and crawl data. Spoiler alert — CWV doesn’t strongly impact organic rankings….yet!
Who are banks actually competing against in the SERPs? Is it primarily other banks?
On desktops, about 67% of organic search results belong to financial institutions (banks and credit unions) with heavy competition from reviews sites (7%) and online directories (22%). This information helps shape our SEO strategies for banks by exposing opportunities to monitor and maintain listings in relevant directories and reviews sites.
Okay, now let’s mash up our data sources to see how the distribution of website categories varies by rank on desktop devices. Suddenly, we can see that financial institutions actually occupy the majority of the top 3 results while reviews sites and directories are more prevalent in positions 4-10.
How important are Core Web Vitals (CWV) for rankings? How does this change over time?
Site performance and site speed are hot topics in SEO and will only become more important as CWV becomes a ranking signal in May this year. We can begin to understand the relationships between site speed and rankings by comparing STAT rankings and Pagespeed Insights data from Screaming Frog reports.
As of January 2021, sites with higher Lighthouse Performance Scores (i.e. they load faster) tend to rank better than sites with lower scores. This could help justify investments in site speed and site performance.
Some CWV elements correlate more closely with better rankings and others are more scattered. This isn’t to say CWV aren’t important or meaningful, but rather it’s a starting point for further analysis after May.
So what? What can we learn from this type of analysis?
Separately, STAT and Screaming Frog are incredibly powerful SEO tools. The data they provide are useful if you happen to be an SEO but the ability to merge data and extract relationships will multiply your value in any organization that values data, and acts on insights.
Besides validating some generally accepted SEO knowledge with data (“faster sites are rewarded with better rankings”), better use of relational data can also help us avoid spending valuable time on less important tactics (“improve Cumulative Layout Shift at all costs!”).
Of course, correlation does not imply causation, and aggregated data does not guarantee an outcome for individual sites. But if you’re a bank marketing professional responsible for customer acquisition from organic channels, you’ll need to bring this type of data to your stakeholders to justify increased investments in SEO.
By sharing the tools and methodology, I hope others will take it further by building and contributing their additional findings to the SEO community. What other datasets can we combine to deepen our understanding of SERPs on a larger scale? Let me know your thoughts in the comments!
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
drummcarpentry · 4 years
Text
10 Steps to Blend STAT Ranking Data with Site Performance Metrics
Posted by AndrewMiller
Too often, we assume that SEO best practices will work in any industry against any competitive set. But most best practices go untested and may not be “best” in every situation.
We all know that tactics that worked in 2020 won’t necessarily move the needle in 2021 as Core Web Vitals (CWV) and other signals shuffle to the front. We have to do better for our businesses and our clients.
I’m a data nerd at heart with lots of battle scars from 15 years in SEO. The idea of analyzing thousands of local SERPs sounded like too much fun to pass up. I found some surprising correlations, and just as importantly, built a methodology and data set that can be updated quarterly to show changes over time.
I analyzed 50,000+ SERPs in the retail banking sector so I could make sense of the massive shifts in rankings and search behaviors during the lockdown period. We have a lot of historical data for bank websites, so comparing pre/post COVID data would be easier than starting from scratch.
I’ll share how I did it below. But first, I want to share WHY I think sharing this type of research is so important for the SEO community.
Why validate SEO best practices with data?
It’s a great time to be an SEO. We have amazing tools and can gather more data than ever. We have thriving communities and excellent basic training materials.
Yet, we often see our craft distilled into overly-simplified “best practices” that are assumed to be universally true. But if there’s one universal truth in SEO, it’s that there are no universal truths. A best practice can be misinterpreted or outdated, leading to missed opportunities or outright harm to a business.
Using the increasing importance of CWV as an example, SEOs have an opportunity (and obligation) to separate fact from fiction. We need to know if, and by how much, CWV will impact rankings over time so we can prioritize our efforts.
We can elevate our SEO game individually and collectively by testing and validating best practices with research. It just takes a curious mind, the right tools, and a willingness to accept the results rather than force a narrative.
Failing to validate best practices is a liability for SEO practitioners and shows an unwillingness to challenge assumptions. In my experience, a lack of data can lead to a senior stakeholders’ opinions carrying more weight than an SEO expert’s recommendations.
Start by asking the right questions
Real insight comes from combining data from multiple sources to answer critical questions and ensure your strategies are backed by valid data. In my analysis of local banks, I started by listing the questions I wanted to know the answers to:
What characteristics are shared by top-ranking local bank websites?
Who are banks actually competing against in the SERPs? Is it primarily other banks?
How do competitive SERPS change based on when/where/how users search?
How can smaller, local businesses gain an edge over larger competitors from outside their region?
How does SERP composition affect a bank’s ability to rank well for targeted keywords?
How important are Core Web Vitals (CWV) for rankings? How does this change over time?
You could run this same analysis by replacing “banks” with other local business categories. The list of potential questions is endless so you can adjust them based on your needs.
Here’s an important reminder - be prepared to accept the answers even if they are inconclusive or contradictory to your assumptions. Data-driven SEOs have to avoid confirmation bias if we’re going to remain objective.
Here’s how I analyzed 50,000 search results in a few hours
I combined three of my favorite tools to analyze SERPs at scale and gather the data needed to answer my questions:
STAT to generated ranking reports for select keywords
Screaming Frog to crawl websites and gather technical SEO data
Power BI to analyze the large data sets and create simple visualizations
Step 1: Determine your data needs
I used US Census Bureau data to identify all cities with populations over 100,000, because I wanted a representation of local bank SERPs across the country. My list ended up including 314 separate cities, but you could customize your list to suit your needs.
I also wanted to gather data for desktop and mobile searches to compare SERP differences between the device types.
Step 2: Identify your keywords
I chose “banks near me” and “banks in {city, st}” based on their strong local intent and high search volumes, compared to more specific keywords for banking services.
Step 3: Generate a STAT import file in .csv format
Once you have your keywords and market list, it’s time to prepare the bulk upload for STAT. Use the template provided in the link to create a .csv file with the following fields:
Project: The name of the new STAT project, or an existing project.
Folder: The name of the new folder, or an existing folder. (This is an optional column that you can leave blank.)
Site: The domain name for the site you want to track. Note, for our purposes you can enter any URL you want to track here. The Top 20 Report will include all ranking URLs for the target keywords even if they aren’t listed in your “Site” column.
Keyword: The search query you’re adding.
Tags: Enter as many keyword tags as you want, separated by commas. I used “city” and “near me” as tags to distinguish between the query types. (This is an optional column that you can leave blank.)
Market: Specify the market (country and language) in which you would like to track the keyword. I used “US-en” for US English.
Location: If you want to track the keyword in a specific location, specify the city, state, province, ZIP code, and/or postal code. I used the city and state list in “city, st” format.
Device: Select whether you would like Desktop or Smartphone results. I selected both.
Each market, location, and device type will multiply the number of keywords you must track. I ended up with 1,256 keywords (314 markets X 2 keywords X 2 devices) in my import file.
Once your file is complete, you can import to STAT and begin tracking.
Step 4: Run a Top 20 Report in STAT for all keywords
STAT’s built-in Google SERP Top 20 Comparison report captures the top 20 organic results from each SERP at different intervals (daily, weekly, monthly, etc.) to look at changes over time. I did not need daily data so I simply let it run on two consecutive days and removed the data I did not need. I re-run the same report quarterly to track changes over time.
Watch the video below to learn how to set up this report! 
My 1,256 keywords generated over 25,000 rows of data per day. Each row is a different organic listing and includes the keyword, monthly search volume, rank (includes the local pack), base rank (does not include the local pack), https/http protocol of the ranking URL, the ranking URL, and your tags.
Here’s an example of the raw output in CSV format:
It’s easy to see how useful this data is by itself but it becomes even more powerful when we clean it up and start crawling the ranking URLs.
Step 5: Clean up and normalize your STAT URLs data
At this point you may have invested 1-2 hours in gathering the initial data. This step is a bit more time consuming, but data cleansing allows you to run more advanced analysis and uncover more useful insights in Screaming Frog.
Here are the changes I made to the STAT rankings data to prepare for the next steps in Screaming Frog and Power BI. You’ll end up with multiple columns of URLs. Each serves a purpose later.
Duplicate the Ranking URL column to a new column called Normalized URL.
Remove URL parameters from the Normalized URL fields by using Excel’s text to columns tool and separating by “?”. I deleted the new columns(s) containing the URL parameters because they were not helpful in my analysis.
Duplicate the new, clean Normalized URL column to a new column called TLD. Use the text to columns tool on the TLD column and separate by “/” to remove everything except the domain name and subdomains. Delete the new columns. I chose to keep the subdomains in my TLD column but you can remove them if it helps your analysis.
Finally, create one more column called Full URL that will eventually become the list of URLs that you’ll crawl in Screaming Frog. To generate the Full URL, simply use Excel’s concatenate function to combine the Protocol and Normalized URL columns. Your formula will look something like this: =concatenate(A1, “://”, C1) to include the “://” in a valid URL string.
The 25,000+ rows in my data set are well within Excel’s limitations, so I am able to manipulate the data easily in one place. You may need to use a database (I like BigQuery) as your data sets grow.
Step 6: Categorize your SERP results by website type
Skimming through the SERP results, it’s easy to see that banks are not the only type of website that rank for keywords with local search intent. Since one of my initial questions was SERP composition, I had to identify all of the different types of websites and label each one for further analysis.
This step is by far the most time consuming and insightful. I spent 3 hours categorizing the initial batch of 25,000+ URLs into one of the following categories:
Institution (banks and credit union websites)
Directory (aggregators, local business directories, etc.)
Reviews (local and national sites like Yelp.com)
Education (content about banks on .edu domains)
Government (content about banks on .gov domains and municipal sites)
Jobs (careers sites and job aggregators)
News (local and national news sites with banking content)
Food Banks (yes, plenty of food banks rank for “banks near me” keywords)
Real Estate (commercial and residential real estate listings)
Search Engines (ranked content belonging to a search engine)
Social Media (ranked content on social media sites)
Other (completely random results not related to any of the above)
Your local SERPs will likely contain many of these website types and other unrelated categories such as food banks. Speed up the process by sorting and filtering your TLD and Normalized URL columns to categorize multiple rows simultaneously. For example, all the yelp.com rankings can be categorized as “Reviews” with a quick copy/paste.
At this point, your rankings data set is complete and you are ready to begin crawling the top-ranking sites in your industry to see what they have in common.
Step 7: Crawl your target websites with Screaming Frog
My initial STAT data identified over 6,600 unique pages from local bank websites that ranked in the top 20 organic search results. This is far too many pages to evaluate manually. Enter Screaming Frog, a crawler that mimics Google’s web crawler and extracts tons of SEO data from websites.
I configured Screaming Frog to crawl each of the 6,600 ranking pages for a larger analysis of characteristics shared by top-ranking bank websites. Don’t just let SF loose though. Be sure to configure it properly to save time and avoid crawling unnecessary pages.
These settings ensure we’ll get all the info we need to answer our questions in one crawl:
List Mode: Paste in a de-duplicated Full URL list from your STAT data (see the sketch after this list for one way to generate it). In my case, this was 6,600+ URLs.
Database Storage Mode: It may be a bit slower than Memory (RAM) Storage, but saving your crawl results on your hard disk ensures you won’t lose your results if you make a mistake (like I have many times) and close your report before you finish analyzing the data.
Limit Crawl Depth: Set this to 0 (zero) so the spider will only crawl the URLs on your list without following internal links to other pages on those domains.
APIs: I highly recommend using the Pagespeed Insights Integration to pull Lighthouse speed metrics directly into your crawl data. If you have a Moz account with API access, you can also pull link and domain data from the Moz API with the built-in integration.
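Here is a small sketch for producing the de-duplicated Full URL list mentioned in the List Mode setting above. The input and output file names are assumptions carried over from the earlier sketches.

```python
import pandas as pd

# Assumed file name from the earlier sketches; use your own prepped export.
df = pd.read_csv("stat_rankings_categorized.csv")

# De-duplicate the Full URL column and write one URL per line,
# ready to paste (or upload) into Screaming Frog's List Mode.
urls = df["Full URL"].dropna().drop_duplicates()
urls.to_csv("crawl_list.txt", index=False, header=False)

print(f"{len(urls)} unique URLs written to crawl_list.txt")
```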
Once you have configured the spider, let it rip! It could take several minutes to several hours depending on how many URLs you’re crawling and your computer’s speed and memory constraints. Just be patient! You might try running larger crawls overnight or on an extra computer to avoid bogging your primary machine down.
Step 8: Export your Screaming Frog crawl data to Excel
Dumping your crawl data into Excel is remarkably easy: once the crawl finishes, export the Internal tab from Screaming Frog and save the file alongside your STAT rankings data.
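If you also want the crawl data available outside of Excel, the same export loads straight into pandas. The file name below is an assumption (the Internal tab export is commonly saved as internal_all.csv); use whatever name your export produces.

```python
import pandas as pd

# Assumed file name for the exported Internal tab; adjust to match your export.
crawl = pd.read_csv("internal_all.csv")

# Quick sanity checks before joining with the rankings data.
print(crawl.shape)
print(crawl.columns.tolist()[:15])
```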
Step 9: Join your data sets in Power BI
At this point, you should have two data sources in Excel: one for your STAT rankings data and another for your Screaming Frog crawl data. Our goal is to combine the two data sources to see how organic search rank may be influenced by on-page SEO elements and site performance. To do this, we must first merge the data.
If you have access to a Windows PC, the free version of Power BI is powerful enough to get you started. Begin by loading your two data sources into a new project using the Get Data wizard.
Once your data sets are loaded, it’s time to make the magic happen by creating relationships in your data to unlock correlations between rankings and site characteristics. To combine your data in Power BI, create a many-to-many relationship between your STAT Full URL and Screaming Frog Original URL fields. 
If you are new to BI tools and data visualization, don’t worry! There are lots of helpful tutorials and videos just a quick search away. At this point, it’s really hard to break anything and you can experiment with lots of ways to analyze your data and share insights with many types of charts and graphs.
I should note that Power BI is my preferred data visualization tool, but you may be able to use Tableau or another equally powerful tool. Google Data Studio was not an option for this analysis because it only allows left outer joins of the multiple data sources and does not support “many-to-many” relationships. That’s a technical way of saying Data Studio isn’t flexible enough to create the data relationships that we need.
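If your tool of choice can’t model that relationship, or you simply prefer working in code, a pandas merge on the same two fields produces an equivalent flat table. This is a sketch under a few assumptions: the file names carry over from the earlier snippets, and the join keys (Full URL from STAT, Original URL from the Screaming Frog export) are the field names used above; check your export’s header row in case your version labels the column differently (for example, Address).

```python
import pandas as pd

rankings = pd.read_csv("stat_rankings_categorized.csv")  # STAT data with the Full URL column
crawl = pd.read_csv("internal_all.csv")                  # Screaming Frog crawl export

# Left join so every ranking row keeps its crawl metrics where a match exists.
# This approximates the many-to-many relationship, since the same URL can rank
# for many keywords and appear once in the crawl.
merged = rankings.merge(
    crawl,
    how="left",
    left_on="Full URL",
    right_on="Original URL",  # may be "Address" in some exports; check the header row
)

merged.to_csv("rankings_with_crawl_data.csv", index=False)
```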
Step 10: Analyze and visualize!
Power BI’s built-in visualizations allow you to quickly summarize and present data. This is where we can start analyzing the data to answer the questions we came up with earlier.
Results — what did we learn?
Here are a couple of examples of the insights gleaned from merging our rankings and crawl data. Spoiler alert: CWV doesn’t strongly impact organic rankings... yet!
Who are banks actually competing against in the SERPs? Is it primarily other banks?
On desktops, about 67% of organic search results belong to financial institutions (banks and credit unions) with heavy competition from reviews sites (7%) and online directories (22%). This information helps shape our SEO strategies for banks by exposing opportunities to monitor and maintain listings in relevant directories and reviews sites.
Okay, now let’s mash up our data sources to see how the distribution of website categories varies by rank on desktop devices. Suddenly, we can see that financial institutions actually occupy the majority of the top 3 results while reviews sites and directories are more prevalent in positions 4-10.
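For reference, here is one way to reproduce that category-by-position breakdown outside of Power BI. The Category column comes from Step 6; the Rank column name is an assumption about the STAT export, so substitute whatever your ranking position field is called.

```python
import pandas as pd

merged = pd.read_csv("rankings_with_crawl_data.csv")

# Percentage share of each website category at each ranking position.
# "Rank" is an assumed column name for the organic position in the STAT data.
share_by_rank = (
    pd.crosstab(merged["Rank"], merged["Category"], normalize="index")
    .mul(100)
    .round(1)
)

print(share_by_rank.head(10))  # first 10 ranking positions
```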
How important are Core Web Vitals (CWV) for rankings? How does this change over time?
Site performance and site speed are hot topics in SEO and will only become more important as CWV becomes a ranking signal in May this year. We can begin to understand the relationships between site speed and rankings by comparing STAT rankings and Pagespeed Insights data from Screaming Frog reports.
As of January 2021, sites with higher Lighthouse Performance Scores (i.e. they load faster) tend to rank better than sites with lower scores. This could help justify investments in site speed and site performance.
Some CWV elements correlate more closely with better rankings and others are more scattered. This isn’t to say CWV aren’t important or meaningful, but rather it’s a starting point for further analysis after May.
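One way to put a number on “tend to rank better” is a rank correlation between the Lighthouse Performance Score (pulled in through the PageSpeed Insights integration) and the organic position. A minimal sketch; both column names are assumptions, so check the header row of your merged data before running it.

```python
import pandas as pd

merged = pd.read_csv("rankings_with_crawl_data.csv")

# Assumed column names: "Rank" from STAT, "Performance Score" from the
# PageSpeed Insights integration in Screaming Frog.
subset = merged[["Rank", "Performance Score"]].dropna()

# Spearman correlation suits the ordinal nature of ranking positions.
# A negative value means higher scores (faster pages) sit at better positions.
correlation = subset["Rank"].corr(subset["Performance Score"], method="spearman")
print(f"Spearman correlation between rank and performance score: {correlation:.3f}")
```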
So what? What can we learn from this type of analysis?
Separately, STAT and Screaming Frog are incredibly powerful SEO tools. The data they provide are useful if you happen to be an SEO, but the ability to merge data and extract relationships will multiply your value in any organization that values data and acts on insights.
Besides validating some generally accepted SEO knowledge with data (“faster sites are rewarded with better rankings”), better use of relational data can also help us avoid spending valuable time on less important tactics (“improve Cumulative Layout Shift at all costs!”).
Of course, correlation does not imply causation, and aggregated data does not guarantee an outcome for individual sites. But if you’re a bank marketing professional responsible for customer acquisition from organic channels, you’ll need to bring this type of data to your stakeholders to justify increased investments in SEO.
By sharing the tools and methodology, I hope others will take it further by building and contributing their additional findings to the SEO community. What other datasets can we combine to deepen our understanding of SERPs on a larger scale? Let me know your thoughts in the comments!