# I can link this version, but I think if you look on Yandex it's one of the first websites. Second server, etc.
Explore tagged Tumblr posts
lesbianraskolnikov · 1 month ago
Text
Tumblr media
marmeladov if he died today
This is more of an inside joke but it lives in my head. There's an upload of the 2024 adaptation that has really obnoxious intrusive ads and they're so funny. look
Tumblr media
It's a sports betting app but the icon just makes me think of a bitcoin. A really loud ad plays while he's lying on the ground dying. I really think people should watch the Olimpbet edition
22 notes · View notes
tyk-tyk-tyk · 5 years ago
Photo
Tumblr media Tumblr media Tumblr media
Achievement Theory - How’s Annie?
These are all of the achievements in Knock Knock. Most of the names are nothing special. Fun fact: Pagurian is literally a hermit crab, which I found to be an adorable add-on. I plan on focusing on other achievements in different posts.
But what I came here for was specifically for the achievement “How’s Annie?”
Minor gore and lots of rambling under the cut.
Now right off the bat, I need to clarify that I was a pretty wimpy kid. Growing up I was (and still kind of am) a chicken; I never watched any horror shows or movies. Something I ended up missing was Twin Peaks, a TV horror series that gained a surprising number of followers despite its cancellation.
Initially, I thought maybe the achievement was a nod to a possible name for the girl in the woods, but this educated me otherwise. It was relayed to me that it was a direct reference to Twin Peaks’ ending, with a link to the final scene. The series is over 30 years old, but I’ll warn you of spoilers anyways.
I’ll let this article do most of the talking for me. You don’t need to read the entire thing, but there’s important information that I would never have known. “It had been a long season, filled with humour and flailing and complete misdirection, by the time Kyle McLachlan’s Agent Cooper returned to the Red Room, a kind of purgatory in the thick woods outside. “I’ll see you again in 25 years,” a spirit who looked like Laura Palmer (Sheryl Lee) told him in that episode, which aired 25 years ago today. Cooper had gone into the Red Room to save his love interest Annie (Heather Graham) from remaining in the Black Lodge – the show’s version of hell – indefinitely. He succeeded, but not before being pursued through the Red Room’s maze by a doppelganger.”
- Michelle Dean, from her article on Twin Peaks
In the woods, being pursued by a doppelganger... Now, where have we heard of that before? I think the most mindblowing (but probably mundane to most) thing about this is that Knock Knock potentially has American influence with this. Twin Peaks was an American show made by Americans in America, Knock Knock is a Russian game made by Russians in Russia, specifically Moscow. Of course, according to an article in a previous post, Ice Pick Lodge's translator also happens to be one of their writers, so it isn't too far-fetched to say that influence can come from anywhere.
There’s still a bit more to look into. I had to do some interesting digging for this. Despite changing my language on steam, for some reason Achievements stay locked in English. When hovering over them on the store page, however, I can see that there’s original Russian titles for them. Thankfully, the internet exists, and like in English, websites exist in Russian that list out Steam achievements. I specifically consulted gamer-info.com
Plugging these into Yandex was fascinating.
For the most part, the achievements stayed the same, or are relatively the same through rough translation. I’ll cover all of the changes in other posts. Something that did not change was “How’s Annie?” In both English and Russian, the achievement is called “How’s Annie?”
Tumblr media
This is outstanding to me. I'm sure a Russian dub of Twin Peaks exists, but how popular is it in Russia? How confused were Russian audiences by this reference, and how easy was it for them to dig into it?
As a final send-off and reference, there exists concept art on Ice Pick Lodge's Kickstarter for Knock Knock, showing The Lodger with grey hair.
Tumblr media Tumblr media
Turns out the demon who possesses Cooper has grey hair as well.
Thanks for reading to the bottom, this was incredibly fun to dig for and I look forward to doing more in the future. Feel free to make commentary or add on if you have any more information. 
EDIT: Wow this is an OLD post I was so scatterbrained when making this. I made the title less misleading + updated a link. 
14 notes · View notes
endenogatai · 4 years ago
Text
Europe’s Android ‘choice’ screen keeps burying better options
It’s been over a year since Google began auctioning slots for a search engine ‘choice’ screen on Android in Europe, following a major antitrust intervention by the European Commission back in 2018. But despite the Commission hitting Google with a record-breaking fine over two years ago, almost nothing has changed.
The tech giant’s search marketshare remains undented and the most interesting regional search alternatives are being priced out of a Google-devised ‘remedy’ that favors those who can pay the most to be listed as an alternative to its own dominant search engine on smartphones running Google’s Android OS.
Quarterly choice screen winners have been getting increasingly same-y. Alternatives to Google are expecting another uninspiring batch of ‘winners’ to drop in short order.
The results for Q1 2021 were dominated by a bunch of ad-targeting search options few smartphone users would likely have heard of: Germany’s GMX; California-based info.com; and Puerto Rico-based PrivacyWall (which is owned by a company whose website is emblazoned with the slogan “100% programmatic advertising”) — plus a more familiar (ad)tech giant’s search engine, Microsoft-owned Bing.
Lower down the list: The Russian ‘Google’ — Yandex — which won eight slots. And a veteran player in the Czech search market, Seznam.cz, which bagged two.
On the ‘big loser’ side: Non-tracking search engine DuckDuckGo — which has been standing up for online privacy for over a decade yet won only one slot (in Belgium). It has come to be almost entirely squeezed out, versus winning a universal slot in all markets at the start of the auction process.
Tree-planting not-for-profit search engine, Ecosia, was almost entirely absent in the last round too: Gaining only one slot on the screen shown to Android users in Slovenia. Yet back in December Ecosia was added as a default search option with Safari on iOS, iPadOS and macOS — having grown its global usage to more than 15 million users.
Meanwhile France’s Qwant, another homegrown European search option with a privacy focus, went home with just one slot. And not in its home market, either (in tiny Luxembourg).
Google’s EU Android choice screen isn’t working say search rivals, calling for a joint process to devise a fair remedy
If Europe’s regulators had fondly imagined that a Google-devised ‘remedy’ for major antitrust breaches they identified would automagically restore thriving competition to the Android search market, they should feel rudely awakened indeed. The bald fact is Google’s marketshare has not even been scratched, let alone dented.
Statista data for Google’s search market share on mobile (across both Android and iOS; the latter where the tech giant pays Apple billions of dollars annually to be set as the default on iPhones) shows that in February 2021 its share in Europe stood at 97.07% — up from 96.92% in July 2018 when the Commission made the antitrust ruling.
Yes, Google has actually gained share running this ‘remedy’.
By any measure that’s a spectacular failure for EU competition enforcement — more than 2.5 years after its headline grabbing antitrust decision against Android.
Google gets slapped with $5BN EU fine for Android antitrust abuse
The Commission has also been promoting a goal of European tech sovereignty throughout the period Google has been running this auction. President Ursula von der Leyen links this overarching goal to her digital policy programming.
On the measure of tech sovereignty the Android choice screen must be seen as a sizeable failure too — as it’s not only failed to support (most) homegrown alternatives to Google (another, Cliqz, pulled the plug on its search+browser effort entirely last year, putting part of the blame on the region’s political stakeholders for failing to understand the need for Europe to own its own digital infrastructure) — but it’s actively burying the most interesting European alternatives by forcing them to compete against a bunch of ad-funded Google clones.
(And if Brave Search takes off it’ll be another non-European alternative — albeit, one that will have benefitted from expertise and tech that was made-in-Europe… )
Brave is launching its own search engine with the help of ex-Cliqz devs and tech
This is because the auction mechanism means only companies that pay Google the most can buy themselves a chance at being set as a default option on Android.
Even in the rare instances where European players shell out enough money to appear in the choice list (which likely means they’ll be losing money per search click) they most often do so alongside other non-European alternatives and Google — further raising the competitive bar for selection.
It doesn’t have to be this way. Nor was it this way initially: Google started with a choice screen based on marketshare.
However it very quickly switched to a pay-to-play model — throttling at a stroke the discoverability of alternative business models that aren’t based on exploiting user data (or, indeed, aren’t profit-driven in Ecosia’s case; as it uses ad-generated revenue to fund tree planting with a purely environmental goal).
Such alternatives say they typically can’t afford to win Google’s choice screen auctions. (It’s worth noting that those who do participate in the game are restricted in what they can say as Google requires they sign an NDA.)
Clearly, it’s no coincidence that the winners of Google’s auction skew almost entirely to the track-and-target side of the tracks, where its own business sits; all data-exploiting business models banded together. And then, from a consumer point of view, why would you not pick Google with such a poorly and artificially limited ‘choice’ on offer — since you’re generally only being offered weaker versions of the same thing?
Ecosia tells TechCrunch it’s now considering pulling out of the auction process altogether — which would be a return to its first instinct, which was to boycott the auction before deciding it felt it had to participate. A few months playing Google’s pay-to-play ‘no choice’ (as Ecosia dubs the auction) game has cemented its view that the system is stacked against genuine alternatives.
Google’s ‘no choice’ screen on Android isn’t working, says Ecosia — querying the EU’s approach to antitrust enforcement
Over the two auction rounds in which Ecosia ended up winning just the one slot each time, it says it’s seen no positive effect on user numbers. A decision on whether or not to withdraw entirely will be taken after the results of the next auction process are revealed, it said. (The next round of results is expected shortly, in early March.)
“We definitely realized it’s less and less ‘fun’ to play the game,” Ecosia founder Christian Kroll told us. “It’s a super unfair game — where it’s not only ‘David against Goliath’ but also Goliath gets to choose the rules, gets a free ticket, he can change the rules of game if he likes to. So it’s not amusing for us to participate in that.
“We’ve been participating now for nine months and if you look at overall marketshare in Europe nothing has changed. We don’t know the results yet of this round but I assume also nothing will change — the usual suspects will be there again… Most of the options that you see there now are not interesting to users.”
“Calling it a ‘choice’ screen is still a little bit ironic if you remove all the interesting choices from the screen. So the situation is still the same and it becomes less and less fun to play the game and at some point I think we might make the decision that we’re not going to be part of the game anymore,” he added.
Other alternative search engines we spoke to are continuing to participate for now — but all were critical of Google’s ‘pay-to-play’ model for the Android ‘choice screen’.
DuckDuckGo founder, Gabriel Weinberg, told us: “We are bidding, but only to help further expose to the European Commission how flawed Google’s rigged process really is, in hopes they will help more actively take a role in reforming it into something that actually works for consumers. Due to our strict privacy policy, we expect to be eliminated, same as last time.”
He pointed to a blog post the company put out last fall, denouncing the “fundamentally flawed” auction model — and saying that “whole piece still stands”. In the blog post the company wrote that despite being profitable since 2014 “we have been priced out of this auction because we choose to not maximize our profits by exploiting our users”.
“In practical terms, this means our commitment to privacy and a cleaner search experience translates into less money per search. This means we must bid less relative to other, profit-maximizing companies,” DuckDuckGo went on, adding: “This EU antitrust remedy is only serving to further strengthen Google’s dominance in mobile search by boxing out alternative search engines that consumers want to use and, for those search engines that remain, taking most of their profits from the preference menu.”
“This auction format incentivizes bidders to bid what they can expect to profit per user selection. The long-term result is that the participating Google alternatives must give most of their preference menu profits to Google! Google’s auction further incentivizes search engines to be worse on privacy, to increase ads, and to not donate to good causes, because, if they do those things, then they could afford to bid higher,” it also said then.
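DuckDuckGo's point, that bidders rationally bid up to what they expect to profit per user selection, is easy to see in a toy model. Here's a minimal sketch with invented numbers; it assumes a simple highest-bids-win, pay-per-selection format, since the real auction mechanics and bid amounts are under NDA:

```python
# Toy model of a pay-per-selection slot auction (invented numbers).
# Each bidder can rationally bid up to its expected profit per user
# who selects it, so higher-monetizing (ad-heavy) bidders can bid more.
bidders = {
    "ad-heavy clone A": 1.20,   # hypothetical expected profit per selection, EUR
    "ad-heavy clone B": 1.10,
    "privacy-focused":  0.40,   # less money per search, per DuckDuckGo's post
    "tree-planting":    0.15,   # most revenue donated, per Ecosia's model
}

slots = 3
winners = sorted(bidders, key=bidders.get, reverse=True)[:slots]
print("Choice screen:", winners)
# The winners hand (up to) their per-user profit over to Google, while
# the differentiated options are priced off the screen entirely.
```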
France’s Qwant has been similarly critical and it told us it is “extremely dissatisfied” with the auction — calling for “urgent modification” and saying the 2018 Commission decision should be fully respected “in text and in spirit”.
“We are extremely dissatisfied with the auction system. We are asking for an urgent modification of the Choice Screen to allow consumers to find the search engine they want to use and not just the three choices that are only the ones that pay the most Google. We demand full respect for the 2018 decision, in text and in spirit,” said CEO Jean-Claude Ghinozzi.
“We are reviewing all options and re-evaluating our decision on a quarterly basis. In any case, we want consumers to be able to freely choose the search engine they prefer, without being limited to the only three alternative choices sold by Google. Consumers’ interests must always come first,” he added.
Russia’s Yandex confirmed it has participated in the upcoming Q2 auction. But it was also critical of Google’s implementation, saying it falls short of offering a genuine “freedom of choice” to Android users.
“We aim to offer high-quality and convenient search engine around the world. We are confident that freedom to select a search engine will lead to greater market competition and motivate each player to improve services. We don’t think that the current EU solution fully ensures freedom of choice for users, by only covering devices released from March 2020,” a Yandex spokeswoman said.
“There are currently very few such devices on the EU market in comparison with the total number of devices in users’ hands. It is essential to provide the freedom of choice that is genuine and real. Competition among service providers ultimately benefits users who will receive a better product.”
One newcomer to the search space — the anti-tracking browser Brave (which, as we mentioned above, just bought up some Cliqz assets to underpin the forthcoming launch of an own-brand Brave Search) — confirmed it will not be joining in at all.
“Brave does not plan to participate in this auction. Brave is about putting the user first, and this bidding process ignores users’ best interests by limiting their choices and selecting only for highest Google Play Store optimizing bidders,” a spokeswoman said.
“An irony here is that Google gets to profit off its own remedy for being found guilty of anti-competitive tying of Chrome into Android,” she added.
Asked about its strategy to grow usage of Brave Search in the region — outside of participation in the Android choice screen — she said: “Brave already has localized browsers for the European market, and we will continue to grow by offering best-in-class privacy showcased in marketing campaigns and referrals programs.”
Google’s self-devised ‘remedy’ followed a 2018 antitrust decision by the Commission — which led to a record-breaking $5BN penalty and an order to cease a variety of infringing behaviors. The tech giant’s implementation remains under active monitoring by EU antitrust regulators. However Kroll argues the Commission is essentially just letting Google buy time rather than fix the abusive behavior it identified.
“The way I see this at the moment is the Commission feels like the auction screen isn’t necessarily something that they’ve requested as a remedy so they can’t really force Google to change it — and that’s why they also maybe don’t see it as their responsibility,” he said. “But at the same time they requested Google to solve the situation and Google isn’t doing anything.
“I think they are also allowing Google to get the credit from the press and also from users that it seems like Google is doing something — so they are allowing Google to play on time… I don’t know if a real choice screen would be a good solution but it’s also not for me to decide — it’s up to the European Commission to decide if Google has successfully remedied the damage… and has also compensated some of the damage that it’s done and I think that has not happened at all. We can see that in the [marketshare] numbers that basically still the same situation is happening.”
“The whole thing is designed to remove interesting options from the screen,” he also argued of Google’s current ‘remedy’. “This is how it’s ‘working’ and I’m of course disappointed that nobody is stepping in there. So we’re basically in this unfair game where we get beaten up by our competitors. And I would hope for some regulator to step in and say this is not how this should go. But this isn’t happening.
“At the moment our only choice is to hang in there but at the moment if we really see there is no effect and there’s also no chance that regulators will ever step in we still have the choice to completely withdraw and let Google have its fun but without us… We’re not only not getting anything out of the [current auction model] but we’re of course also investing into it. And there are also restrictions because of the NDA we’ve signed — and even those restrictions are a little bit of a pain. So we have all the negative effects and don’t get any benefits.”
While limited by NDA in what he can discuss about the costs involved with participating in the auction, Kroll suggested the winners are doing so at a loss — pursuing reach at the expense of revenue.
“If you look at the bids from the last rounds I think with those bids it would be difficult for us to make money — and so potentially others have lost money. And that’s exactly also how this auction is designed, or how most auctions are designed, is that the winners often lose money… so you have this winner’s curse where people overbid,” he said.
“This hasn’t happened to us — also because we’re super careful — and in the last round we won this wonderful slot in Slovenia. Which is a beautiful country but again it has no impact on our revenues and we didn’t expect that to happen. It’s just for us to basically participate in the game but not risk our financial health,” he added. “We know that our bids will likely not win so the financial risk [to Ecosia as it’s currently participating and mostly losing in the auction] is not that big but for the companies who actually win bids — for them it might be a different thing.”
Kroll points out that the auction model has allowed Google to continue harvesting marketshare while weakening its competitors.
“There are quite a few companies who can afford to lose money in search because they just need to build up marketshare — and Google is basically harvesting all that and at the same time weakening its competitors,” he argued. “Because competitors need to spend on this. And one element that — at least in the beginning when the auction started — that I didn’t even see was also that if you’re a real search company… then you’re building up a brand, you’re building up a product, you’re making all these investments and you have real users — and if you have those then, if there was really a choice screen, people would naturally choose you. But in this auction screen model you’re basically paying for users that you would have anyway.
“So it’s really putting those kind of companies at a disadvantage: DuckDuckGo, us, all kinds of companies who have a ‘real USP’. Also Lilo, potentially even Qwant as well if you have a more nationalist approach to search, basically. So all of those companies are put at an even bigger disadvantage. And that’s — I think — unfair.”
Since most winners of auction slots are, like Google, involved in surveillance capitalism — gathering data on search users to profit off of ad targeting — anyone who was banking on EU competition enforcement being able to act as a lever to crack open the dominant privacy-hostile business model of the web (and allow less abusive alternatives to get a foot in the door) must be sorely disappointed.
Better alternatives — that do not track consumers for ads; or, in the case of Ecosia, are on an entirely non-profit mission — are clearly being squeezed out.
The Commission can’t say it wasn’t warned: The moment the auction model was announced, Google’s rivals decried it as flawed, rigged, unfair and unsustainable — warning it would put them at a competitive disadvantage (exactly because they aren’t just cloning Google’s ‘track and target for ad profit’ model).
Nonetheless, the Commission has so far shown itself unwilling or unable to respond — despite making a big show of proposing major new rules for the largest platforms which it says are needed to ensure they play fair. That raises the question of why it isn’t better enforcing existing EU rules against tech giants like Google.
When we raised criticism of Google’s Android choice screen auction model with the Commission it sent us its standard set of talking points — writing that: “We have seen in the past that a choice screen can be an effective way to promote user choice”.
“The choice screen means that additional search providers are presented to users on start-up of every new Android device in every EEA country. So users can now choose their search provider of preference when setting up their newly purchased Android devices,” it also said, adding that it is “committed to a full and effective implementation of the decision”.
“We are therefore monitoring closely the implementation of the choice screen mechanism,” it added — a standard line since Google began its ‘compliance’ with the 2018 EU decision.
In a slight development, the Commission did also confirm it has had discussions with Google about the choice screen mechanism — following what it described as “relevant feedback from the market”. 
It said these discussions focused on “the presentation and mechanics of the choice screen and to the selection mechanism of rival search providers”.
But with the clock ticking, and genuine alternatives to Google search being actively squeezed out of the market — leaving European consumers with no meaningful alternative to privacy-hostile search on Android — you do have to wonder what regulators are waiting for.
A pattern of reluctance to challenge tech giants where it counts seems to be emerging from Margrethe Vestager’s tenure at the helm of the competition department (and also, since 2019, a key shaper of EU digital policy).
Despite gaining a reputation for being willing to take on tech giants — and hitting Google (and others) with a number of headline-grabbing fines over the past five+ years — she cannot claim success in rebalancing the market for mobile search nor smartphone operating systems nor search ad brokering, in just the most recent Google cases.
Nonetheless she was content to green light Google’s acquisition of wearable maker Fitbit at the end of last year — despite a multitude of voices raised against allowing the tech giant to further entrench its dominance.
On that she argued defensively that concessions secured from Google would be sufficient to address concerns (such as a promise extracted from Google not to use Fitbit data for ads for at least ten years).
But, given her record on monitoring Google’s compliance with a whole flush of EU antitrust rulings, it’s hard to see why anyone other than Google should be confident in the Commission’s ability or willingness to enforce its own mandates against Google. Complaints against how Google operates, meanwhile, just keep stacking up.
“I think they are listening,” says Kroll of the Commission. “But what I am missing is action.”
Google’s EU Android choice screen isn’t working say search rivals, calling for a joint process to devise a fair remedy
Travel startups cry foul over what Google’s doing with their data
Act now before Google kills us, 135-strong coalition of startups warns EU antitrust chief
0 notes
thelmasirby32 · 5 years ago
Text
Subscription Fatigue
Subscription Management
I have active subscriptions with about a half-dozen different news & finance sites along with about a half dozen software tools, but sometimes using a VPN or web proxy across different web browsers makes logging in to all of them & clearing cookies for some paywall sites a real pain.
If you don't subscribe to any outlets then subscribing to an aggregator like Apple News+ can make a lot of sense, but it is very easy to end up with dozens of forgotten subscriptions.
Subscription fatigue is turning into subscription stress. Something alarming, guilt inducing about having 40+ reoccurring charges each month. Financial death by a thousand cuts.— Tom Goodwin (@tomfgoodwin) January 28, 2020
Winner-take-most Market Stratification
The news business is coming to resemble other tech-enabled businesses where a winner takes most. The New York Times stock, for instance, is trading at 15-year highs & they recently announced they are raising subscription prices:
The New York Times is raising the price of its digital subscription for the first time, from $15 every four weeks to $17 — from about $195 to $221 a year.
With a Trump re-election all but assured after the Russia, Russia, Russia garbage, the party-line impeachment (less private equity plunderer Mitt Romney) & the ridiculous Iowa primary, many NYT readers will pledge their #NeverTrumpTwice dollars with the New York Times.
If you think politics looks ridiculous today, wait until you see some of the China-related ads in a half-year as the novel coronavirus spreads around the world.
Outside of a few core winners, the news business online has been so brutal that even Warren Buffett is now a seller. As the economics get uglier news sites get more extreme with ad placements, user data sales, and pushing subscriptions. Some of these aggressive monetization efforts make otherwise respectable news outlets look like part of a very downmarket subset of the web.
Users Fight Back
Users have thus taken to blocking ads & are also starting to ramp up blocking of paywall notifications.
Some of the most popular browser extensions are ad blockers & tracking blockers like Adblock Plus, Ghostery & Privacy Badger.
Apple has made tracking their users across sites harder with their Intelligent Tracking Prevention, causing iPhone ad rates to plummet: "The allure of a Safari user in an auction has plummeted," Rubicon Project CEO Michael Barrett told the publication. "There's no easy ability to ID a user."
The Opera web browser comes with an ad blocker baked in.
Mozilla is also pushing to protect user privacy in Firefox.
Google recently announced they will stop supporting third party cookies in Chrome in the next couple years. Those who invested in adopting AMP will have to invest in making yet more technical changes to manage paywalls on AMP pages.
Each additional layer of technological complexity is another cost center publishers have to fund, often through making the user experience of their sites worse, which in turn makes their own sites less differentiated & inferior to the copies they have left across the web (via AMP, via Facebook Instant Articles, syndication in Apple News or on various portal sites like MSN or Yahoo!).
A Web Browser For Every Season
Google Chrome is spyware, so I won't recommend installing that.
Not good enough for you? Not a direct enough corollary? How about this? Also out today: https://t.co/6dUWCCEyii Google has a backdoor to track individual Chrome users by installation ID. Even GG's denial admits pieces of the same complaints y'all had about Jumpshot last week! pic.twitter.com/Km2mQfOgbJ — Rand Fishkin (@randfish) February 4, 2020
Here's Google's official guide on how to remove the spyware.
The easiest & most basic solution, which works across many sites using metered paywalls, is to have multiple web browsers installed on your computer. Keep a couple of browsers which are used exclusively for reading news articles that won't open in your main browser & set those web browsers to delete cookies on close. Or open the browsers in private mode and search for the URL of the page on Google to see if that allows access.
If you like Firefox there are other iterations from other players like Pale Moon, Comodo IceDragon or Waterfox using its core.
If you like Google Chrome then Chromium is the parallel version of it without the spyware baked in. The Chromium project is also the underlying source used to build about a dozen other web browsers including: Opera, Vivaldi, Brave, Cliqz, Blisk, Comodo Dragon, SRWare Iron, Yandex Browser & many others. Even Microsoft recently switched their Edge browser to being powered by the Chromium project. The browsers based on Chromium allow you to install extensions from the Chrome Web Store.
Some web browsers monetize users by setting affiliate links on the home screen and/or by selling the default search engine recommendation. You can change those once and they'll typically stick with whatever settings you use.
For browsers I use for regular day-to-day web use, I set them up to continue the session on restart, and I have a session manager plugin like this one for Firefox or this one for Chromium-based browsers. For browsers which are used exclusively for reading paywall-blocked articles, I set them up to clear cookies on restart.
Bypassing Paywalls
There are a couple solid web browser plugins built specifically for bypassing paywalls.
Academic Journals
Unpaywall is an open database of around 25,000,000 free scholarly articles. They provide extensions for Firefox and Chromium based web browsers on their website.
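For scripted lookups, Unpaywall also exposes a public REST API. Here's a minimal sketch, assuming the v2 endpoint keyed by DOI with a contact-email parameter (check their docs for the current shape; the DOI and email below are placeholders):

```python
# Minimal sketch of an Unpaywall API lookup (v2 endpoint, keyed by DOI).
import requests

doi = "10.1038/nature12373"    # placeholder DOI
email = "you@example.com"      # Unpaywall asks for a contact email

resp = requests.get(f"https://api.unpaywall.org/v2/{doi}",
                    params={"email": email}, timeout=10)
data = resp.json()

# best_oa_location is null when no free copy is known
loc = data.get("best_oa_location") or {}
print(loc.get("url_for_pdf") or "No free full text found.")
```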
News Articles
There is also one for news publications called Bypass Paywalls.
Mozilla Firefox: To install the Firefox version go here.
Chrome-like web browsers: To install the Chrome version of the extension in Opera, Chromium or Microsoft Edge, download the extension here, enter developer mode inside the extensions area of your web browser & install the extension. To turn developer mode on, open the browser's drop-down menu, click on extensions to go to the extension management area, and then slide the "Developer mode" toggle to the right so it turns blue.
Regional Blocking
If you travel internationally some websites like YouTube or Twitter or news sites will have portions of their content restricted to only showing in some geographic regions. This can be especially true for new sports content and some music.
These can be bypassed by using a VPN service like NordVPN, ExpressVPN, Witopia or IPVanish. Some VPN providers also sell pre-configured routers. If you buy a pre-configured router you can use an ethernet switch or wifi to switch back and forth between the regular router and the VPN router.
You can also buy web proxies & enter them into the Foxy Proxy web browser extension (Firefox or Chromium-compatible) with different browsers set to default to different country locations, making it easier to see what the search results show in different countries & cities quickly.
If you use a variety of web proxies you can configure some of them to work automatically in an open source rank tracking tool like Serposcope.
The Future of Journalism
I think the future of news is going to be a lot more sites like Ben Thompson's Stratechery or Jessica Lessin's TheInformation & far fewer broad/horizontal news organizations. Things are moving toward the 1,000 true fans or perhaps 100 true fans model:
This represents a move away from the traditional donation model—in which users pay to benefit the creator—to a value model, in which users are willing to pay more for something that benefits themselves. What was traditionally dubbed “self-help” now exists under the umbrella of “wellness.” People are willing to pay more for exclusive, ROI-positive services that are constructive in their lives, whether it’s related to health, finances, education, or work. In the offline world, people are accustomed to hiring experts across verticals
A friend of mine named Terry Godier launched a conversion-oriented email newsletter named Conversion Gold which has done quite well right out of the gate, leading him to launch IndieMailer, a community for paid newsletter creators.
The model which seems to be working well for those sorts of news sites is...
stick to a tight topic range
publish regularly at a somewhat decent frequency like daily or weekly, though have a strong preference for quality & originality over quantity
have a single author or a small core team which does most of the writing and expand editorial hiring slowly
offer original insights & much more depth of coverage than you would typically find in the mainstream news
Rely on WordPress or a low-cost CMS & billing technology partner like Substack or Memberful, sell on a marketplace like Udemy, Podia or Teachable, or, with a bit more technical chops, install aMember on your own server. One of the biggest mistakes I made when I opened up a membership site about a decade back was hand-rolling custom code for membership management. At one point we shut down the membership site for a while in order to allow us to rip out all that custom code & replace it with aMember.
Accept user comments on pieces or integrate a user forum using something like Discourse on a subdomain or a custom Slack channel. Highlight or feature the best comments. Update readers on new features via email.
Invest much more into obtaining unique data & sources to deliver new insights without spending aggressively to syndicate onto other platforms using graphical content layouts which would require significant design, maintenance & updating expenses
Heavily differentiate your perspective from other sources
maintain a low technological maintenance overhead
low cost monthly subscription with a solid discount for annual pre-payment
instead of using a metered paywall, set some content to require payment to read & periodically publish full-feature free content (perhaps weekly) to keep up awareness of the offering in the broader public to help offset churn.
Some also work across multiple formats with complementary offerings. The Ringer has done well with podcasts & Stratechery also has the Exponent podcast.
There are a number of other successful online-only news subscription sites like TheAthletic & Bill Bishop's Sinocism newsletter about China, but I haven't subscribed to them yet. Many people support a wide range of projects on platforms like Patreon, & sites like MasterClass with an all-you-can-eat subscription will also make paying for online content far more common.
Categories: 
publishing & media
from Digital Marketing News http://www.seobook.com/bypass-paywall
1 note · View note
seocompanysurrey · 7 years ago
Text
Crawl efficiency: making Google’s crawl easier
Search engines crawl your site to get the contents into their index. The bigger your site gets, the longer this crawl takes. It’s important that the time spent crawling your site is well spent. If your site has 1,000 pages or fewer, this is not a topic you’ll need to think about much. If you intend on growing your site though, keep on reading. Acquiring some good habits early on can save you from huge headaches later on. In this article, we’ll cover what crawl efficiency is and what you can do about it.
All search engines crawl the same way. In this article, we’ll refer to Google and Googlebot.
How does a crawl of your site work?
Google finds a link to your site somewhere on the web. At that point, that URL is the beginning of a virtual pile. The process is pretty easy after that:
Googlebot takes one page from that pile;
it crawls the page and indexes all the contents for use in Google;
it then adds all the links on that page to the pile.
During the crawl, Googlebot might encounter a redirect. The URL it’s redirected to goes on the pile.
Your primary goal is to make sure Googlebot can get to all pages on the site. A secondary goal is to make sure new and updated content gets crawled fast. Good site architecture will help you reach that goal. It is imperative though that you maintain your site well.
Crawl depth
An important concept while talking about crawling is crawl depth. Say you had 1 link, from 1 site to 1 page on your site. This page linked to another, to another, to another, etc. Googlebot will keep crawling for a while. At some point though, it’ll decide it’s no longer necessary to keep crawling. When that point comes depends on how important the link pointing at that first page is.
This might seem theoretical, so let’s look at a practical example. Say you have 10,000 posts, all in the same category, and you show 10 articles per page. These pages only link to “Next »” and “« Previous”. Google would need to crawl 1,000 pages deep to get to the first of those 10,000 posts. On most sites, it won’t do that.
This is why it’s important to:
Use categories / tags and other taxonomies for more granular segmentation. Don’t go overboard on them either. As a rule of thumb, a tag is only useful when it connects more than 3 pieces of content. Also, make sure to optimize those category archives.
Link to deeper pages with numbers, so Googlebot can get there quicker. Say you link to pages 1 through 10 on page 1 and keep doing that on every page. In the example above, the deepest page would then be only 100 clicks away from the homepage (see the sketch after this list).
Keep your site fast. The slower your site, the longer a crawl will take.
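A quick way to convince yourself of the pagination math is a minimal sketch (not from Yoast) that simulates a breadth-first crawl of 1,000 archive pages under both linking schemes; the block size of 10 matches the example above:

```python
from collections import deque

def crawl_depth(n_pages, links_from):
    """Breadth-first 'crawl' from page 1; returns the depth of the deepest page."""
    depth = {1: 0}
    queue = deque([1])
    while queue:
        page = queue.popleft()
        for nxt in links_from(page, n_pages):
            if nxt not in depth:
                depth[nxt] = depth[page] + 1
                queue.append(nxt)
    return max(depth.values())

def next_prev(page, n):
    # "Next »" / "« Previous" only: each page links just to its neighbours
    return [p for p in (page - 1, page + 1) if 1 <= p <= n]

def numbered(page, n):
    # Numbered pagination: each page links ahead to the next 10 pages
    return [p for p in range(page + 1, page + 11) if p <= n]

print(crawl_depth(1000, next_prev))  # 999 -- the last archive page
print(crawl_depth(1000, numbered))   # 100 -- far shallower for Googlebot
```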
XML Sitemaps and crawl efficiency
Your site should have one or more XML sitemaps. Those XML sitemaps tell Google which URLs exist on your site. A good XML sitemap also indicates when you’ve last updated a particular URL. Most search engines will crawl URLs in your XML sitemap more often than others.
In Google Search Console, XML sitemaps give you an added benefit. For every sitemap, Google will show you errors and warnings. You can use this by making different XML sitemaps for different types of URLs. This means you can see what types of URLs on your site have the most issues. Our Yoast SEO plugin does this for you automatically.
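For illustration, here's a minimal sketch of generating a sitemap with the lastmod hint described above. The URLs and dates are hypothetical, and this is not the Yoast plugin's output, just the general sitemap format:

```python
# Generate a tiny XML sitemap with <lastmod> dates (hypothetical URLs).
from xml.sax.saxutils import escape

pages = [
    ("https://example.com/", "2018-01-15"),
    ("https://example.com/blog/crawl-efficiency/", "2018-01-10"),
]

lines = ['<?xml version="1.0" encoding="UTF-8"?>',
         '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
for url, lastmod in pages:
    lines.append(f"  <url><loc>{escape(url)}</loc><lastmod>{lastmod}</lastmod></url>")
lines.append("</urlset>")

with open("sitemap.xml", "w") as f:
    f.write("\n".join(lines))
```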
Problems that cause bad crawl efficiency
Many 404s and other errors
While it crawls your site, Google will encounter errors. It’ll usually just pick the next page from the pile when it does. If you have a lot of errors on your site during a crawl, Googlebot will slow down. It does that because it’s afraid that it’s causing the errors by crawling too fast. To prevent Googlebot from slowing down, you thus want to fix as many errors as you can.
Google reports all those errors to you in its Webmaster Tools, as do Bing and Yandex. We’ve covered errors in Google Search Console (GSC) and Bing Webmaster Tools before. If you have our Yoast SEO Premium plugin, you can import and fix the errors from GSC with it. You can do that straight from your WordPress admin.
You wouldn’t be the first client we see that has 3,000 actual URLs and 20,000 errors in GSC. Don’t let your site become that site. Fix those errors on a regular basis, at least every month.
Excessive 301 redirects
I was recently consulting on a site that had just done a domain migration. The site is big, so I used one of our tools to run a full crawl of the site and see what we should fix. It became clear we had one big issue. A large group of URLs on this site is always linked to without a trailing slash. If you go to such a URL without the trailing slash, you’re 301 redirected. You’re redirected to the version with the trailing slash.
If that’s an issue for one or two URLs on your site it doesn’t really matter. It’s actually often an issue with homepages. If that’s an issue with 250,000 URLs on your site, it becomes a bigger issue. Googlebot doesn’t have to crawl 250,000 URLs but 500,000. That’s not exactly efficient.
This is why you should always try to update links within your site when you change URLs. If you don’t, you’ll get more and more 301 redirects over time. This will slow down your crawl and your users. Most systems take up to a second to serve a redirect. That adds another second onto your page load time.
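To find the trailing-slash pattern at scale, you can check internal links without following redirects. A minimal sketch with hypothetical URLs (not one of our tools), using the requests library:

```python
# Flag internal links that 301 merely to add a trailing slash.
import requests

internal_links = [
    "https://example.com/category/shoes",
    "https://example.com/about",
]

for url in internal_links:
    r = requests.get(url, allow_redirects=False, timeout=10)
    target = r.headers.get("Location", "")
    if r.status_code == 301 and target == url + "/":
        print(f"Fix internal links: {url} -> {target}")
```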
Spider traps
If your site is somewhat more authoritative in Google’s eyes, fun things can happen. Even when it’s clear that a link doesn’t make sense, Google will crawl it. Give Google the virtual equivalent of an infinite spiral staircase, it’ll keep going. I want to share a hilarious example of this I encountered at the Guardian.
At the Guardian we used to have daily archives for all our main categories. As the Guardian publishes a lot of content, those daily archives make sense. You could click back from today, to yesterday and so on. And on. And on. Even to long before the Guardian’s existence. You could get to December 25th of the year 0 if you were so inclined. We’ve seen Google index back to the year 1600. That’s almost 150,000 clicks deep.
This is what we call a “spider trap”. Traps like these can make a search engine’s crawl extremely inefficient. Fixing them almost always leads to better results in organic search. The bigger your site gets, the harder issues like these are to find. This is true even for experienced SEOs.
Tools to find issues and improve crawl efficiency
If you’re intrigued by this and want to test your own site, you’re going to need some tools. We used Screaming Frog a lot during our site reviews. It’s the Swiss army knife of most SEOs. Some other SEOs I know swear by Xenu, which is also pretty good (and free). Be aware: these are not “simple” tools. They are power tools that can even take down a site when used wrong, so take care.
A good first step is to start crawling a site and filter for HTML pages. Then sort descending by HTTP status code. You’ll see 500 – 400 – 300 type responses on the top of the list. You’ll be able to see how bad your site is doing, compared to the total number of URLs. See an example below:
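The example screenshot hasn't survived in this copy, but the same triage can be sketched in a few lines of Python with hypothetical URLs (Screaming Frog and Xenu do this at scale): crawl, then sort descending by status code.

```python
# Crawl a URL list and surface 5xx/4xx/3xx responses first.
import requests

urls = [
    "https://example.com/",
    "https://example.com/old-page",
    "https://example.com/broken",
]

statuses = []
for url in urls:
    try:
        # HEAD is cheap; some servers mishandle it, so fall back to GET if needed
        r = requests.head(url, allow_redirects=False, timeout=10)
        statuses.append((r.status_code, url))
    except requests.RequestException:
        statuses.append((0, url))  # connection-level failure

for code, url in sorted(statuses, reverse=True):
    print(code, url)
```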
How’s your site’s crawl efficiency?
I’d love to hear if you’ve had particular issues like these with crawl efficiency and how you solved them. Even better if this post helped you fix something, come tell us below!
Read more: robots.txt: the ultimate guide »
The post Crawl efficiency: making Google’s crawl easier appeared first on Yoast.
from Yoast • SEO for everyone https://yoast.com/crawl-efficiency/
0 notes
philipfloyd · 7 years ago
Text
Does The Meta Description Tag Affect SEO & Search Engine Rankings?
There’s a lot of confusion when it comes to meta descriptions and SEO. Do they affect search engine rankings? Is it worth spending the time to write a good meta description?
  Well, in theory, meta descriptions do not affect SEO. This is an official statement from Google, released in 2009. However, since meta descriptions show in the search engine results, they can affect CTRs (click through rates), which are linked to SEO & rankings. So, in practice, meta descriptions might have an impact on SEO.
  Things are a lot more complicated than that. In order to better understand the topic, I recommend you to continue reading this article. It will explain everything you need to know about the importance of meta description tags.
  What Is the Meta Description Tag?
Why Meta Descriptions Don’t Matter (In Theory)
Why Meta Descriptions Matter (In Practice)
How to Add Meta Description Tags
How to Write Good Meta Description Tags
  Before we start, let’s make it clear for everyone what a meta description is.
  What Is the Meta Description Tag?
  HTML runs with tags. A tag is a set of characters that result in a command. In web development, these tags specify to the browser how elements should be structured on a web page.
  The <meta> tag is a tag that contains data about the web page itself. It’s the data behind the data, just like in metaphysics. In our case, the meta description provides a summary of what the page is about. The users read it to figure out what they’re about to click on.
  Google uses the meta descriptions (along with the title tag and the URL) to come up with search results on its pages. You can usually recognize the meta description by taking a look at a web page’s search result. Here’s a branded search for the keyword “cognitiveSEO”:
    If the meta description tag is missing, Google will just pick a piece of content from your page, which it thinks is the most relevant for the user.
  I have a good example from no other than Wikipedia, the 5th most popular website on the planet. I’m not sure if it’s missing on this page only or if it’s a general thing, but… well, the meta description tag isn’t there.
  But how do I know there’s no meta description tag? Well, the correct way of checking a meta description tag is to look into the source code of a web page. You can do this by hitting CTRL + U in Chrome, or by right-clicking the web page and selecting View Page Source.
Then, just use the find function (CTRL + F) to search for <meta name=”description”. If you get a result, it’s there and you can view it. You can also see if it matches the one in the search results. If it’s not there… well… it’s not there.
    As you can see, in Wikipedia’s case, it’s not there. You can also search for variations, such as “description” or “<meta” in case the tag is written in a slightly different way. I made sure that Wikipedia doesn’t have it.
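If you'd rather script the check than eyeball the source, here's a minimal sketch. The URL is a placeholder, and a real checker should use an HTML parser, since the attributes can appear in any order:

```python
# Check a page for a meta description tag (hypothetical URL).
import re
import requests

html = requests.get("https://example.com/", timeout=10).text
match = re.search(
    r'<meta\s+name=["\']description["\']\s+content=["\'](.*?)["\']',
    html, re.IGNORECASE)

if match:
    print("Meta description found:", match.group(1)[:160])
else:
    print("No meta description tag found.")  # Google will pick its own snippet
```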
  Even if the meta description tag is missing, Google does display something pretty relevant in the search results.
It’s a phrase taken from the content of the page. In this case, it’s the first paragraph. You can view it in one of the images above.
Although it’s widely believed to, Google doesn’t always pick the meta description tag to display in the search results. Even if the meta description tag is there, Google can still alter the text shown in the results pages. In fact, Google always picks what it wants. For example, if I add another keyword found on the homepage, Google will alter the text in the results to also show parts of that text.
    Sometimes, if Google thinks the description is completely irrelevant for the article, it will ignore it completely and pick something else. Having control over what shows in the search engine results is very useful. You should craft your meta description well, so that Google picks that instead of anything else.
  We’ll talk about how to write good meta descriptions for SEO soon in the article, so keep reading.
  Why Meta Descriptions Don’t Matter (In Theory)
A long time ago, meta descriptions used to impact SEO directly. Search engines would parse these lines of text and look for keywords. A page would then be boosted on a keyword phrase if that particular keyword phrase was also found in the meta description (along with the title tag and content).
  I think you’ve figured out what followed. Obviously, people tried to abuse it by stuffing keywords in it, in an attempt to rank for as many phrases as possible. Since Google doesn’t like it when people abuse ranking factors, it made a big change in its algorithm, affecting everyone.
  If this sounds familiar, then you might be mixing up the meta description tag with the meta keywords tag, which is actually obsolete (at least for Google).
Another common confusion is that meta descriptions matter for SEO and only meta keywords don’t matter.
I’ll clarify this right now:
  In 2009, Google officially released a statement entitled “Google does not use the keywords meta tag in web ranking“. But, because they were lazy, many people didn’t read through the entire post and missed the following section:
  Even though we sometimes use the description meta tag for the snippets we show, we still don’t use the description meta tag in our ranking. Google  
  Even more, there is a video of Matt Cutts (Former Head of Spam @ Google) that talks only about the meta keywords tag not being used as a search ranking factor:
youtube
    This doesn’t mean that Google completely ignores these meta tags, or all meta tags, as some users might think.
    Although some people still use the meta keywords tag, it’s usually a good idea to show Google that you’re keeping up with the news. Older websites still use meta keywords internally, for things like product search or sorting. It’s also useful for Google Search Appliance. However, if you don’t use them, there’s no point in showing them.
  Important note: Other search engines, like Yandex, still use the meta keywords tag as a search ranking factor. So if you’re trying to rank there, you should keep them. If your site is multilingual, you can try to keep the keywords meta tag for the /ru version and remove it from other languages that mainly use Google.
  Why Meta Descriptions Matter (In Practice)
  Ok, so Google says that meta descriptions don’t alter search engine rankings. But Google says many things and it’s also hard to find accurate information. As seen above, you have to carefully extract the semantics between the lines to clearly understand a topic. There’s more about the meta descriptions than just rankings.
  Should you just ditch them altogether, if they don’t help with rankings? Hell no!
  The description meta tag has a great importance and, indirectly, it might even affect search engine rankings.
  How? Well, the short answer is: through CTR (click through rate).
  Meta Descriptions Can Affect CTR
  Click through rate represents the percentage of users that click on your search result from the total number of users that view your search result. It’s the same principles as a conversion rate, but with a different, more specific name.
  The formula for calculating CTR is: Clicks / Impressions * 100. So, if you had 700 impressions and 20 people clicked on it, your CTR would be 20/700 * 100 = 2.86%. Or, much easier if you have 100 impressions and 5 clicks, then the click through rate would be 5%.
  But why do meta descriptions affect CTR? Well, the answer to that is pretty simple. On the search page, you have 3 elements that mix up in order to create your search result:
  Title tag
URL
Description (Meta description tag or other piece of text)
  If you take a look at the results on a Google search page, you can see that the meta description takes up about 50% of the space. Sometimes, even more.
    Even if the title tag is the most important one, description still matters. Google bolds important phrases in the description, which catch the eye. In order to make sense of the bolded phrases, you have to read the entire sentence or at least part of it.
  The following example is a localized search result, so your search engine results might differ from mine. Nonetheless, it’s a good example of something that catches the eye. Now I know nothing about shoes, so in my opinion, branding here doesn’t matter. For me, the most interesting result is the last one.
  Why? Because it mentions things I’m actually interested about, before actually seeing the shoes. I might not like them, of course (although there are over 15000 models to choose from), but I sure do want that 85% discount and prices as low as $15. For example, ASOS lists something about women. Well… I’m a man (a boy, at least) and I don’t care. Yoox has the luxury of getting 2 positions, which is great and also states something about delivery and online payments.
    Now there could be a lot of other factors why the other websites are ranking higher. For once, shoes.com’s title is too long and the URL sucks. Other websites might be linked to locations closer to mine. Then we could go into technical SEO issues and number of referring domains.
  Anyway, the results were inconsistent. Changing location from US to UK showed different results. For example, 6PM sometimes shown its own meta description and other times just showed menu elements, like in the picture above. In other cases, shoes.com showed at the top.
  Now, the question that remains is: Does CTR affect SEO? Well, according to Google, yes! Brian Dean also mentioned this in his famous article about top Google ranking factors, at number 135. A wide number of other experts also agree that CTR is a Google ranking factor.
    You see, Google gives a lot of importance to the user’s experience. If a user doesn’t click on your result, it has no experience whatsoever. Your score is basically 0. Only after the user clicks on your result can Google take a look at things like scroll count or bounce rate.
  Meta Descriptions Are Used by Other Web Platforms
  Secondly, meta descriptions might be important for other platforms. Just to mention a few of them, think of Facebook and Twitter. When you share a link on these platforms, they extract data from the HTML in order to display titles, images and descriptions.
  Now these very popular platforms have their own tags. Twitter has Twitter Cards and Facebook uses the Open Graph tags to correctly display the required information.
  However, many other web platforms might not have their own universally accepted tags and could rely on standard meta tags. Meta tags can also be important for apps and browsers.
  Take advantage of custom tags, such as Twitter Cards and Facebook Open Graph. These will enable you to use different titles, descriptions and images across different platforms, which can increase your CTRs.
  What works on search engines might not work on Facebook and what works on Facebook might not work on Twitter. This is mostly true with images. Facebook, for example, requires a specific size to display images nicely. Your website’s featured image might require a different size. Open Graph provides a solution to this issue.
  How to Add Meta Description Tags
  Writing a meta description tags is easy. Adding it is even easier. You just have to edit your HTML template from the cPanel of your web hosting service and add the following code between your <head> & </head> tag in the page.
  <meta name=”description” content=”Here goes the description that you want to be shown in Google”>
  Sounds hard? Don’t worry, I’m joking. Although you can do it on plain HTML websites, most websites use databases which makes things different.
  If you’re running on a popular CMS (Content Management System) like WordPress, you can use a plugin. The most popular one is Yoast SEO. It’s pretty easy to use. After you install it, a box will appear under the article’s body, in the editor.
    By default, WordPress uses the article/page heading as the <title> tag. A big advantage of using an SEO plugin is that you can use different titles on your site and on Google.
  For example, maybe your heading doesn’t fit Google’s search engine results, but you really like it. You can keep that on your site and use the SEO plugin to display something shorter for the search engines.
  If you click the preview, it will enable you to edit the fields. Right now, my fields are empty, but by the time I publish this article they will be filled up accordingly. If you click the social media tab on the left, it will also enable you to enter social media titles, descriptions and images. This will add the correct tags to your HTML and display the proper elements on each platform.
  There are similar plugins for most popular platforms. Some themes even have their own SEO fields sections. I recommend that you use Yoast though. It’s definitely the best.
  How to Write Good Meta Description Tags
  It’s difficult to master the art of copywriting. It is definitely more complex than I can cover in this section and takes years of practice, trial and error to improve. When you write meta descriptions, think about selling the user something in one short paragraph. You’re selling him the click to your website.
  Focus keyword: First of all, let’s discuss about keywords. I’ve stated that you shouldn’t stuff keywords in when writing meta description tags. While this is good advice, it doesn’t mean that you shouldn’t add keywords at all.
  It’s important to add focus keywords to your descriptions. Why? Because Google bolds them and people like to see what they search for. If what they search for is in bold on your description, it reassures them that what they’re looking for lies behind that link.
    If you’re targeting multiple keyword phrases, conduct some keyword research to identify the most important one. Use that in your description if others don’t fit. Which brings us to the next important aspect…
  Length: Secondly, make sure you don’t exceed the number of characters. You might get your description cut in the wrong place and it won’t have the same impact anymore. As of May 2018, the recommended length is around 160-180 characters. Google confirmed shortening the length back to its initial state, after expanding it to around 300 characters, less than 6 months before, in December 2017.
  Interesting & Relevant: You don’t have to deceive people. You just have to make them curious or excited. State out your discounts and premium features. Things that your competition doesn’t have. It’s interesting how people add discounts and interesting things in their paid search (Google Adwords) descriptions, but stuff keywords instead in meta descriptions for SEO. They serve the same purpose!
  Call to Action: People like to be told what to do. If you view the source of this page (CTRL + U) and search for the description meta tag, you’ll see that I tell users to click the link. If you’re here through search engine rankings, it’s a clear sign that it worked. Add a call to action!
  Every page should have its own, personalized meta description. This might be difficult for large eCommerce stores. You can use patterns to generate similar descriptions but with different elements, such as focus keyword and discount amount. However, be careful with duplicate meta descriptions, as this could tell Google that you have very similar pages and that could harm your website.
  Conclusion
  In short, Google says that meta descriptions do not directly impact search engine rankings. However, these descriptions can indirectly impact SEO through click through rates.
  This is very important when you’re fighting between 1st, 2nd and 3rd place. There, things like technical SEO and backlinks tend to matter less. Google also has more room to play with the CTR, as the top 3 positions get about 80% of all the clicks.
  Make sure you take advantage of the meta description tag and write your descriptions in a clever way, to convince users to click your link.
  Do you use meta descriptions? What’s your experience with changing them? Did they have any impact on your click through rates? Let us know in the comments section. Also, if you have any questions, feel free to ask them. We’ll gladly reply.
The post Does The Meta Description Tag Affect SEO & Search Engine Rankings? appeared first on SEO Blog | cognitiveSEO Blog on SEO Tactics & Strategies.
from Marketing https://cognitiveseo.com/blog/19066/meta-description-affects-seo/ via http://www.rssmix.com/
0 notes
wjwilliams29 · 7 years ago
Text
Does The Meta Description Tag Affect SEO & Search Engine Rankings?
There’s a lot of confusion when it comes to meta descriptions and SEO. Do they affect search engine rankings? Is it worth spending the time to write a good meta description?
  Well, in theory, meta descriptions do not affect SEO. This is an official statement from Google, released in 2009. However, since meta descriptions show in the search engine results, they can affect CTRs (click through rates), which are linked to SEO & rankings. So, in practice, meta descriptions might have an impact on SEO.
  Things are a lot more complicated than that. To better understand the topic, I recommend you continue reading this article. It will explain everything you need to know about the importance of meta description tags.
  What Is the Meta Description Tag?
Why Meta Descriptions Don’t Matter (In Theory)
Why Meta Descriptions Matter (In Practice)
How to Add Meta Description Tags
How to Write Good Meta Description Tags
  Before we start, let’s make it clear for everyone what a meta description is.
  What Is the Meta Description Tag?
  HTML is built on tags. A tag is a short piece of markup that acts as an instruction. In web development, these tags tell the browser how elements should be structured on a web page.
  The <meta> tag is a tag that contains data about the web page itself. It's the data behind the data (that's what the "meta" prefix means). In our case, the meta description provides a summary of what the page is about. Users read it to figure out what they're about to click on.
  Google uses the meta descriptions (along with the title tag and the URL) to come up with search results on its pages. You can usually recognize the meta description by taking a look at a web page’s search result. Here’s a branded search for the keyword “cognitiveSEO”:
    If the meta description tag is missing, Google will just pick a piece of content from your page, which it thinks is the most relevant for the user.
  I have a good example from none other than Wikipedia, the 5th most popular website on the planet. I'm not sure if it's missing on this page only or if it's a general thing, but… well, the meta description tag isn't there.
  But how do I know there’s no meta description tag? Well, the correct way of checking a meta description tag is to look into the source code of a web page. You can do this by hitting CTRL + U in Chrome, or by right-clicking the web page and selecting View Page Source.
  Then, just use the find function (CTRL + F) to search for <meta name="description". If you get a match, the tag is there and you can view it. You can also check whether it matches the one shown in the search results. If it's not there… well… it's not there.
    As you can see, in Wikipedia’s case, it’s not there. You can also search for variations, such as “description” or “<meta” in case the tag is written in a slightly different way. I made sure that Wikipedia doesn’t have it.
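  For reference, here's what you'd typically be looking for in the source of a page that does have the tag (the values below are made up for illustration):
  <head>
  <title>Example Page Title</title>
  <!-- The tag your CTRL + F search should land on -->
  <meta name="description" content="A short summary of the page, written to show up in search results.">
  </head>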
  Even if the meta description tag is missing, Google does display something pretty relevant in the search results.
    It’s actually a phrase taken from the content of the page. In this case, it’s actually the first paragraph. You can view it in one of the images above.
  Contrary to popular belief, Google doesn't always pick the meta description tag to display in the search results. Even if the meta description tag is there, Google can still alter the text shown on the results pages. In fact, Google always picks what it wants. For example, if I add another keyword found on the homepage to my query, Google will alter the text in the results to also show parts of that text.
  Sometimes, if Google thinks the description is completely irrelevant to the page, it will ignore it and pick something else. Having control over what shows up in the search engine results is very useful, so craft your meta description well enough that Google picks it instead of anything else.
  We'll talk about how to write good meta descriptions for SEO later in the article, so keep reading.
  Why Meta Descriptions Don’t Matter (In Theory)
  A long time ago, meta descriptions used to impact SEO directly. Search engines would parse these lines of text and look for keywords. A page would then be boosted for a keyword phrase if that phrase was also found in the meta description (along with the title tag and content).
  I think you’ve figured out what followed. Obviously, people tried to abuse it by stuffing keywords in it, in an attempt to rank for as many phrases as possible. Since Google doesn’t like it when people abuse ranking factors, it made a big change in its algorithm, affecting everyone.
  If this sounds familiar, then you might be mixing up the meta description tag with the meta keywords tag, which is actually obsolete (at least for Google).
  Another common misconception is that only the meta keywords tag stopped mattering, while meta descriptions still directly affect rankings.
I’ll clarify this right now:
  In 2009, Google officially released a statement entitled “Google does not use the keywords meta tag in web ranking“. But, because they were lazy, many people didn’t read through the entire post and missed the following section:
  “Even though we sometimes use the description meta tag for the snippets we show, we still don’t use the description meta tag in our ranking.” (Google)
  Even more, there is a video of Matt Cutts (former Head of Webspam @ Google) that talks specifically about the meta keywords tag not being used as a search ranking factor:
[Embedded YouTube video: Matt Cutts on the meta keywords tag]
    This doesn’t mean that Google completely ignores these meta tags, or all meta tags, as some users might think.
  Although some people still use the meta keywords tag, dropping it is usually a good way to show Google that you're keeping up with the times. Older websites still use meta keywords internally, for things like product search or sorting, and the tag is also used by Google Search Appliance. However, if you don't actually use them for anything, there's no point in publishing them.
  Important note: Other search engines, like Yandex, still use the meta keywords tag as a search ranking factor. So if you’re trying to rank there, you should keep them. If your site is multilingual, you can try to keep the keywords meta tag for the /ru version and remove it from other languages that mainly use Google.
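  As a minimal sketch, the head of a /ru page might keep the tag like this (the keyword values are made-up placeholders):
  <!-- /ru version of the page: meta keywords kept because Yandex still reads them -->
  <meta name="description" content="A short summary of the page for the snippet.">
  <meta name="keywords" content="meta tags, seo, meta description">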
  Why Meta Descriptions Matter (In Practice)
  Ok, so Google says that meta descriptions don't alter search engine rankings. But Google says many things, and accurate information is hard to find. As seen above, you have to read carefully between the lines to clearly understand a topic. There's more to meta descriptions than just rankings.
  Should you just ditch them altogether, if they don’t help with rankings? Hell no!
  The description meta tag is of great importance and, indirectly, it might even affect search engine rankings.
  How? Well, the short answer is: through CTR (click through rate).
  Meta Descriptions Can Affect CTR
  Click through rate represents the percentage of users who click on your search result out of the total number of users who view it. It follows the same principle as a conversion rate, but with a different, more specific name.
  The formula for calculating CTR is: Clicks / Impressions * 100. So, if you had 700 impressions and 20 people clicked, your CTR would be 20 / 700 * 100 = 2.86%. Or, more simply: with 100 impressions and 5 clicks, the click through rate would be 5%.
  But why do meta descriptions affect CTR? Well, the answer is pretty simple. On the search page, three elements combine to create your search result:
  Title tag
URL
Description (Meta description tag or other piece of text)
  If you take a look at the results on a Google search page, you can see that the meta description takes up about 50% of the space. Sometimes, even more.
    Even if the title tag is the most important one, description still matters. Google bolds important phrases in the description, which catch the eye. In order to make sense of the bolded phrases, you have to read the entire sentence or at least part of it.
  The following example is a localized search result, so your search engine results might differ from mine. Nonetheless, it’s a good example of something that catches the eye. Now I know nothing about shoes, so in my opinion, branding here doesn’t matter. For me, the most interesting result is the last one.
  Why? Because it mentions things I'm actually interested in, before I've even seen the shoes. I might not like them, of course (although there are over 15000 models to choose from), but I sure do want that 85% discount and prices as low as $15. ASOS, for example, lists something about women. Well… I'm a man (a boy, at least) and I don't care. Yoox has the luxury of getting 2 positions, which is great, and also states something about delivery and online payments.
  Now, there could be a lot of other reasons why the other websites are ranking higher. For one, shoes.com's title is too long and the URL sucks. Other websites might be linked to locations closer to mine. Then we could go into technical SEO issues and the number of referring domains.
  Anyway, the results were inconsistent. Changing location from US to UK showed different results. For example, 6PM sometimes showed its own meta description and other times just showed menu elements, like in the picture above. In other cases, shoes.com showed at the top.
  Now, the question that remains is: Does CTR affect SEO? Well, according to Google, yes! Brian Dean also mentioned this in his famous article about top Google ranking factors, at number 135. Many other experts also agree that CTR is a Google ranking factor.
  You see, Google gives a lot of importance to the user's experience. If a user doesn't click on your result, they have no experience with your page whatsoever; your score is basically 0. Only after the user clicks on your result can Google take a look at things like scroll count or bounce rate.
  Meta Descriptions Are Used by Other Web Platforms
  Secondly, meta descriptions might be important for other platforms. Just to mention a few of them, think of Facebook and Twitter. When you share a link on these platforms, they extract data from the HTML in order to display titles, images and descriptions.
  Now these very popular platforms have their own tags. Twitter has Twitter Cards and Facebook uses the Open Graph tags to correctly display the required information.
  However, many other web platforms might not have their own universally accepted tags and could rely on standard meta tags. Meta tags can also be important for apps and browsers.
  Take advantage of custom tags, such as Twitter Cards and Facebook Open Graph. These will enable you to use different titles, descriptions and images across different platforms, which can increase your CTRs.
  What works on search engines might not work on Facebook and what works on Facebook might not work on Twitter. This is mostly true with images. Facebook, for example, requires a specific size to display images nicely. Your website’s featured image might require a different size. Open Graph provides a solution to this issue.
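  As a sketch, here's what a set of Open Graph and Twitter Card tags can look like (the URLs, text and image sizes are illustrative; check each platform's current guidelines):
  <!-- Open Graph tags, read by Facebook and many other platforms -->
  <meta property="og:title" content="Does The Meta Description Tag Affect SEO?">
  <meta property="og:description" content="A description crafted specifically for social shares.">
  <meta property="og:image" content="https://example.com/social-1200x630.jpg">
  <!-- Twitter Card tags -->
  <meta name="twitter:card" content="summary_large_image">
  <meta name="twitter:title" content="Does The Meta Description Tag Affect SEO?">
  <meta name="twitter:description" content="A shorter description, tuned for Twitter.">
  <meta name="twitter:image" content="https://example.com/social-twitter.jpg">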
  How to Add Meta Description Tags
  Writing a meta description tag is easy. Adding it is even easier. You just have to edit your HTML template from the cPanel of your web hosting service and add the following code between the <head> & </head> tags of the page.
  <meta name="description" content="Here goes the description that you want to be shown in Google">
  Sounds hard? Don’t worry, I’m joking. Although you can do it on plain HTML websites, most websites use databases which makes things different.
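  For a plain HTML page, the placement looks like this (a minimal, illustrative skeleton):
  <!DOCTYPE html>
  <html>
  <head>
  <title>Your Page Title</title>
  <!-- The meta description can go anywhere between <head> and </head> -->
  <meta name="description" content="Here goes the description that you want to be shown in Google">
  </head>
  <body>
  ...
  </body>
  </html>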
  If you’re running on a popular CMS (Content Management System) like WordPress, you can use a plugin. The most popular one is Yoast SEO. It’s pretty easy to use. After you install it, a box will appear under the article’s body, in the editor.
    By default, WordPress uses the article/page heading as the <title> tag. A big advantage of using an SEO plugin is that you can use different titles on your site and on Google.
  For example, maybe your heading doesn’t fit Google’s search engine results, but you really like it. You can keep that on your site and use the SEO plugin to display something shorter for the search engines.
  If you click the preview, it will enable you to edit the fields. Right now, my fields are empty, but by the time I publish this article they will be filled up accordingly. If you click the social media tab on the left, it will also enable you to enter social media titles, descriptions and images. This will add the correct tags to your HTML and display the proper elements on each platform.
  There are similar plugins for most popular platforms. Some themes even have their own SEO fields sections. I recommend that you use Yoast though. It’s definitely the best.
  How to Write Good Meta Description Tags
  It's difficult to master the art of copywriting. It's definitely more complex than I can cover in this section, and it takes years of practice, trial and error to improve. When you write meta descriptions, think about selling the user something in one short paragraph. You're selling them the click to your website.
  Focus keyword: First of all, let's discuss keywords. I've stated that you shouldn't stuff keywords in when writing meta description tags. While this is good advice, it doesn't mean that you shouldn't add keywords at all.
  It’s important to add focus keywords to your descriptions. Why? Because Google bolds them and people like to see what they search for. If what they search for is in bold on your description, it reassures them that what they’re looking for lies behind that link.
    If you’re targeting multiple keyword phrases, conduct some keyword research to identify the most important one. Use that in your description if others don’t fit. Which brings us to the next important aspect…
  Length: Secondly, make sure you don't exceed the character limit. Your description might get cut off in the wrong place and lose its impact. As of May 2018, the recommended length is around 160-180 characters. Google confirmed it had shortened snippets back to roughly their earlier length, after expanding them to around 300 characters in December 2017, less than 6 months earlier.
  Interesting & Relevant: You don't have to deceive people. You just have to make them curious or excited. Point out your discounts and premium features. Things that your competition doesn't have. It's interesting how people add discounts and compelling details to their paid search (Google AdWords) descriptions, yet stuff keywords into their meta descriptions for SEO. They serve the same purpose!
  Call to Action: People like to be told what to do. If you view the source of this page (CTRL + U) and search for the description meta tag, you’ll see that I tell users to click the link. If you’re here through search engine rankings, it’s a clear sign that it worked. Add a call to action!
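  Putting these tips together (focus keyword, safe length, call to action), a description might look like this made-up example, which comes in at around 130 characters:
  <meta name="description" content="Wondering if meta descriptions affect SEO? Learn how they influence click-through rates and rankings. Click to read the full guide!">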
  Every page should have its own personalized meta description. This might be difficult for large eCommerce stores. You can use patterns to generate similar descriptions with different elements, such as the focus keyword and discount amount. However, be careful with duplicate meta descriptions, as they could tell Google that you have very similar pages, and that could harm your website.
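  A hypothetical pattern could look like this, where {product} and {discount} are placeholders your CMS or store template would fill in:
  <meta name="description" content="Buy {product} online at up to {discount}% off. Free shipping on orders over $50. Order now!">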
  Conclusion
  In short, Google says that meta descriptions do not directly impact search engine rankings. However, these descriptions can indirectly impact SEO through click through rates.
  This is very important when you're fighting for 1st, 2nd and 3rd place. There, things like technical SEO and backlinks tend to matter less, and Google has more room to play with CTR, as the top 3 positions get about 80% of all the clicks.
  Make sure you take advantage of the meta description tag and write your descriptions in a clever way, to convince users to click your link.
  Do you use meta descriptions? What’s your experience with changing them? Did they have any impact on your click through rates? Let us know in the comments section. Also, if you have any questions, feel free to ask them. We’ll gladly reply.