#Step-by-step guide to indexing backlinks by Google Indexing API
How to Index Webpages & Backlinks in 5 Minutes by Google Indexing API in Hindi - (Free & Very Easy)
youtube
Get Your Webpages and Backlinks Indexed Immediately by Using the Google Indexing API. In this video, we explain the step-by-step process of using Google’s Indexing API to get your website’s pages and backlinks crawled immediately. Setting this up isn’t typically easy, but if you watch the video carefully and follow the steps, you can save time and improve your rankings in search engines. So, without further delay, watch the full video and get your backlinks and webpages indexed. I hope this video helps you get up and running with Google’s Indexing API. Indexing process and code: https://docs.google.com/document/d/10lIOtorCubWV94Pzz0juHUOG7c1-pqEit6s94qgFd6s/edit#heading=h.vyd4fqe3e5al
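For readers who prefer to script this rather than follow the video, here is a minimal Python sketch of the kind of call the Indexing API expects. It assumes you have created a Google Cloud service account with the Indexing API enabled and verified it as an owner of the site in Search Console; the key file name and the example URL are placeholders, and the google-auth library (pip install google-auth) handles authentication.

# Minimal sketch: publish a URL notification to the Google Indexing API.
# Assumes a service account JSON key ("service_account.json" is a placeholder)
# for a project with the Indexing API enabled, and that the service account
# is an owner of the property in Search Console.
from google.oauth2 import service_account
from google.auth.transport.requests import AuthorizedSession

SCOPES = ["https://www.googleapis.com/auth/indexing"]
ENDPOINT = "https://indexing.googleapis.com/v3/urlNotifications:publish"

credentials = service_account.Credentials.from_service_account_file(
    "service_account.json", scopes=SCOPES)
session = AuthorizedSession(credentials)

payload = {
    "url": "https://example.com/page-or-backlink-to-index/",  # placeholder URL
    "type": "URL_UPDATED",  # use "URL_DELETED" when a page has been removed
}

response = session.post(ENDPOINT, json=payload)
print(response.status_code, response.json())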
#API for backlinks indexing#How to index backlinks instantly?#How to index webpages instantly?#How to Use the Indexing API#Step-by-step guide to indexing backlinks by Google Indexing API#How to index backlinks with Google Indexing API?#Google Indexing API#Backlink indexing#Google Indexing API with Python#Backlink Indexing tool#How To Index Backlinks Faster In 2023?#How to Index Backlinks Faster Than Ever Before?#The Ultimate Guide To Google Indexing API#How to index backlinks quickly in Google?#Youtube
Google Shares Details About the Technology Behind Googlebot
Posted by goralewicz
Crawling and indexing have been a hot topic over the last few years. As soon as Google launched Google Panda, people rushed to their server logs and crawling stats and began fixing their index bloat. All those problems didn’t exist in the “SEO = backlinks” era from a few years ago. With this exponential growth of technical SEO, we need to get more and more technical. That being said, we still don’t know exactly how Google crawls our websites. Many SEOs still can’t tell the difference between crawling and indexing.
The biggest problem, though, is that when we want to troubleshoot indexing problems, the only tool in our arsenal is Google Search Console and its Fetch and Render feature. Once your website includes more than HTML and CSS, there's a lot of guesswork about how your content will be indexed by Google. This approach is risky, expensive, and can fail multiple times. Even when you discover the pieces of your website that weren’t indexed properly, it's extremely difficult to get to the bottom of the problem and find the fragments of code responsible for the indexing issues.
Fortunately, this is about to change. Recently, Ilya Grigorik from Google shared one of the most valuable insights into how crawlers work:
Interestingly, this tweet didn’t get nearly as much attention as I would expect.
So what does Ilya’s revelation in this tweet mean for SEOs?
Knowing that Chrome 41 is the technology behind the Web Rendering Service (WRS) is a game-changer. Before this announcement, our only option was to use Fetch and Render in Google Search Console to see our pages as the WRS rendered them. Now, all you need to do is download and install Chrome 41 to see how your website loads in that browser, and you can troubleshoot technical problems that would otherwise have required experimenting and creating staging environments. That’s it.
You can check the features and capabilities that Chrome 41 supports by visiting Caniuse.com or Chromestatus.com (Googlebot should support similar features). These two websites make a developer’s life much easier.
Even though we don’t know exactly which version Ilya had in mind, we can find Chrome’s version used by the WRS by looking at the server logs. It’s Chrome 41.0.2272.118.
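If you want to check your own logs, a rough sketch along these lines will surface whatever Chrome version Google's rendering fetches report. The log path and combined-log format are assumptions, and whether a Chrome/ token shows up at all depends on which user agents actually hit your server, so treat this as a starting point rather than a definitive test.

# Rough sketch: count the Chrome versions reported in the user agents of
# Google-related requests in an access log. Path and log format are assumptions.
import re
from collections import Counter

chrome_token = re.compile(r"Chrome/(\d+(?:\.\d+)*)")
versions = Counter()

with open("/var/log/nginx/access.log", encoding="utf-8", errors="ignore") as log:
    for line in log:
        if "Google" not in line:  # Googlebot, Google Search Console bot, etc.
            continue
        match = chrome_token.search(line)
        if match:
            versions[match.group(1)] += 1

for version, hits in versions.most_common():
    print(f"Chrome/{version}: {hits} requests")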
It will be updated sometime in the future
Chrome 41 was created two years ago (in 2015), so it’s far removed from the current version of the browser. However, as Ilya Grigorik said, an update is coming:
I was lucky enough to get Ilya Grigorik to read this article before it was published, and he provided a ton of valuable feedback on this topic. He mentioned that they are hoping to have the WRS updated by 2018. Fingers crossed!
Google uses Chrome 41 for rendering. What does that mean?
We now have some interesting information about how Google renders websites. But what does that mean, practically, for site developers and their clients? Does this mean we can now ignore server-side rendering and deploy client-rendered, JavaScript-rich websites?
Not so fast. Here is what Ilya Grigorik had to say in response to this question:
We now know the WRS’s capabilities for rendering JavaScript and how to debug them, which lets us troubleshoot and better diagnose problems. However, remember that not all crawlers support JavaScript crawling. As of today, JavaScript crawling is only supported by Google and Ask (and Ask is most likely powered by Google). Even if you don’t care about social media or search engines other than Google, one more thing to remember is that even with Chrome 41, not all JavaScript frameworks can be indexed by Google (read more about JavaScript frameworks crawling and indexing).
Don’t get your hopes up
All that said, there are a few reasons to keep your excitement at bay.
Remember that version 41 of Chrome is over two years old. It may not work very well with modern JavaScript frameworks. To test it yourself, open http://ift.tt/2yqUsXV using Chrome 41, and then open it in any up-to-date browser you are using.
The page in Chrome 41 looks like this:
The content parsed by Polymer is invisible (meaning it wasn’t processed correctly). This is also a perfect example for troubleshooting potential indexing issues. The problem you're seeing above can be solved if diagnosed properly. Let me quote Ilya:
"If you look at the raised Javascript error under the hood, the test page is throwing an error due to unsupported (in M41) ES6 syntax. You can test this yourself in M41, or use the debug snippet we provided in the blog post to log the error into the DOM to see it."
I believe this is another powerful tool for web developers willing to make their JavaScript websites indexable. We will definitely expand our experiment and work with Ilya’s feedback.
The Fetch and Render tool is the Chrome v. 41 preview
There's another interesting thing about Chrome 41. Google Search Console's Fetch and Render tool is simply a Chrome 41 preview. The right-hand-side view (“This is how a visitor to your website would have seen the page”) is generated by the Google Search Console bot, which is... Chrome 41.0.2272.118 (see screenshot below).
There's evidence that both Googlebot and Google Search Console Bot render pages using Chrome 41. Still, we don’t exactly know what the differences between them are. One noticeable difference is that the Google Search Console bot doesn’t respect the robots.txt file. There may be more, but for the time being, we're not able to point them out.
Chrome 41 vs Fetch as Google: A word of caution
Chrome 41 is a great tool for debugging Googlebot. However, sometimes (not often) there's a situation in which Chrome 41 renders a page properly, but the screenshots from Google Fetch and Render suggest that Google can’t handle the page. It could be caused by CSS animations and transitions, Googlebot timeouts, or the usage of features that Googlebot doesn’t support. Let me show you an example.
Chrome 41 preview:
Image blurred for privacy
The above page has quite a lot of content and images, but it looks completely different in Google Search Console.
Google Search Console preview for the same URL:
As you can see, Google Search Console’s preview of this URL is completely different from what you saw in the previous screenshot (Chrome 41). All the content is gone and all we can see is the search bar.
From what we’ve noticed, Google Search Console renders CSS a little differently than Chrome 41 does. This doesn’t happen often, but as with most tools, we need to double-check whenever possible.
This leads us to a question...
What features are supported by Googlebot and WRS?
According to the Rendering on Google Search guide:
Googlebot doesn't support IndexedDB, WebSQL, and WebGL.
HTTP cookies and local storage, as well as session storage, are cleared between page loads.
All features requiring user permissions (like Notifications API, clipboard, push, device-info) are disabled.
Google can’t index 3D and VR content.
Googlebot only supports HTTP/1.1 crawling.
The last point is really interesting. Despite statements from Google over the last 2 years, Google still only crawls using HTTP/1.1.
No HTTP/2 support (still)
We've mostly been covering how Googlebot uses Chrome, but there's another recent discovery to keep in mind.
There is still no support for HTTP/2 for Googlebot.
Since it's now clear that Googlebot doesn’t support HTTP/2, even if your website supports HTTP/2 you can’t drop HTTP/1.1 optimization: Googlebot can only crawl over HTTP/1.1.
There were several announcements recently regarding Google’s HTTP/2 support. To read more about it, check out my HTTP/2 experiment here on the Moz Blog.
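If you want to confirm that your server still answers cleanly over HTTP/1.1 as well as HTTP/2, a quick check with a client that can negotiate both is enough. The sketch below uses the third-party httpx library (pip install "httpx[http2]"); the URL is a placeholder.

# Quick sketch: verify a site responds over both HTTP/1.1 and HTTP/2.
# Uses the third-party httpx client; the URL is a placeholder.
import httpx

url = "https://example.com/"

with httpx.Client(http2=False) as h1_client:
    r1 = h1_client.get(url)
    print("HTTP/1.1-only client:", r1.http_version, r1.status_code)

with httpx.Client(http2=True) as h2_client:
    r2 = h2_client.get(url)
    print("HTTP/2-capable client:", r2.http_version, r2.status_code)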
Googlebot’s future
Rumor has it that Chrome 59’s headless mode was created for Googlebot, or at least that it was discussed during the design process. It's hard to say if any of this chatter is true, but if it is, it means that to some extent, Googlebot will “see” the website in the same way as regular Internet users.
This would definitely make everything simpler for developers who wouldn’t have to worry about Googlebot’s ability to crawl even the most complex websites.
Chrome 41 vs. Googlebot’s crawling efficiency
Chrome 41 is a powerful tool for debugging JavaScript crawling and indexing. However, it's crucial not to jump on the hype train here and start launching websites that “pass the Chrome 41 test.”
Even if Googlebot can “see” our website, there are many other factors that will affect your site’s crawling efficiency. As an example, we already have proof showing that Googlebot can crawl and index JavaScript and many JavaScript frameworks. It doesn’t mean that JavaScript is great for SEO. I gathered significant evidence showing that JavaScript pages aren’t crawled even half as effectively as HTML-based pages.
In summary
Ilya Grigorik’s tweet sheds more light on how Google crawls pages and, thanks to that, we don’t have to build experiments for every feature we're testing; we can use Chrome 41 for debugging instead. This simple step will definitely save a lot of websites from indexing problems, like when Hulu.com’s JavaScript SEO backfired.
It's safe to assume that Chrome 41 will now be a part of every SEO’s toolset.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
RankFlux Review Discount And Bonus
RankFlux Review - Are you looking for more information about RankFlux? Read my straightforward review before you buy, so you can weigh its strengths and weaknesses. Is it worth your time and money?
The Step-by-Step Guide to Improving Your Google Rankings Without Getting Penalized (Part 2)
Step #2: Assess your current search ranking
To improve your ranking, you first need to know where you stand. This is true even for brand-new sites. Luckily, there are a number of tools and RankFlux resources that can help.
Use SERPs.com to check your website's keyword ranking
If you have been targeting particular keywords in your page content, use SERPs.com to check your rank. Type in your keyword (I used "content marketing" in this example) and your website URL. You also have other options, like looking at Yahoo results, choosing between desktop or mobile, and drilling down into local rankings by city or ZIP code.
The results page will show a few things:
Where your website ranks in search engines for the term
The first page on your website that comes up in search results for the topic
Your average monthly searches
Your average cost-per-click for the keyword (for paid search like Google AdWords)
Check your website speed
Next, it's important to check site speed, as this is a major Google ranking factor.
If your website is slow, you have little chance of a high search position. It will also hurt your ability to convert and sell to new customers.
According to WebPerformanceToday, Walmart saw a sharp decline in conversions when its page load times increased from one to four seconds.
If this happens, it doesn't matter how good your on-page SEO, meta descriptions, or title tags are. The search algorithm will penalize you, even if you're a giant like Walmart.
That's why it's important to run your own website speed test to figure out how to improve your Google rankings. There are plenty of tools that can help you do this. Some of my favorites are:
Google PageSpeed Insights
WebPageTest
YSlow browser extension
Pingdom Website Speed Test
Rigor
Here's how to test your page speed with Pingdom.
Go to tools.pingdom.com and enter your URL. Pick the location you want to test from and click "Start Test."
Quick Sprout registers a performance grade of 81. As long as your site scores over 50, that's a good start.
If you get a performance grade of less than 50, your page is very slow and you need to work on improving it.
Check the page load time as well.
Quick Sprout is doing quite well at 1.89 seconds. Aim for under two seconds for a truly fast site, and under one second for mobile devices. According to research cited by Optimizely, artificial latency added to the Telegraph's website caused an 11% drop in page views for a four-second delay in load time, and a 44% drop for a 20-second delay. So anything more than a couple of seconds of load time could cost you a substantial amount of traffic.
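If you'd rather script these checks than run them by hand, the Google PageSpeed Insights tool listed above also has a public API. Here is a minimal sketch; the target URL is a placeholder and the API key parameter is optional for light use.

# Minimal sketch: fetch a performance score from the PageSpeed Insights API (v5).
# The target URL is a placeholder; add your own API key for heavier use.
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

params = {
    "url": "https://example.com/",
    "strategy": "mobile",  # or "desktop"
    # "key": "YOUR_API_KEY",  # hypothetical key, optional for light use
}

data = requests.get(PSI_ENDPOINT, params=params, timeout=60).json()
lighthouse = data["lighthouseResult"]
score = lighthouse["categories"]["performance"]["score"] * 100
fcp = lighthouse["audits"]["first-contentful-paint"]["displayValue"]

print(f"Performance score: {score:.0f}/100")
print(f"First Contentful Paint: {fcp}")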
Check your website's health
After looking at keyword rankings and site speed, evaluate the health of your website before you begin to optimize.
Have you experienced an unexpected drop in organic traffic after months or years of consistency?
Are you wondering whether Google has deindexed (or banned) your site?
There are plenty of excellent tools to help put this puzzle together. Try the MxToolBox Domain Health Report tool to check for major issues in five different categories: general website problems, blacklist, mail server, web server, and DNS.
Simply click the box for each category to see specific errors and warnings. From there, you can work through them one by one.
If you want to check whether you have been penalized by one of the major algorithm updates, look into FE International's.
RankFlux Review & Overview
Designer: Seun Ogundele
Item: RankFlux
Launch Date: 2019-Apr-05
Launch Time: 11:00 EDT
Front-End Price: $27
Sales Page: https://www.socialleadfreak.com/rankflux-review/
Niche: Software
What Is RankFlux?

RankFlux is a WordPress plugin that gets you unlimited Google page #1 rankings and free targeted buyer traffic, sales, and leads in one click!
It enables you to quickly and easily outrank your competitors and get page #1 rankings 99.9% of the time, regardless of any Google update, by uncovering hidden high-traffic, low-competition keywords in seconds that you can add to your site in one click...
In just a few clicks you can set up this free traffic application, then sit back and let the plugin rank your website and bring you targeted traffic every minute.
RankFlux Features & Benefits
Gets you hundreds of Google page #1 rankings and free targeted buyer traffic in no time.
Keyword Finder: Discover high-traffic, relevant, low-competition keywords for your website, blog, or business.
Competitor Analysis: Complete SEO competition analysis for any keyword, so you can quickly outrank your competitors on Google page #1.
Google Suggest Keywords: Long-tail keyword suggestions and optimization, including search volume, CPC, and PPC data.
Rank Spy: Get the keywords that any domain or page is ranking for. This tool retrieves: organic and paid keywords, keyword positions, result snippets, extra SERP elements, search volume, cost per click, competition value, site positions, traffic cost, and estimated traffic value.
Content Analysis for Long-Tail Keywords: Beyond the main keyword, RankFlux checks your content and recommends optimizations for each keyword, such as keyword density and keyword formatting.
Long-Tail Keyword Extraction From Content: It extracts useful keywords from your content and suggests long-tail keywords based on it.
Spy on Competitor Backlink Strategy: RankFlux examines how your competitors rank their keywords through backlinks. It gets you details like: their backlink sources; backlink analysis (traffic, domain strength); and backlink type (affiliate, .gov, short link, organic link, naked link).
On-Page SEO Analysis: Provides a complete list of on-site links. It helps improve the site's internal linking patterns and remove broken pages. In addition to links you'll get: URLs of linked pages, nofollow/dofollow attributes, text before and after the anchor, relevance, link type, and link status.
Question Keyword Suggestion Tool: Enrich your content with question suggestions drawn from Google's question data.
- Generate better headlines using questions. - Get search volume for the question keywords.
1-Click Report Generation: Download keyword and SEO reports as CSV, Excel, PDF, or TXT files, or print them, in one click.
Automatic Search Engine Submitter: Automatically notifies the search engines (Google, Yahoo, Bing, Yandex) that you have a new post to index, so your content gets indexed right after publishing.
Auto-generate focus keywords.
Auto-optimize the meta description for each post.
Auto-add long-tail keywords to post/page titles, body, tags, categories, and menus.
Multiple Countries & Languages Supported: Search and analyze keywords for any country or language.
Faster Site Indexer: Speeds up your indexing time to almost immediate; super useful for news sites and blogs.
Smart Internal Linking Manager: Improves internal linking automatically, auto-linking your long-tail keywords to tags and terms in your content.
No API Required: No third-party service or API is needed; RankFlux is simply plug and play.
Plug-and-play automated plugin.
Conclusion
"It's A Great Deal. Should I Invest Today?"
Not only are you getting access to RankFlux at the best price ever offered, you're also investing entirely risk-free. RankFlux comes with a 30-day money-back guarantee. When you choose RankFlux, your satisfaction is guaranteed. If you are not completely happy with it for any reason within the first thirty days, you're entitled to a full refund, no questions asked. You have nothing to lose! What are you waiting for? Try it today and get the following bonus now!
2018-04-04 12 SEO now
SEO
Ahrefs Blog
How many websites are mining cryptocurrency? We analyzed 175M+ domains to find out.
9 Actionable Ways To Find Anyone’s Email Address [Updated for 2018]
White Hat SEO: How to Play by the Rules and Win
How to Find and Fix Keyword Cannibalization Issues (in Seconds)
How to Use Google Trends for Keyword Research: 7 Effective Ways
ClickZ
Real-time advisor brands: Mobile customers usher in a new age of AI assistance
Augmented reality, outer space and emerging technology: How USA TODAY is revolutionizing journalism
Helping you choose the right SEO tool
Google, Amazon and the relationship between paid search and ecommerce
Crypto marketing: What does the recent Google/Twitter Ad Ban actually mean
Local SEO guide
Hacked Site Manual Action Report from Google? Try Bing To Find It
Peace Out To You, Sugar Rae!
Who Is Next In Local Listings Management?
A NodeJS Script for Accessing the Google Search Console API
Why SEOs Will Always Have Jobs…
Moz
The Guide to Local Sponsorship Marketing - The 2018 Edition
How to Target Featured Snippet Opportunities - Whiteboard Friday
MozCon 2018: The Initial Agenda
Just How Much is Your Website Worth, Anyhow? An Easy Guide to Valuation
Presenting Your Findings: How to Create Relevant and Engaging SEO Reports - Next Level
Reddit SEO
google analytics question
What are the best ways to start SEO as a beginner?
Free SEO Tools, like really free ?
Does anyone know anything about the reputation of a company called Main Street Host?
What besides backlinking can someone do without having access to a website's content?
SEO Book Blog
Left is Right & Up is Down
Grist for the Machine
Virtual Real Estate
Rank Checker Update
DMOZ Shut Down
SEO by the Sea
Related Questions are Joined by ‘People Also Search For’ Refinements; Now Using a Question Graph
Google’s Mobile Location History
Does Google Use Latent Semantic Indexing?
Google Targeted Advertising, Part 1
Google Giving Less Weight to Reviews of Places You Stop Visiting?
Search Engine Journal
Snapchat Now Offers Group Video Chats by @MattGSouthern
Third-party Instagram Apps Are Broken as Company Reportedly Limits API Access by @MattGSouthern
Google Enhances Movie Search With Ratings, Showtimes, and More by @MattGSouthern
Google’s New Shopping Actions Program: What You Need to Know by @ebkendo
3 Best Practices for Dynamic Search Ads
Search Engine Land
Updated: Google’s Ben Gomes expands role to head all of search as John Giannandrea moves to Apple to head AI
SearchCap: Google’s AMP Project, John Harrison Google doodle & more
Learn the benefits of pay-per-call for affiliate marketing
Google’s AMP Project announces new consent component ahead of GDPR compliance deadline
How to use the Knowledge Graph for higher rankings
Search Engine Roundtable
Daily Search Forum Recap: April 3, 2018
Google: Index Status & Index Coverage Reports Are Compiled Slightly Differently
Google: Structured Data Doesn't Give You A Ranking Boost But Can Help Rankings
Google Rickrolls SEOs With Recrawl Now Button
Ben Gomes To Take Over Search At Google, Giannandrea Steps Down
Search Engine Watch
Hyperlocal SEO: What is it, and how can you make sure you do it right?
AI and machine learning: What you do and don’t need to know for SEO
How to migrate your WordPress website domain name
A guide to the standard reports in Google Analytics: Audience reports
12 tips to improve your reputation strategy
Searchmetrics Blog
Unwrapping the Secrets of SEO: Google Mobile-First Rollout Creates Opportunity to Profit or Perish
Memo to Modern Marketer: Seven Tips to Successfully Tag-Teaming Your Search and Content Strategy
Universal Search 2018: 5 Facts for Online Marketers
Google Core Algorithm Update: The Phantom Comes Out of the Shadows
NCAA March Madness: Painting the Court With Numbers
Yoast
Yoast SEO 7.2: Fixes and improvements
Is Fact Check for Yoast SEO a joke?
Get our new free plugin: Fact Check for Yoast SEO 1.4 Beta
Ask Yoast: Bad slug for Arabic URLs
How to use Yoast SEO to write an awesome meta description
Baking Structured Data Into The Design Process
About The Author
Frederick O’Brien is a freelance journalist who conforms to most British stereotypes. His interests include American literature, graphic design, sustainable … More about Frederick …
Retrofitting search engine optimization only gets you so far. As metadata gets smarter, it’s more important than ever to build it into the design process from the start.
Search engine optimization (SEO) is essential for almost every kind of website, but its finer points remain something of a specialty. Even today SEO is often treated as something that can be tacked on after the fact. It can be, up to a point, but it really shouldn’t be. Search engines get smarter every day and there are ways for websites to be smarter too.
The foundations of SEO are the same as they’ve always been: great content clearly labeled will win the day sooner or later — regardless of how many people try to game the system. The thing is, those labels are far more sophisticated than they used to be. Meta titles, image alt text, and backlinks are important, but in 2020, they’re also fairly primitive. There is another tier of metadata that only a fraction of sites are currently using: structured data.
All search engines share the same purpose: to organize the web’s content and deliver the most relevant, useful results possible to search queries. How they achieve this has changed enormously since the days of Lycos and Ask Jeeves. Google alone uses more than 200 ranking factors, and those are just the ones we know about.
SEO is a huge field nowadays, and I put it to you that structured data is a really, really important factor to understand and implement in the coming years. It doesn’t just improve your chances of ranking highly for relevant queries. More importantly, it helps make your websites better — opening it up to all sorts of useful web experiences.
Recommended reading: Where Does SEO Belong In Your Web Design Process?
What Is Structured Data?
Structured data is a way of labeling content on web pages. Using vocabulary from Schema.org, it removes much of the ambiguity from SEO. Instead of trusting the likes of Google, Bing, Baidu, and DuckDuckGo to work out what your content is about, you tell them. It’s the difference between a search engine guessing what a page is about and knowing for sure.
As Schema.org puts it:
By adding additional tags to the HTML of your web pages — tags that say, “Hey search engine, this information describes this specific movie, or place, or person, or video” — you can help search engines and other applications better understand your content and display it in a useful, relevant way.
Schema.org launched in 2011, a project shared by Google, Microsoft, Yahoo, and Yandex. In other words, it’s a ‘bipartisan’ effort — if you like. The markup transcends any one search engine. In Schema.org’s own words,
“A shared vocabulary makes it easier for webmasters and developers to decide on a schema and get the maximum benefit for their efforts.”
It is in many respects a more expansive cousin of microformats (launched around 2005) which embed semantics and structured data in HTML, mainly for the benefit of search engines and aggregators. Although microformats are currently still supported, the ‘official’ nature of the Schema.org library makes it a safer bet for longevity.
JSON for Linked Data (JSON-LD) has emerged as the dominant underlying standard for structured data, although Microdata and RDFa are also supported and serve the same purpose. Schema.org provides examples for each type depending on what you’re most comfortable with.
As an example, let’s say Joe Bloggs writes a review of Joseph Heller’s 1961 novel Catch-22 and publishes it on his blog. Sadly, Bloggs has poor taste and gives it two out of five stars. For a person looking at the page, this information would be understood unthinkingly, but computer programs would have to connect several dots to reach the same conclusion.
With structured data, the following markup could be added to the page’s <head> code. (This is a JSON-LD approach. Microdata and RDFa can be used to weave the same information into <body> content):
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Book",
  "name": "Catch-22",
  "author": {
    "@type": "Person",
    "name": "Joseph Heller"
  },
  "datePublished": "1961-11-10",
  "review": {
    "@type": "Review",
    "author": {
      "@type": "Person",
      "name": "Joe Bloggs"
    },
    "reviewRating": {
      "@type": "Rating",
      "ratingValue": "2",
      "worstRating": "0",
      "bestRating": "5"
    },
    "reviewBody": "A disaster. The worst book I've ever read, and I've read The Da Vinci Code."
  }
}
</script>
This sets in stone that the page is about Catch-22, a novel by Joseph Heller published on November 10th, 1961. The reviewer has been identified, as has the parameters of the scoring system. Different schemas can be combined (or tiered) to describe different things. For example, through tagging of this sort, you could make clear a page is the event listing for an open-air film screening, and the film in question is The Life Aquatic with Steve Zissou by Wes Anderson.
Recommended reading: Better Research, Better Design, Better Results
Why Does It Matter?
Ok, wonderful. I can label my website up to its eyeballs and it will look exactly the same, but what are the benefits? To my mind, there are two main benefits to including structured data in websites:
It makes search engine’s jobs much easier. They can index content more accurately, which in turn means they can present it more richly.
It helps web content to be more thorough and useful. Structured data gives you a ‘computer perspective’ on content. Quality content is fabulous. Quality content thoroughly tagged is the stuff of dreams.
You know when you see snazzy search results that include star ratings? That’s structured data. Rich snippets of film reviews? Structured data. When a selection of recipes appear, ingredients, preparation time and all? You guessed it. Dig into the code of any of these pages and you’ll find the markup somewhere. Search engines reward sites using structured data because it tells them exactly what they’re dealing with.
Examine the code on the websites featured above and sure enough, structured data is there.
It’s not just search either, to be clear. That’s a big part of it but it’s not the whole deal. Structured data is primarily about tagging and organizing content. Rich search results are just one way for said content to be used. Google Dataset Search uses Schema.org/Dataset markup, for example.
Below are a handful of examples of structured data being useful:
Recipes
Reviews
FAQs
Voice queries
Event listings
Content Actions
There are thousands more. Like, literally. Schema.org even fast-tracked the release of markup for Covid-19 recently. It’s an ever-growing library.
In many respects, structured data is a branch of the Semantic Web, which strives for a fully machine-readable Internet. It gives you a machine-readable perspective on web content that (when properly implemented) feeds back into richer functionality for people.
As such, just about anyone with a website would benefit from knowing what structured data is and how it works. According to W3Techs, only 29.6% of websites use JSON-LD, and 43.2% don’t use any structured data formats at all. There’s no obligation, of course. Not everyone cares about SEO or being machine-readable. On the flip side, for those who do there’s currently a big opportunity to one-up rival sites.
In the same way that HTML forces you to think about how content is organized, structured data gets you thinking about the substance. It makes you more thorough. Whatever your website is about, if you comb through the relevant schema documentation you’ll almost certainly spot details that you didn’t think to include beforehand.
As humans, it is easy to take for granted the connections between information. Search engines and computer programs are smart, but they’re not that smart. Not yet. Structured data translates content into terms they can understand. This, in turn, allows them to deliver richer experiences.
Resources And Further Reading
“The Beginner’s Guide To Structured Data For SEO: A Two-Part Series,” Bridget Randolph, Moz
“What Is Schema Markup And Why It’s Important For SEO,” Chuck Price, Search Engine Journal
“What Is Schema? Beginner‘s Guide To Structured Data,” Luke Harsel, SEMrush
“JSON-LD: Building Meaningful Data APIs,” Benjamin Young, Rollout Blog
“Understand How Structured Data Works,” Google Search for Developers
“Marking Up Your Site With Structured Data,” Bing
Incorporating Structured Data Into Website Design
Weaving structured data into a website isn’t as straightforward as, say, changing a meta title. It’s the data DNA of your web content. If you want to implement it properly, then you need to be willing to get into the weeds — at least a little bit. Below are a few simple steps developers can take to weave structured data into the design process.
Note: I personally subscribe to a holistic approach to design, where design and substance go hand in hand. Juggling a bunch of disciplines is nothing new to web design, this is just another one, and if it’s incorporated well it can strengthen other elements around it. Think of it as an enhancement to your site’s engine. The car may not look all that different but it handles a hell of a lot better.
Start With A Concept
I’ll use myself as an example. For five years, two friends and I have been reviewing an album a week as a hobby (with others stepping in from time to time). Our sneering, insufferable prose is currently housed in a WordPress site, which — under my well-meaning but altogether ignorant care — had grown into a Frankenstein’s monster of plugins.
We are in the process of redesigning the site which (among other things) has entailed bringing structured data into the core design. Here, as with any other project, the first thing to do is establish what your content is about. The better you answer this question, the easier everything that follows will be.
In our case, these are the essentials:
We review music albums;
Each review has three reviewers who each write a summary by choosing up to three favorite tracks and assigning a personal score out of ten;
These three scores are combined into a final score out of 30;
From the three summaries, a passage is chosen to serve as an ‘at-a-glance’ roundup of all our thoughts.
Some of this may sound a bit specific or even a bit arbitrary (because it is), but you’d be surprised how much of it can be woven together using structured data.
Below is a mockup of what the revamped review pages will look like, and the information that can be translated into schema markup:
Even the most sprawling content is packed full of information just waiting to be tagged and structured.
There’s no trick to this process. I know what the content is about, so I know where to look in the documentation. In this case, I go to Schema.org/MusicAlbum and am met with all manner of potential properties, including:
albumReleaseType
byArtist
genre
producer
datePublished
recordedAt
There are dozens; some exclusive to MusicAlbum, others falling under the larger umbrella of CreativeWork. Digging deeper into the documentation, I find that the markup can connect to MusicBrainz, a music metadata encyclopedia. The same process unfolds when I go to the Review documentation.
From that one simple page, the following information can be gleaned and organized:
<script type="application/ld+json">
{
  "@context": "http://schema.org/",
  "@type": "Review",
  "reviewBody": "Whereas My Love is Cool was guilty of trying too hard no such thing can be said of Visions. The riffs roar and the melodies soar, with the band playing beautifully to Ellie Rowsell's strengths.",
  "datePublished": "October 4, 2017",
  "author": [{
    "@type": "Person",
    "name": "André Dack"
  }, {
    "@type": "Person",
    "name": "Frederick O'Brien"
  }, {
    "@type": "Person",
    "name": "Marcus Lawrence"
  }],
  "itemReviewed": {
    "@type": "MusicAlbum",
    "@id": "https://musicbrainz.org/release-group/7f231c61-20b2-49d6-ac66-1cacc4cc775f",
    "byArtist": {
      "@type": "MusicGroup",
      "name": "Wolf Alice",
      "@id": "https://musicbrainz.org/artist/3547f34a-db02-4ab7-b4a0-380e1ef951a9"
    },
    "image": "https://lesoreillescurieuses.files.wordpress.com/2017/10/a1320370042_10.jpg",
    "albumProductionType": "http://schema.org/StudioAlbum",
    "albumReleaseType": "http://schema.org/AlbumRelease",
    "name": "Visions of a Life",
    "numTracks": "12",
    "datePublished": "September 29, 2017"
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": 27,
    "worstRating": 0,
    "bestRating": 30
  }
}
</script>
And honestly, I may yet add a lot more. Initially, I found the things that are already part of a review page’s structures (i.e. artist, album name, overall score) but then new questions began to present themselves. What could be clearer? What could I add?
This should obviously be counterbalanced by questions of what’s unnecessary. Just because you can do something doesn’t mean that you should. There is such a thing as ‘too much information’. Still, sometimes a bit more detail can really take a page up a notch.
Familiarize Yourself With Schema
There’s no way around it; the best way to get the ball rolling is to immerse yourself in the documentation. There are tools that implement it for you (more on those below), but you’ll get more out of the markup if you have a proper sense of how it works.
Trawl through the Schema.org documentation. Whoever you are and whatever your website’s for, the odds are that there are plenty of relevant schemas. The site is very good with examples, so it needn’t remain theoretical.
The step beyond that, of course, is to find rich search results you would like to emulate, visiting the page, and using browser dev tools to look at what they’re doing. They are often excellent examples of websites that know their content inside out. You can also feed code snippets or URLs into Google’s Structured Data Markup Helper, which then generates appropriate schema.
Tools like Google’s Structured Data Markup Helper are excellent for getting to grips with how structured data works.
The fundamentals are actually very simple. Once you get your head around them, it’s the breadth of options that take time to explore and play around with. You don’t want to be that person who gets to the end of a design process, looks into schema options, and starts second-guessing everything that’s been done.
Ask The Right Questions
Now that you’re armed with your wealth of structured data knowledge, you’re better positioned to lay the foundations for a strong website. Structured data rides a fairly unique line. In the immediate sense, it exists ‘under the hood’ and is there for the benefit of computers. At the same time, it can enable richer experiences for the user.
Therefore, it pays to look at structured data from both a technical and user perspective. How can structured data help my website be better understood? What other resources, online databases, or hardware (e.g. smart speakers) might be interested in what you’re doing? What options appear in the documentation that I hadn’t accounted for? Do I want to add them?
It is especially important to identify recurring types of content. It’s safe to say a blog can expect lots of blog posts over time, so incorporating structured data into post templates will yield the most results. The example I gave above is all well and good on its own, but there’s no reason why the markup process can’t be automated. That’s the plan for us.
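As a rough illustration of what that automation might look like, the sketch below builds the same review markup from plain data with Python's standard json module. The build_review_jsonld helper and its inputs are hypothetical stand-ins; in practice the values would come from the CMS at render time.

# Rough sketch: generate Review + MusicAlbum JSON-LD from template data.
# The helper and the field values are hypothetical stand-ins for CMS data.
import json

def build_review_jsonld(album, artist, reviewers, score, worst=0, best=30):
    return {
        "@context": "http://schema.org/",
        "@type": "Review",
        "author": [{"@type": "Person", "name": name} for name in reviewers],
        "itemReviewed": {
            "@type": "MusicAlbum",
            "name": album,
            "byArtist": {"@type": "MusicGroup", "name": artist},
        },
        "reviewRating": {
            "@type": "Rating",
            "ratingValue": score,
            "worstRating": worst,
            "bestRating": best,
        },
    }

markup = build_review_jsonld(
    "Visions of a Life",
    "Wolf Alice",
    ["André Dack", "Frederick O'Brien", "Marcus Lawrence"],
    score=27,
)
print('<script type="application/ld+json">')
print(json.dumps(markup, indent=2, ensure_ascii=False))
print("</script>")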
Consider also the ways that people might find your content. If there are opportunities to, say, highlight a snippet of copy for use in voice search, do it. It’s that, or leave it to search engines to work it out for themselves. No one knows your content better than you do, so make use of that understanding with descriptive markup.
You don’t need to guess how content will be understood with structured data. With tools like Google’s Rich Results Tester, you can see exactly how it gives content form and meaning that might otherwise have been overlooked.
Resources And Further Reading
Quality Content Deserves Quality Markup
You’ll find no greater advocate of great content than me. The SEO industry loses its collective mind whenever Google rolls out a major search update. The response to the hysteria is always the same: make quality content. To that I add: mark it up properly.
Familiarize yourself with the documentation and be clear on what your site is about. Every piece of information you tag makes it that much easier for it to be indexed and shared with the right people.
Whether you’re a Google devotee or a DuckDuckGo convert, the spirit remains the same. It’s not about ranking so much as it is about making websites as good as possible. Accommodating structured data will make other aspects of your website better.
You don’t need to trust tech to understand what your content is about — you can tell it. From reviews to recipes to audio search, developers can add a whole new level of sophistication to their content.
The heart and soul of optimizing a website for search have never changed: produce great content and make it as clear as possible what it is and why it’s useful. Structured data is another tool for that purpose, so use it.
(ra, yk, il)
Website Design & SEO Delray Beach by DBL07.co
Delray Beach SEO
source http://www.scpie.org/baking-structured-data-into-the-design-process/ source https://scpie.tumblr.com/post/614974130235785216
0 notes
Text
Mangools review: SEO Tools that will skyrocket your online presence « 3nions
Do you want to improve your search engine optimization?
Well, of course you do, and online SEO tools can help you a lot.
SEO tools such as keyword researchers and organic traffic analyzers can help any business find its target audience and reach them more easily.
But which SEO tools will give you the best return? And which ones fit your budget?
Not to worry: below is an unbiased review of Mangools and its four SEO tools.
What exactly is Mangools?
Mangools launched in 2014 with a single SEO tool, KWFinder. After KWFinder's success, Mangools expanded from one tool to four growth-hacking SEO tools.
They include KWFinder, SERPChecker, SERPWatcher, and LinkMiner.
How is Mangools different from other SEO tools?
Doesn't burn a hole in your pocket: Mangools won't cost you thousands of dollars in monthly agency-plan subscriptions. On top of that, many users report getting back 3x to 4x their investment in website traffic, leads, and sales. We discuss this further in the pricing section below.
Variety of tools: Variety matters. In a restaurant you rarely order the same dish every time; you want to try a bit of everything. Mangools is the SEO equivalent, serving up several different tools instead of just one.
Complete user guides available: Every tool needs a manual if you want to use it properly, and you need to cover all the SEO factors to stay safe from Google updates. This is where Mangools stands out: its marketer Maros Kortis has written a complete guide for every SEO tool.
Mangools review: 4 SEO Tools that will skyrocket your online presence
1. KWFinder: the one solution to your Keyword Research
Keyword research is a must in online marketing for finding leads and increasing sales. KWFinder simplifies the whole keyword research process and helps you find the right keywords to target.
What makes KWFinder different is its user interface, which surfaces a number of metrics (including Majestic and Moz metrics) that give you a clear idea of keyword difficulty.
The KWFinder interface offers three search modes for more advanced research:
Suggestions: the default mode, which returns a list of suggestions based on your seed keyword. The suggestions come from Google Keyword Planner and KWFinder's own algorithm, which helps in hunting down unique, targeted keywords.
Autocomplete: ever considered that your target keyword might be an incomplete version of a more specific phrase? Autocomplete finds those longer keywords, which you can use to increase your online presence. The results are derived from Google's API, so they reflect what people are actually searching for.
Questions
When we don't understand something, we often google it. That means there are plenty of keywords phrased as questions waiting to be answered. Mangools built the Questions mode for exactly this: it returns a generous list of question keywords related to your seed term.
Beyond finding new keywords, KWFinder also has excellent filtering features that help you uncover gems.
You can easily sort and filter keywords by search volume, PPC, CPC, and more.
Why should you opt for KWFinder?
Saves time, since all the research and analysis happens in one place.
Able to generate localized keywords.
Gives accurate keyword suggestions and data.
Filter features help in finding keywords based on your needs.
The Questions mode helps in developing blog post ideas.
2. SERPChecker: The complete solution to stalk your competition.
Did you know that by studying your competition you can cut down your overall research and planning work?
SERPChecker helps you do that stalking very professionally. :p
To get started, just enter a keyword and SERPChecker will show you all of the competitor pages that rank for it.
Want access to SERPChecker? Get a Mangools account now.
For example, say you are targeting the keyword "SERPChecker review" and want to gauge the overall competition. Simply follow these steps.
Step 1: Enter “SERPChecker review” in the search bar.
Step 2: Select the “Analyse SERP” button.
Wait for the results to load and you will see a page with all the key information you were looking for about your keyword, such as difficulty (the closer to 100, the harder it is to rank), DA, TF, and so on.
Isn’t it amazing?
The steps above show how you can use SERPChecker to analyze any keyword.
Why should you opt for SERPChecker?
Filter available for localized results.
Can easily differentiate between mobile and desktop searches.
Organic results can be evaluated quickly.
Makes in-depth research work easier.
3. SERPWatcher: Because rank tracking is a must!
Once you have finished your keyword research and scoped out the competition, you will have started your actual SEO work.
But have you started tracking your rankings on the SERPs?
SERPWatcher comes in handy here, as it is a complete package for tracking your web pages and keywords. Simply add a new tracking in your dashboard and you will have access to SERPWatcher.
Here's how the process works:
Step 1: Create a new tracking in the dashboard for your website. This can be easily done by just selecting “Add new tracking” in the right corner. You can then add domain names and keywords.
Step 2: Select “start tracking” so that SERPWatcher immediately starts tracking all the keywords.
Step 3: All the data will be available in SERPWatcher in few seconds.
You will notice a metric called Dominance Index; it estimates your organic traffic based on your current rankings. It is calculated from the following factors (a rough sketch of this kind of estimate appears after the list):
Search volume according to Google Keyword Planner.
SERP position.
The number of keywords tracked.
Average CTR.
Aggregate value if you ranked for all keywords.
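As a rough illustration of how a metric like this can be put together: the sketch below combines rank, search volume, and an assumed click-through rate per position. It is a generic estimate, not Mangools' actual Dominance Index formula, and the CTR figures and example keywords are placeholder assumptions.

# Rough sketch of estimating organic visits from rank, volume, and CTR.
# The CTR-by-position table is a placeholder assumption, not Mangools' data.
CTR_BY_POSITION = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}  # positions beyond 5 get 2%

def estimated_visits(keywords):
    """keywords: list of (monthly_search_volume, serp_position) tuples."""
    total = 0.0
    for volume, position in keywords:
        ctr = CTR_BY_POSITION.get(position, 0.02)
        total += volume * ctr
    return total

tracked = [(1000, 3), (400, 1), (2500, 8)]  # invented example keywords
print(round(estimated_visits(tracked)))     # rough monthly organic visits estimate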
You can also see all metrics related to all of your keywords that you added in the dashboard. This way you can easily decide which of your pages are performing well and which aren’t.
Why should you opt for SERPWatcher?
Mobile and desktop results can be tracked simultaneously.
Keywords can be organized with the help of tags.
Reports can be exported in CSV format.
Daily updates will be provided about your keyword position.
Tracking can be made demographically.
Daily ranking, visit, and Dominance Index charts are easy to find for any keyword.
4. LinkMiner: Backlinks matter for SEO too.
LinkMiner surfaces the backlink data of your competition, providing a number of metrics, embedded previews, and filtering options that genuinely help improve your search presence.
How to get started with LinkMiner
To start, enter a valid domain or any URL in the search bar and select "Find backlinks".
You can select either the "multiple links" option or the "one domain per link" option, depending on your needs.
LinkMiner then displays:
Lost backlinks (deleted ones)
New backlinks (freshly made backlinks from last 30 days)
All backlinks (the mixture of lost and new from last 90 days)
LinkMiner gives the most attention to TF (Trust Flow) and CF (Citation Flow) for link evaluation. Along with those, it also takes these four metrics into consideration:
AR: Alexa rank
FB: Number of Facebook social signals
LS: Link strength
EL: Number of external links
Why should you opt for LinkMiner?
Shows the pages that contain the backlinks directly in the tool.
Easy to use filter helps to categorize the links more efficiently.
You can save your top backlinks and mark them as favorites.
Now that you have read this much about Mangools, we have some news for you.
Mangools is rolling out a new SEO tool called SiteProfiler; let's get a clear idea of how it can help your business.
5. SiteProfiler: Analyzing a website can provide you with gems!
SiteProfiler can save you a lot of time by analyzing your whole website and providing all the necessary SEO metrics, such as organic keywords, rankings, backlinks, domain quality, DA, PA, Trust Flow, traffic, and competitors.
You can also use this tool to analyze your competitors' keywords and pick a few of them for yourself.
Why should you opt for SiteProfiler?
Great tool to get a complete report of the whole website with all the necessary metrics
Easy User Interface
Organic ranking and keyword can be easily obtained
PS: It is currently in beta, so if you want to try what may become the best website analysis tool in the suite, now is the time. Don't overthink it; just sign up and get started.
That was an overview of the five SEO tools offered by Mangools. Now let's have a look at their pricing structure.
Mangools pricing structure
Mangools free account
Before buying something, people often want to try it first. Mangools offers a free account: simply register and you get access to the tools, limited to 50 keywords and 5 lookups per 24 hours.
Mangools paid account
If you are serious about your website's SEO, you should definitely opt for a Mangools paid plan.
Mangools offers 3 different plans: Basic, Premium, and Agency.
1. Mangools Basic plan: The Basic plan costs $29.90 per month (billed annually at $358.80). It is suitable mostly for bloggers doing their own SEO. Below you will find all the features included in this plan:
2. Mangools Premium plan: The Premium plan costs $39.90 per month (billed annually at $478.80). It is perfect for full-time bloggers, entrepreneurs, and SEO experts. Below you will find all the features included in this plan:
3. Mangools Agency plan: The Agency plan costs $79.90 per month (billed annually at $958.80). It is a complete package suitable for those running agencies. Below you will find all the features included in this plan:
That was an in-depth review of Mangools and its SEO tools, which can genuinely help skyrocket your online presence. If you have any questions about Mangools, drop a comment below and we will be happy to help.
Source
https://www.3nions.com/mangools-review-seo-tools-that-will-skyrocket-your-online-presence/
0 notes
Text
5 Types of SEO Audits for B2B Organizations
The term "SEO audit" can be quite ambiguous. I have seen all kinds of SEO audits from colleagues and competitors, using a variety of assessments and areas of focus. While some of these audits could certainly be questioned, many have been quite thorough and impactful.

Of course, B2B marketers have varying expectations of an SEO audit as well. While practically every audit addresses SEO best practices, harder questions emerge when tackling the expectation of strategic direction. To that extent, SEO auditing can set the wrong expectations or miss the mark on client objectives.

Fortunately, most organizations recognize that there are fundamental issues that can be uncovered in SEO auditing, regardless of scope. By resolving these problems, B2B organizations may realize significant long-term gains in organic search performance. But B2B marketers do not always have to invest considerable dollars in SEO auditing at the start. There are several specific kinds of SEO audits that require little expense or involvement from a vendor or agency. Before your organization decides to engage a vendor for its SEO program (even KoMarketing!), consider these five types of SEO audits to determine whether there are easy wins and immediate opportunities to improve performance.

The Duplicate Content Audit

One of the first aspects of a site to examine for SEO is its copy. The key is to ensure the same copy is not accessible from multiple web addresses. As highlighted in recent coverage here at KoMarketing (http://www.komarketingassociates.com/industry-news/duplicate-content-and-missing-alt-attributes-named-most-common-on-site-seo-issues-3159/), anonymized data gathered from 100,000 sites by SEMrush revealed that 50% of sites deal with duplicate content issues. How can your organization make sure it is on the right side of that benchmark? Here are a few suggestions.

In recent posts, my colleague Ryan Young details two approaches for identifying and avoiding duplicate content issues. First, consider performing a "[keyword] site:domain.com" search to reveal whether duplicate content might exist on the site. Here's an example of the existing assets we have on KoMarketing for the phrase "B2B content marketing". With this information in hand, we know to avoid reproducing this material in future content development.

Second, consider using the Small SEO Tools Plagiarism Checker. This free resource scans content (up to 1,000 words per search) and flags any complete sentences that already exist somewhere on the internet. A score of 100% means your content is original and safe from duplicate content issues. Screaming Frog is another resource that can be used to identify duplicate copy and tagging across web pages; their user guide provides specific instructions for diagnosing duplicate content problems.

Finally, ensure that web pages include a rel="canonical" tag. This tag helps mitigate problems with unintentional indexing of campaign-tracking parameters or session IDs in web addresses. Screaming Frog can also be used to audit the inclusion of this tag across website templates.
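If you just want a quick spot-check without a full crawler, a short script can report which pages declare a canonical URL. This is only a minimal sketch (it assumes plain server-rendered HTML and uses invented example URLs), not a replacement for Screaming Frog.

# Minimal canonical-tag spot-check (assumes server-rendered HTML pages).
# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

PAGES = [  # hypothetical URLs; replace with your own list or sitemap export
    "https://www.example.com/",
    "https://www.example.com/blog/",
    "https://www.example.com/services.html",
]

for url in PAGES:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    canonical = soup.find("link", rel="canonical")
    if canonical and canonical.get("href"):
        print(f"{url} -> canonical: {canonical['href']}")
    else:
        print(f"{url} -> MISSING canonical tag")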
The Google Search Analytics Audit

Google Search Console's Search Analytics report (https://support.google.com/webmasters/answer/6155685?hl=en) provides powerful data for understanding which keywords and phrases drive traffic and visibility for an organization's individual web pages in organic search. By filtering the data to evaluate specific page performance, B2B marketers can get a better sense of the success (or opportunity) key assets have in organic search.

How do you do this? Here are a few recommended steps. First, I recommend reviewing all available data points (Clicks, Impressions, CTR, and Average Position). Second, select the "Pages" option, filtering by specific web address criteria (such as "/blog/" or "services.html", for example). From here, I tend to view the page (web address) report first, diving into individual pages to review the key phrases driving traffic and visibility.

This information is essential for giving B2B marketers direction when tackling individual keyword strategies and refining the objectives of content marketing assets. In the example below, I'm showing a screenshot of a filtered view of the same search analytics account, highlighting a sub-section of assets. Based on the summary data alone, we can infer that all of the web pages associated with this sub-section need additional SEO attention: the section's overall CTR and position in organic search are well below the average performance of the site's assets overall. We now have an area of focus for ongoing SEO program development.
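The same filtered view can also be pulled programmatically. Below is a minimal sketch using the Search Console API via google-api-python-client; the property URL, date range, token file, and "/blog/" path filter are placeholders, and it assumes you have already completed the OAuth setup for the account.

# Sketch: query Search Console's search analytics for one site section.
# Assumes google-api-python-client and google-auth are installed, and that
# token.json holds authorized user credentials (placeholder file name).
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

SITE = "https://www.example.com/"  # placeholder Search Console property

creds = Credentials.from_authorized_user_file(
    "token.json", scopes=["https://www.googleapis.com/auth/webmasters.readonly"]
)
service = build("searchconsole", "v1", credentials=creds)

response = service.searchanalytics().query(
    siteUrl=SITE,
    body={
        "startDate": "2024-01-01",
        "endDate": "2024-03-31",
        "dimensions": ["page", "query"],
        "dimensionFilterGroups": [{
            "filters": [{
                "dimension": "page",
                "operator": "contains",
                "expression": "/blog/",   # only look at the blog sub-section
            }]
        }],
        "rowLimit": 500,
    },
).execute()

# Print per-page, per-query performance so weak sections stand out.
for row in response.get("rows", []):
    page, query = row["keys"]
    print(page, query, row["clicks"], row["impressions"], row["ctr"], row["position"])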
material marketing initiatives?The Google Browse Console incoming link report offers free insight into what domains and web pages link to your organization's site and specific site properties
(web addresses). Looking for more SEO-centric link acquisition information?We utilize a mix of industrial SEO resources to further evaluate inbound link efficiency. All these resources have basic, free choices also, despite the fact that available information readily available is really limited.Moz Open Site Explorer-- I suggest reviewing
the "Just Discovered" link info in coordination with general incoming link info. SEMRush Backlink Audit Tool-- SEMRush goes a bit further in link reporting to
supply a link assessment on those that may be considered "poisonous" and negatively impact SEO performance in the long run.
Buzzsumo Inbound Link Report -- Buzzsumo's current collaboration with Majestic SEO gives B2B marketers an in-line view of incoming links per content marketing asset, similar to Google Search Console's report.

The trend in SEO auditing with these tools is that they examine link acquisition in general, and in coordination with content marketing assets, rather than as direct proof of "tactical efficiency." It's my opinion that most B2B marketers have little time to acquire links directly from third-party sites, at least on a consistent, ongoing basis. It is therefore vital that they can earn these links organically through valuable content development (onsite and offsite). All the resources above help audit the effectiveness of these efforts, as well as the link building successes of competitors, direct and indirect.

The Social Media Visibility Audit

While social media may offer more of an indirect benefit to SEO performance, a lack of social media visibility may mean the B2B organization isn't effectively connecting with its online audience; at least not with those who help share and distribute content through social channels. Social media auditing is the area where Buzzsumo shines brightest. While the free version of the platform provides limited access to an organization's social media presence, the business license goes far deeper, assessing content performance individually and site-wide. For B2B marketers on a more restrictive budget, SharedCount provides comparable social sharing metrics, sans Twitter due to their API change a while back. At the other end of the (pricing) spectrum, the SimilarWeb Social Traffic report offers visibility into the direction of social media activity and the effectiveness of content marketing performance. Don't forget to assess platform-specific analytics and Google Analytics social media referral reporting too. All of these assets can ultimately be crucial when benchmarking social media performance beyond an initial audit.

The Google Analytics Audit

Finally, B2B marketers should always carry out a quick audit and evaluation of their Google Analytics (or similar reporting tool) integration to ensure everything is tracking correctly and completely. Screaming Frog offers a simple approach (http://www.seerinteractive.com/blog/screaming-frog-to-verify-google-analytics/) for confirming that the Google Analytics tracking code (or almost any tracking code) is present on all relevant web pages; a small script sketch along the same lines follows at the end of this post. Are you tracking as many conversion points as possible? Reassess key landing pages to ensure conversion-oriented actions are measured in Google Analytics, either through goal tracking, event tracking, or both. What about your organic search performance in general? Again, Screaming Frog provides a method for combining URL data, Google Analytics traffic data, and Google Search Console data in one crawl report. This is a pretty powerful audit for establishing top-level direction in SEO activities moving forward. And last but not least, it's always recommended to review standard organic search referral reporting, across web pages and site sections, to identify opportunities and areas for improvement. While auditing Google Analytics or another relevant third-party measurement application may only indirectly benefit SEO program work in the short term, these resources are essential for demonstrating performance in the long run. Additionally, these kinds of audits may reveal immediate opportunities to either demonstrate success or identify "low-hanging fruit" to prioritize in tactical execution.

Final Thoughts

The common thread between these SEO audits is that they all provide direction for next steps in program development work, both strategically and specific to tactical execution. In terms of audit priority, my suggestion is to consider which areas of SEO (copy, links, keywords, etc.) your organization feels are most important to address first. It can be easy to take on all of these at once and end up overwhelmed with information. From there, it's best to determine what work can be done in-house and where more specialized program development has to be considered.

What kinds of SEO auditing have been crucial for your organization? I would love to read thoughts and perspectives via the comments below.
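As a rough illustration of the tracking-code check mentioned above, here is a minimal Python sketch. It is not the Screaming Frog workflow itself; the URL list and the UA-style property ID pattern are assumptions you would replace with your own site and analytics setup.

```python
# A minimal sketch: fetch a handful of URLs and flag any page whose HTML does
# not contain a Google Analytics property ID. The URLs and the "UA-..." pattern
# are placeholders; gtag.js setups would need a "G-..." pattern instead.
import re
import urllib.request

PAGES = [
    "https://www.example.com/",
    "https://www.example.com/about/",
    "https://www.example.com/contact/",
]
GA_PATTERN = re.compile(r"UA-\d{4,10}-\d{1,4}")

def has_tracking_code(url: str) -> bool:
    """Download the page and report whether a GA property ID appears in the HTML."""
    with urllib.request.urlopen(url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="ignore")
    return bool(GA_PATTERN.search(html))

for page in PAGES:
    status = "OK" if has_tracking_code(page) else "MISSING GA CODE"
    print(f"{status:>15}  {page}")
```

In practice a crawler such as Screaming Frog, with a custom search for the tracking snippet, scales much better than fetching pages one at a time.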
Source
http://customerthink.com/5-types-of-seo-audits-for-b2b-organizations/
0 notes
Text
SEO Website Backlinks With Google Advanced Search.
Basic Link Building - 9 Ways To Grow Your Rank. Link building can be stressful, but it's easier if you concentrate on producing link-worthy and shareable content. Resolving this issue can be as simple as a link to your homepage, or as comprehensive as including your core navigation bar. You don't need to hire someone when starting out, of course, since you should be able to run your social media platforms on your own without any problem, but in the long term you'll most likely turn to an agency too. Finally, thanks for sharing these valuable tips with us. Great content also attracts others whose websites have more ranking authority than yours, and that's where the magic happens. At Yoast, we use the link profiles offered by such tools a lot. Gaining attention from the press and getting published in industry publications can help you build your brand, increase your visible expertise, improve your credibility, and, of course, earn backlinks from authoritative sources. Citation Flow is Majestic's evaluation of link equity (essentially PageRank). When indexing and ranking a website, search engines will consider the quality of its backlinks. Once you have identified some websites, you then need to research them further to figure out which ones will help most in your link building efforts. Step 2: Click Search Traffic, then Links to Your Site, from the left menu. Monitoring your backlinks on a regular basis will help you stay on top of your site's traffic growth; a rough sketch of such a check follows below. In SEO terms, quality backlinks give Google a mechanism to trust a site and let it build authority and credibility. Start your SEO and link building by fixing errors on your site. Everyday carry is the coolest and best gear people enjoy; the can't-leave-home-without-it gear. There are a number of factors that go into how much value and impact a backlink may have for you. These are the ones that give you an opportunity to improve your own link popularity. If you browse Google's Webmaster Academy, you will not find references to SEO equity, link equity, or link juice anywhere. Newsjacking is when you capitalize on the popularity of a news story to amplify your own sales and marketing success. Getting backlinks - links from other sites to your company's site - is the most effective way to improve your rankings. The confusion for many people starts when they begin researching ways of getting their content seen by the masses and ranked in the search engines. In my experience, the IDEA is 10x more important than the design itself.
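Here is a minimal Python sketch of what the backlink monitoring mentioned above could look like: for each known referring page, check that it still links to your domain and whether the link is nofollow. It is only an illustration; the referring URLs and the example.com target domain are assumptions, some servers require a realistic User-Agent header, and a dedicated backlink tool will catch far more than a hand-maintained list.

```python
# Check whether known referring pages still link to your domain, and whether
# those links carry rel="nofollow". URLs and domain below are placeholders.
import urllib.request
from html.parser import HTMLParser

TARGET_DOMAIN = "example.com"                              # your site (assumption)
REFERRING_PAGES = [
    "https://blog.some-site.com/seo-tools-roundup/",       # hypothetical referring URL
]

class LinkFinder(HTMLParser):
    """Collects (href, rel) pairs from every <a> tag on the page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attr_map = dict(attrs)
            href = attr_map.get("href") or ""
            rel = attr_map.get("rel") or ""
            self.links.append((href, rel))

for page in REFERRING_PAGES:
    with urllib.request.urlopen(page, timeout=10) as response:
        html = response.read().decode("utf-8", errors="ignore")
    finder = LinkFinder()
    finder.feed(html)
    matches = [(href, rel) for href, rel in finder.links if TARGET_DOMAIN in href]
    if not matches:
        print(f"LOST      {page}")
    else:
        for href, rel in matches:
            flag = "nofollow" if "nofollow" in rel.lower() else "followed"
            print(f"{flag:9} {page} -> {href}")
```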
Links using relevant anchor text pass more keyword-focused value. Years ago, site creators could litter their pages with particular keywords in order to rank higher in the search engines. You also have to make sure that you're pursuing a topic that is currently trending. You guessed it - then you can sneak a link in there. You can generate a detailed link analysis for the following parameters: index/noindex, follow/nofollow, cached/not cached, date found, Moz Domain Authority, anchor text, Alexa rank, Ahrefs rank, social popularity and more. In this context, it is always an advantage if the backlink contains the relevant keywords of the linked page - this further strengthens its effect within the search engine. Backlinks heavily influence a web page's ability to rank for a target keyword. Therefore, don't worry about the look or the age of the potential link. They do that by combining and re-crawling link data from approximately 25 different link data sources. You want to be very careful in the way you contact them, because openly asking for a link (or even a review) is a violation of Google's Webmaster Guidelines. Consider a simple link equity example: a page with a value of 100 and two outgoing links divides that value between them, so each conveys a value of 50. Another page on the left has three outgoing links, each carrying one-third of that page's value of 9. One of those links goes to the top page on the right, which ends up with a total value of 53. The bottom-right page has no other backlinks, so its total value is 50 (a small worked version of this calculation follows below). Sharing Google Plus posts helps, but anyone with a spammy site can fill G+, FB and Twitter with noise. These tactics can work short-term, but websites using them are likely to be caught by search engines and have their rankings decimated. This step is usually due to human error, but the good news for you is that it can be a new source of backlinks. Stone Temple Consulting recently conducted a study of the correlation between external links and rankings using data from the Moz API. Thanks - just after asking the question I read the post you reference. If that doesn't work, report the site to Google's Webspam team. While this won't help you remove the link, it can help keep that page off the top Google results. Usually this might not be worth Udemy's time, but since there are so many incoming widget links, it would behoove them to nip this potential problem in the bud. I was just searching for a guide on link building, and this is something interesting. Definition of SEO: 'SEO', or Search Engine Optimization, is the act of showing Google and other major search engines that your website and online presence is the most popular and relevant answer to a search query, so it rises to page one, front and center. In-content backlinks (i.e. backlinks within a block of text) are the most likely to be editorial links, which are exactly the type of links that search engines value most, and if a link is included early on in a piece of content then it's natural to assume that the author of that content considers it to be an important link. I was trying hard to find the post that could tell me just what a backlink is and what its advantages are in today's web.
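To make the arithmetic in that link equity example concrete, here is a tiny Python sketch under the same simplifying assumption (a page's value is split evenly across its outgoing links). The page names are made up to mirror the left and right pages in the example; this is a toy illustration, not how Google actually computes PageRank.

```python
# Toy link-equity split: each page divides its value evenly across its outgoing
# links, and a target page's value is the sum of what it receives.
page_value = {"left_top": 100, "left_bottom": 9}

# Hypothetical link graph matching the example: the 100-point page links to two
# pages on the right; the 9-point page has three outgoing links, one of which
# also points to the top-right page.
outgoing_links = {
    "left_top": ["right_top", "right_bottom"],
    "left_bottom": ["right_top", "elsewhere_1", "elsewhere_2"],
}

received = {}
for source, targets in outgoing_links.items():
    share = page_value[source] / len(targets)   # value passed per link
    for target in targets:
        received[target] = received.get(target, 0) + share

print(received["right_top"])     # 50 + 3 = 53, as in the example
print(received["right_bottom"])  # 50
```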
I mean, there are companies out there that use guest posting to leverage the domain authority of large websites and quickly rank for competitive search terms that drive ultra-targeted referral traffic to their own sites. Today backlinks still have a place in the world of SEO; after all, as we have already said, links are the very fabric of the internet. The HubSpot and Forbes results are actually articles by Neil Patel; the others mention him in pieces about guest posting. Most of these directories use spun and duplicate content. Backlinks can be internal, linking your own pages to relevant content to increase the time users spend on the blog, or external, bringing a reader to you from a page on another site or blog. If you take an aggressive approach to building links, it's highly likely that your website will be penalized by one of Google's Penguin updates, which specifically target unnatural linking practices. This gives you a clear idea of what's working, what's not, and how you can adjust your approach in the future. In truth, I believe Google will eventually stop devaluing backlinks and instead start enforcing Penguin-like penalties or manual actions on websites again. Or you can just tease me. It's my job in this industry to show people how easy not cheating is, and it's even more my job to make sure people are at least educated enough to know what counts as cheating. See at a glance which link sources your competitors benefit from most; a rough sketch of that kind of referring-domain tally follows below.
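For that competitor comparison, a quick way to see which link sources dominate is to tally referring domains from any backlink export. A minimal Python sketch, assuming a plain text file of backlink URLs (the filename and format are made up for illustration, not a specific tool's export):

```python
# Tally a competitor's link sources by referring domain from a one-URL-per-line
# export (e.g. copied out of Majestic, Ahrefs, or Moz).
from collections import Counter
from urllib.parse import urlparse

with open("competitor_backlinks.txt", encoding="utf-8") as handle:
    urls = [line.strip() for line in handle if line.strip()]

# Group backlink URLs by the domain that hosts them.
domains = Counter(urlparse(url).netloc.lower() for url in urls)

print("Top referring domains:")
for domain, count in domains.most_common(10):
    print(f"{count:5}  {domain}")
```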
0 notes
Text
Google Shares Details About the Technology Behind Googlebot
Posted by goralewicz
Crawling and indexing has been a hot topic over the last few years. As soon as Google launched Google Panda, people rushed to their server logs and crawling stats and began fixing their index bloat. All those problems didn’t exist in the “SEO = backlinks” era from a few years ago. With this exponential growth of technical SEO, we need to get more and more technical. That being said, we still don’t know how exactly Google crawls our websites. Many SEOs still can’t tell the difference between crawling and indexing.
The biggest problem, though, is that when we want to troubleshoot indexing problems, the only tool in our arsenal is Google Search Console and the Fetch and Render tool. Once your website includes more than HTML and CSS, there's a lot of guesswork into how your content will be indexed by Google. This approach is risky, expensive, and can fail multiple times. Even when you discover the pieces of your website that weren’t indexed properly, it's extremely difficult to get to the bottom of the problem and find the fragments of code responsible for the indexing problems.
Fortunately, this is about to change. Recently, Ilya Grigorik from Google shared one of the most valuable insights into how crawlers work:
Interestingly, this tweet didn’t get nearly as much attention as I would expect.
So what does Ilya’s revelation in this tweet mean for SEOs?
Knowing that Chrome 41 is the technology behind the Web Rendering Service is a game-changer. Before this announcement, our only solution was to use Fetch and Render in Google Search Console to see our page rendered by the Website Rendering Service (WRS). This means we can troubleshoot technical problems that would otherwise have required experimenting and creating staging environments. Now, all you need to do is download and install Chrome 41 to see how your website loads in the browser. That’s it.
You can check the features and capabilities that Chrome 41 supports by visiting Caniuse.com or Chromestatus.com (Googlebot should support similar features). These two websites make a developer’s life much easier.
Even though we don’t know exactly which version Ilya had in mind, we can find Chrome’s version used by the WRS by looking at the server logs. It’s Chrome 41.0.2272.118.
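If you want to repeat that log check on your own server, a minimal Python sketch along these lines may help. The log path and format here are assumptions, and whether a Chrome version token shows up at all depends on which Google fetcher hit the page, so treat it as a starting point rather than the exact method used for this article.

```python
# Scan an access log for requests from Google's fetchers and pull out any
# "Chrome/<version>" token in the user-agent string. Path and combined-log
# format are placeholders; adjust the parsing to your server's log layout.
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"   # placeholder path
CHROME_TOKEN = re.compile(r"Chrome/([\d.]+)")

versions = Counter()
with open(LOG_PATH, encoding="utf-8", errors="ignore") as log:
    for line in log:
        # Only look at hits that identify themselves as Google fetchers.
        if "Google" not in line:
            continue
        match = CHROME_TOKEN.search(line)
        if match:
            versions[match.group(1)] += 1

for version, hits in versions.most_common():
    print(f"Chrome {version}: {hits} requests")
```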
It will be updated sometime in the future
Chrome 41 was created two years ago (in 2015), so it’s far removed from the current version of the browser. However, as Ilya Grigorik said, an update is coming:
I was lucky enough to get Ilya Grigorik to read this article before it was published, and he provided a ton of valuable feedback on this topic. He mentioned that they are hoping to have the WRS updated by 2018. Fingers crossed!
Google uses Chrome 41 for rendering. What does that mean?
We now have some interesting information about how Google renders websites. But what does that mean, practically, for site developers and their clients? Does this mean we can now ignore server-side rendering and deploy client-rendered, JavaScript-rich websites?
Not so fast. Here is what Ilya Grigorik had to say in response to this question:
We now know WRS' capabilities for rendering JavaScript and how to debug them. However, remember that not all crawlers support Javascript crawling, etc. Also, as of today, JavaScript crawling is only supported by Google and Ask (Ask is most likely powered by Google). Even if you don’t care about social media or search engines other than Google, one more thing to remember is that even with Chrome 41, not all JavaScript frameworks can be indexed by Google (read more about JavaScript frameworks crawling and indexing). This lets us troubleshoot and better diagnose problems.
Don’t get your hopes up
All that said, there are a few reasons to keep your excitement at bay.
Remember that version 41 of Chrome is over two years old. It may not work very well with modern JavaScript frameworks. To test it yourself, open http://ift.tt/2yqUsXV using Chrome 41, and then open it in any up-to-date browser you are using.
The page in Chrome 41 looks like this:
The content parsed by Polymer is invisible (meaning it wasn’t processed correctly). This is also a perfect example for troubleshooting potential indexing issues. The problem you're seeing above can be solved if diagnosed properly. Let me quote Ilya:
"If you look at the raised Javascript error under the hood, the test page is throwing an error due to unsupported (in M41) ES6 syntax. You can test this yourself in M41, or use the debug snippet we provided in the blog post to log the error into the DOM to see it."
I believe this is another powerful tool for web developers willing to make their JavaScript websites indexable. We will definitely expand our experiment and work with Ilya’s feedback.
The Fetch and Render tool is the Chrome v. 41 preview
There's another interesting thing about Chrome 41. Google Search Console's Fetch and Render tool is simply the Chrome 41 preview. The righthand-side view (“This is how a visitor to your website would have seen the page") is generated by the Google Search Console bot, which is... Chrome 41.0.2272.118 (see screenshot below).
Zoom in here
There's evidence that both Googlebot and Google Search Console Bot render pages using Chrome 41. Still, we don’t exactly know what the differences between them are. One noticeable difference is that the Google Search Console bot doesn’t respect the robots.txt file. There may be more, but for the time being, we're not able to point them out.
Chrome 41 vs Fetch as Google: A word of caution
Chrome 41 is a great tool for debugging Googlebot. However, sometimes (not often) there's a situation in which Chrome 41 renders a page properly, but the screenshots from Google Fetch and Render suggest that Google can’t handle the page. It could be caused by CSS animations and transitions, Googlebot timeouts, or the usage of features that Googlebot doesn’t support. Let me show you an example.
Chrome 41 preview:
Image blurred for privacy
The above page has quite a lot of content and images, but it looks completely different in Google Search Console.
Google Search Console preview for the same URL:
As you can see, Google Search Console’s preview of this URL is completely different than what you saw on the previous screenshot (Chrome 41). All the content is gone and all we can see is the search bar.
From what we noticed, Google Search Console renders CSS a little bit different than Chrome 41. This doesn’t happen often, but as with most tools, we need to double check whenever possible.
This leads us to a question...
What features are supported by Googlebot and WRS?
According to the Rendering on Google Search guide:
Googlebot doesn't support IndexedDB, WebSQL, and WebGL.
HTTP cookies and local storage, as well as session storage, are cleared between page loads.
All features requiring user permissions (like Notifications API, clipboard, push, device-info) are disabled.
Google can’t index 3D and VR content.
Googlebot only supports HTTP/1.1 crawling.
The last point is really interesting. Despite statements from Google over the last 2 years, Google still only crawls using HTTP/1.1.
No HTTP/2 support (still)
We've mostly been covering how Googlebot uses Chrome, but there's another recent discovery to keep in mind.
There is still no support for HTTP/2 for Googlebot.
Since it's now clear that Googlebot doesn’t support HTTP/2, this means that if your website supports HTTP/2, you can’t drop HTTP/1.1 optimization. Googlebot can crawl only using HTTP/1.1.
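If you want a quick reassurance that your HTTP/2-enabled site still serves HTTP/1.1 clients, a minimal sketch like the following can help; Python's http.client only speaks HTTP/1.1, so a successful response here means the older protocol path still works. The hostname is a placeholder, and this checks protocol support only, not how well that path is tuned.

```python
# Make a plain HTTP/1.1 request and report the status the server returns.
import http.client

HOST = "www.example.com"   # placeholder host
conn = http.client.HTTPSConnection(HOST, timeout=10)
conn.request("GET", "/", headers={"User-Agent": "http11-check/0.1"})
response = conn.getresponse()

print("reported HTTP version code:", response.version)  # 11 means HTTP/1.1
print("status:", response.status, response.reason)
conn.close()
```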
There were several announcements recently regarding Google’s HTTP/2 support. To read more about it, check out my HTTP/2 experiment here on the Moz Blog.
Via http://ift.tt/2yr27W1...
Googlebot’s future
Rumor has it that Chrome 59’s headless mode was created for Googlebot, or at least that it was discussed during the design process. It's hard to say if any of this chatter is true, but if it is, it means that to some extent, Googlebot will “see” the website in the same way as regular Internet users.
This would definitely make everything simpler for developers who wouldn’t have to worry about Googlebot’s ability to crawl even the most complex websites.
Chrome 41 vs. Googlebot’s crawling efficiency
Chrome 41 is a powerful tool for debugging JavaScript crawling and indexing. However, it's crucial not to jump on the hype train here and start launching websites that “pass the Chrome 41 test.”
Even if Googlebot can “see” our website, there are many other factors that will affect your site’s crawling efficiency. As an example, we already have proof showing that Googlebot can crawl and index JavaScript and many JavaScript frameworks. It doesn’t mean that JavaScript is great for SEO. I gathered significant evidence showing that JavaScript pages aren’t crawled even half as effectively as HTML-based pages.
In summary
Ilya Grigorik’s tweet sheds more light on how Google crawls pages and, thanks to that, we don’t have to build experiments for every feature we're testing — we can use Chrome 41 for debugging instead. This simple step will definitely save a lot of websites from indexing problems, like when Hulu.com’s JavaScript SEO backfired.
It's safe to assume that Chrome 41 will now be a part of every SEO’s toolset.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
http://ift.tt/2xI749n
0 notes
Text
2018-04-04 06 SEO now
SEO
Ahrefs Blog
How many websites are mining cryptocurrency? We analyzed 175M+ domains to find out.
9 Actionable Ways To Find Anyone’s Email Address [Updated for 2018]
White Hat SEO: How to Play by the Rules and Win
How to Find and Fix Keyword Cannibalization Issues (in Seconds)
How to Use Google Trends for Keyword Research: 7 Effective Ways
ClickZ
Real-time advisor brands: Mobile customers usher in a new age of AI assistance
Augmented reality, outer space and emerging technology: How USA TODAY is revolutionizing journalism
Helping you choose the right SEO tool
Google, Amazon and the relationship between paid search and ecommerce
Crypto marketing: What does the recent Google/Twitter Ad Ban actually mean
Local SEO guide
Hacked Site Manual Action Report from Google? Try Bing To Find It
Peace Out To You, Sugar Rae!
Who Is Next In Local Listings Management?
A NodeJS Script for Accessing the Google Search Console API
Why SEOs Will Always Have Jobs…
Moz
The Guide to Local Sponsorship Marketing - The 2018 Edition
How to Target Featured Snippet Opportunities - Whiteboard Friday
MozCon 2018: The Initial Agenda
Just How Much is Your Website Worth, Anyhow? An Easy Guide to Valuation
Presenting Your Findings: How to Create Relevant and Engaging SEO Reports - Next Level
Reddit SEO
Does anyone know anything about the reputation of a company called Main Street Host?
What besides backlinking can someone do without having access to a website's content?
redoing website, but want to outsource seo
Has anyone tried a YEXT paid subscription?
Why do I have a horizontal scroll bar in the SEO subreddit? :(
SEO Book Blog
Left is Right & Up is Down
Grist for the Machine
Virtual Real Estate
Rank Checker Update
DMOZ Shut Down
SEO by the Sea
Related Questions are Joined by ‘People Also Search For’ Refinements; Now Using a Question Graph
Google’s Mobile Location History
Does Google Use Latent Semantic Indexing?
Google Targeted Advertising, Part 1
Google Giving Less Weight to Reviews of Places You Stop Visiting?
Search Engine Journal
Google’s New Shopping Actions Program: What You Need to Know by @ebkendo
3 Best Practices for Dynamic Search Ads
SEO Team Structure: How to Fit SEO into Your Company by @jennyhalasz
Google Confirms that Structured Data Improves Targeting by @martinibuster
Preventing SEO Disasters: Practicing Safe SEO by @therealseoradar
Search Engine Land
SearchCap: Google’s AMP Project, John Harrison Google doodle & more
Learn the benefits of pay-per-call for affiliate marketing
Google’s AMP Project announces new consent component ahead of GDPR compliance deadline
How to use the Knowledge Graph for higher rankings
John Harrison Google doodle honors man who invented marine chronometer 283 years ago
Search Engine Roundtable
Daily Search Forum Recap: April 3, 2018
Google: Index Status & Index Coverage Reports Are Compiled Slightly Differently
Google: Structured Data Doesn't Give You A Ranking Boost But Can Help Rankings
Google Rickrolls SEOs With Recrawl Now Button
Ben Gomes To Take Over Search At Google, Giannandrea Steps Down
Search Engine Watch
Hyperlocal SEO: What is it, and how can you make sure you do it right?
AI and machine learning: What you do and don’t need to know for SEO
How to migrate your WordPress website domain name
A guide to the standard reports in Google Analytics: Audience reports
12 tips to improve your reputation strategy
Searchmetrics Blog
Unwrapping the Secrets of SEO: Google Mobile-First Rollout Creates Opportunity to Profit or Perish
Memo to Modern Marketer: Seven Tips to Successfully Tag-Teaming Your Search and Content Strategy
Universal Search 2018: 5 Facts for Online Marketers
Google Core Algorithm Update: The Phantom Comes Out of the Shadows
NCAA March Madness: Painting the Court With Numbers
Yoast
Yoast SEO 7.2: Fixes and improvements
Is Fact Check for Yoast SEO a joke?
Get our new free plugin: Fact Check for Yoast SEO 1.4 Beta
Ask Yoast: Bad slug for Arabic URLs
How to use Yoast SEO to write an awesome meta description
0 notes