#EmberJS Solutions
Top Benefits of Hiring Dedicated Ember.js Developers
Ember.js remains one of the most widely used frameworks for building complex web applications, prized for its well-organized architecture and reliable conventions. Before starting your next web project with Ember.js, it is worth weighing the following considerations. Hiring dedicated Ember.js developers can substantially improve your results. Below are the main advantages of working with dedicated Ember.js developers and how they can accelerate your project.
1. Expertise and Specialization
Dedicated Ember.js developers bring specialized skills and hands-on experience in using the framework to full effect. They are conversant with Ember.js conventions, coding standards, and effective programming practices. This knowledge allows them to design solutions that scale as needed, tune performance, and stay close to Ember.js principles, keeping your application solid and maintainable.
2. Focus and Commitment
Unlike contract or remote Ember.js developers who juggle several projects at once, dedicated Ember.js developers work exclusively on your project. That exclusive focus translates into steady progress, precise work, and early identification of potential issues. It also makes deadlines easier to meet, since your project is never sidelined by competing tasks.
3. Seamless Integration and Collaboration
Dedicated Ember.js developers merge easily with your team and project environment. They align with your company objectives, understand your project needs, and adapt to your ways of working and sharing information. This fosters collaboration and brings transparency to all project stakeholders and team members, resulting in a well-synchronized development process.
4. Long-term Support and Continuity
Establishing a long-term partnership with dedicated Ember.js developers guarantees your project's stability and continuous support. They get to know the architecture of your program inside and out, understand how it changes over time, and are well positioned to offer timely updates and preventive maintenance. This continuity reduces disruption, enhances stability, and enables quick reactions to new requirements or difficulties.
5. Cost Efficiency and Value
While the initial investment in dedicated developers may seem higher, it offers long-term cost efficiency and value. You eliminate recruitment costs, overhead expenses associated with onboarding, and the risks of turnover. Moreover, dedicated developers often streamline development processes, leading to faster time-to-market and improved return on investment (ROI) for your application.
How to Hire Dedicated Ember.js Developers
To ensure a successful hiring process for dedicated Ember.js developers, consider the following steps:
Define Your Requirements: Be specific about your project objectives, specifications, and the Ember.js development skills you need.
Search Strategically: Use dedicated job boards and professional networking sites to identify candidates with Ember.js experience.
Evaluate Skills and Experience: Review candidates’ portfolios, conduct technical assessments, and schedule interviews to assess their proficiency and compatibility with your project.
Cultural Fit: Determine a person's level of ability in communication and teamwork as well as their cultural fit for the team.
Trial Period: A trial period or short-term contract can be an effective way to assess a developer's effectiveness and mutual compatibility before committing to long-term cooperation.
Conclusion
Hiring dedicated Ember.js developers can be a great benefit when building high-quality, highly scalable web applications that align with business goals. They focus on the service they offer, are committed to your project's success, and blend into your team's structure to guarantee efficient development cycles and future support. Whether you are launching a new application or improving an existing one, dedicated Ember.js developers can help you move faster and deliver better results.
If you are ready to take your web development to the next level with dedicated Ember.js developers, start your search today and get the most out of this powerful framework.
Remember, the right Ember.js developers can translate your ideas into tangible outcomes and, with their experience in the field, deliver unique, sustainable web solutions that meet your demands.
EmberJs Web App Development Services
Pattem Digital offers top-notch Ember.js web app development services designed to create dynamic, interactive, and user-friendly online applications. Using the power of the Ember.js JavaScript framework, our talented team of engineers crafts scalable and dependable solutions tailored to your specific business needs, whether you need a straightforward one-page application or a complex web platform.
List of Top JavaScript Frameworks 2020 For Front End Development
JavaScript is defining the future of the tech world with its wide range of capable frameworks, which can accelerate application development in many ways. Choosing a JavaScript framework comes down to the company's goals, the project requirements, and how well a given framework suits different scenarios.
JavaScript is one of the most intriguing programming languages, not least for the fact that despite being named "Java"Script, it has no real association with Java. When JavaScript appeared in mid-1995 it was first called Mocha, then renamed LiveScript, and finally, after a license agreement between Netscape and Sun, JavaScript. In its initial form, nobody imagined it would become such a revolutionary language and one of the most important to learn. The world is moving fast with ever-changing technology, and programming languages are its pillars; JavaScript is surely defining the future with its compelling and competent frameworks.
What Are Frameworks?
In general, a framework is a prototype or conceptual structure intended to serve as a support or guide for building something that expands the structure into something useful. According to Wikipedia, "A software framework is an abstraction in which software providing generic functionality can be selectively changed by additional user-written code, thus providing application-specific software."
JavaScript web frameworks are cheat codes for rapid web application development. They serve as a skeletal frame for single-page application development, let developers worry less about code structure and maintenance, and free them to focus on building complex interface elements, expanding the possibilities of JS and plain HTML.
So, Which frameworks of JavaScript are most popular and why?
Below is the list of a few Javascript frameworks:
1. AngularJS- AngularJS is an open-source front-end framework maintained by Google. It is mainly used to address the problems commonly encountered when creating single-page applications, as it simplifies both development and testing by providing client-side model-view-controller (MVC) and model-view-viewmodel (MVVM) architectures. It is currently among the most used JavaScript frameworks for single-page applications and has one of the largest developer communities.
2. ReactJS- ReactJS is often mistaken for a JS framework, but it is really an open-source JS library, with huge names like Facebook and Instagram behind it. React was built by a software engineer at Facebook; in 2011 it was first deployed on Facebook's newsfeed, and later on Instagram in 2012. React sits in an ecosystem of complete frameworks, but it covers just the view: in the MVC (Model-View-Controller) pattern, React.js acts as the "V" and can be smoothly integrated with any architecture.
A fully functional dynamic application can't be built with React alone. On April 18, 2017, Facebook announced React Fiber, a new core algorithm of the React library for building user interfaces, which is said to be the foundation for future development and improvement of ReactJS.
3. EmberJS- A few years back, in 2015, EmberJS was considered the best framework, leaving React and AngularJS behind. Like AngularJS, it offers two-way data binding, keeping view and model in sync at all times. EmberJS is commonly used for complex, feature-rich web applications and websites; LinkedIn, Netflix, Chipotle, Blue Apron, and Nordstrom are among the top names that have adopted it.
What works in its favour is that it is easy to learn, with many online tutorials that make picking it up straightforward.
4. Vue.js- Vue.js is considered one of the best solutions for cross-platform development. Its design takes the best qualities of Angular, React, and Ember while avoiding their flaws: Vue.js offers two-way data binding (as in AngularJS), server-side rendering (as in Angular 2 and ReactJS), Vue CLI (a backbone tool for quick starts), and optional JSX support.
5. MeteorJS- MeteorJS is a free and open-source framework, which is well-equipped with tons of features for back-end development, front-end rendering, DB(database) management and business logic. Being a full-stack platform, it has the quality that its name suggests which is being fast. If you’re looking to rapidly develop smaller, reactive applications on the Node.js platform, Meteor is definitely an excellent choice.
Due to its modular structure, packages and libraries can be adopted at a high pace. In terms of performance, changes in the database are transmitted to the UI in the shortest time possible, and vice versa, with no evident time losses caused by different languages or server response time.
Choosing a JavaScript framework depends on the company's goals, the project requirements, and how certain frameworks fit certain scenarios. JavaScript is painting a whole new picture of the future of technology, as it supports rapid development and prototyping.
These frameworks and libraries have already reshaped the way JS collaborates with HTML and CSS to compose views, both in browsers and now even on native platforms.
We are one of the best EmberJS development companies, leveraging an agile methodology to deliver our solutions based on Ember.js. Our EmberJS development services provide highly functional, feature-packed Ember.js applications. We offer top-quality EmberJS outsourcing services irrespective of the size of the project.
Want your web applications to leave a lasting first impression? Leverage EmberJS services by Helios Solutions to keep pace with your customer expectations.
OnGraph provides Ember.js expert solutions that meet each client's unique business requirements. We are known in the industry for delivering best-in-class solutions, with a team of professional web and mobile app developers who have rich industry experience building applications on Ember.js 2.16 and 2.17.
What is UI & UX Design?

User Interface Development is defined as the development of websites, web applications, mobile applications, and software. The User Interface plays a key role in the software development life cycle [SDLC]. Most people assume UI development is all about creating websites and writing HTML, CSS, and JavaScript, but user interface goes far beyond these technical terms. The goal of the user interface is to make the user’s interaction as simple and efficient as possible, in terms of accomplishing user goals.
Think about it this way: The user experiences only front end interactions, such as the look and feel of the website/application. More often than not, they don’t concern themselves with the back end – like app design, coding elements, or methodologies employed in content layout. What’s more, users need to feel engaged and at ease when they visit your website. That’s where UI engineers come into the picture – to fulfill this task.
Cultivating a User Interface can be divided into two phases in website/application/software development:
Research + Design
Development
Research and Design:
Research and analysis are all about interviewing users & project stakeholders and gathering their input to create a requirements document that includes personas, user scenarios, user behavior, and user experience evaluation metrics. During this phase, it is also important to understand the target audience so as to better cultivate a user experience design.
Business analysts and a user experience team usually lead the research phase. Both teams collect all information and inputs from users and project stakeholders in order to discuss technical terms with developers and project managers. Lastly, they prepare final documentation.
With the help of documentation, UX teams start the design process. They first create the wireframes to bring a rough idea to the project stakeholders and users.
Wireframes are presented as a comprehensive screen layout consisting of black and white sketches of every screen in the application. At this point, the visual and graphic design processes dictating the visual appeal have not yet begun.
Wireframe Example:
Next, developers must focus on creating prototypes that will simulate the real application. A prototype can contain one or more features, but it actually does nothing. It merely simulates the behavior of a real application, and users can see color combinations and minimal functionality in real-time. Wireframes/Sketches and Prototypes are done by UX designers.
Tools to create Wireframes and Prototypes
Balsamiq Mockups
Axure
Gliffy
iPhone mockup
InDesign
Photoshop
Fireworks
Dreamweaver
UX Designer Role and Responsibilities:
Strong conceptualization ability, strong visual communication ability, drawing skills, and sketchbook technique.
Strong working knowledge of Photoshop, Illustrator, InDesign, Fireworks and associated design tools.
Strong working knowledge of HTML, CSS, JavaScript/JQuery.
Experience with user interface design patterns and standard UCD methodologies.
Excellent verbal and written communication skills, especially the ability to clearly articulate design decisions with stakeholders and development teams.
Understanding of common software development practices.
Solid understanding of user-centered design principles, careful attention to detail, and ability to grasp complex, nuanced product requirements.
Collaborating on user experience planning and researching interaction design trends.
Researching technology trends.
Note: Responsibilities would be based on company and project requirements.
UI Development
UI development sits on the middle ground, combining design sensibility with technical skill. UI developers are skilled at achieving both a smooth appearance and proper functionality across browsers and devices. They have the production skills to communicate with back-end developers, collect data from the server, and display it to the user, and they are fully responsible for client-side/front-end logic, including web design and functionality.
UI Developer Role and Responsibilities:
Responsible for building Web Applications using the Single Page Application (SPA) paradigm.
Develop software solutions using industry best practices and in the area of security and performance in a web and SOA architecture environment.
Effectively develop in a clean, well structured, easily maintainable format.
Participate in the full SDLC with Requirements, Solution Design, Development, QA Implementation, and product support using Scrum and other agile methodologies.
World-class HTML5/CSS3 and especially JavaScript/jQuery skills, plus good knowledge of other major JavaScript libraries and frameworks.
Skilled in using a CSS preprocessor to speed up development (LESS, SCSS).
Detailed knowledge of cross-browser UI issues and hacks.
Social technology API experience (Primarily Facebook, and also Twitter)
Experience creating, as well as consuming, JSON-based APIs.
Understand executing accessibility and progressive enhancement presentation.
Ensure design consistency with the client’s development standards and guidelines.
Note: Responsibilities would be based on company and project requirements.
A few examples of UI Developer technologies:
HTML5 and CSS3
Bootstrap, Boilerplate
JavaScript (OOP)
jQuery and jQuery Mobile
JSON
Ajax
BackboneJS
Underscore
AngularJS
EmberJS
KnockoutJS
RequireJS
CanJS
ExtJS
Dojo
YUI
Grunt
Bower
Yeoman
MongoDB
NodeJS
MySQL
Are you looking to #Hire #Remote #Developers from #India?
Surekha Technologies Pvt Ltd is an #offshore #development and #consulting company from #Ahmedabad, India.
We can help you fill the gap. We have a team of talented IT specialists, with developer profiles ranging from 2 to 12 years of #experience. If you need Full-Stack, Front-End, Back-End or Mobile App Developers, contact us anytime.
If you are looking to build #Longterm #Relationship with us, contact us now or send details as per your requirement.
📧 [email protected] 📞 +1 408 914 2737 | +91 79 40050848 🌐 https://www.surekhatech.com
Our Services : ➡️ Liferay Development ➡️ Odoo Development ➡️ ERP Solution ➡️ Mobile App Development (Native and Hybrid App) ➡️ eCommerce Development ➡️ Java Development ➡️ JavaScript App Development (ReactJs, AngularJs, and EmberJs) ➡️ QA Services - Manual & Automated (Mobile and Desktop)
Get To The Importance Of Angular JS Web Application Services Offered By A Good Company

What do you think AngularJS really is? It is a framework famous for its flexibility and scalability, with one of the simplest MVC implementations, and one of the best open-source JavaScript frameworks, provided by Google. AngularJS is used for developing and building web-based and mobile-based applications for all types of platforms, enabling businesses to meet their requirements through the applications they use.

A good AngularJS web application development company delivers high-quality functional features that are sustainable and secure. There are many reasons to choose AngularJS for application development, and a team of skilled developers can use their knowledge to fulfill any type of business need by creating unique applications that exceed expectations. Some of the features of AngularJS are filters, form validation, controllers and routers, a lightweight code base, two-way data binding, HTML templates, and testing.

When customers look for a company that can provide a top-notch AngularJS web application solution, Moon Technolabs wins the bet. The company has never failed to provide its customers with solutions and services that are a perfect match for their businesses, and its developers have worked hard to reach the position and respect they currently hold. They have provided AngularJS services in the areas of App Design & Development, One Page Application Development, QA & Testing, App Security & Performance, and many others. The company understands the latest technologies and how important they are for customers looking to grow their client base.
The company's teams of experts carry a good amount of knowledge of JS technologies and work with MeteorJS, KnockoutJS, EmberJS, MochaJS, and more. Over the past 10 years, the company has completed over 600 projects, gained 300 satisfied customers, and built an elite team of over 70 staff members, each assigned to a different department. Its services are popular in many countries, and it is currently ranked No. 1 as a trusted, reliable provider of AngularJS and other services.
JavaScript & SEO: Making Your Bot Experience As Good As Your User Experience
Posted by alexis-sanders
Understanding JavaScript and its potential impact on search performance is a core skillset of the modern SEO professional. If search engines can’t crawl a site or can’t parse and understand the content, nothing is going to get indexed and the site is not going to rank.
The most important questions for an SEO relating to JavaScript: Can search engines see the content and grasp the website experience? If not, what solutions can be leveraged to fix this?
Fundamentals
What is JavaScript?
When creating a modern web page, there are three major components:
HTML – Hypertext Markup Language serves as the backbone, or organizer of content, on a site. It is the structure of the website (e.g. headings, paragraphs, list elements, etc.) and defines static content.
CSS – Cascading Style Sheets are the design, glitz, glam, and style added to a website. It makes up the presentation layer of the page.
JavaScript – JavaScript is the interactivity and a core component of the dynamic web.
Learn more about webpage development and how to code basic JavaScript.
JavaScript is either placed in the HTML document within <script> tags (i.e., it is embedded in the HTML) or linked/referenced. There are currently a plethora of JavaScript libraries and frameworks, including jQuery, AngularJS, ReactJS, EmberJS, etc.
What is AJAX?
AJAX, or Asynchronous JavaScript and XML, is a set of web development techniques combining JavaScript and XML that allows web applications to communicate with a server in the background without interfering with the current page. Asynchronous means that other functions or lines of code can run while the async script is running. XML used to be the primary language to pass data; however, the term AJAX is used for all types of data transfers (including JSON; I guess "AJAJ" doesn’t sound as clean as "AJAX" [pun intended]).
A common use of AJAX is to update the content or layout of a webpage without initiating a full page refresh. Normally, when a page loads, all the assets on the page must be requested and fetched from the server and then rendered on the page. However, with AJAX, only the assets that differ between pages need to be loaded, which improves the user experience as they do not have to refresh the entire page.
One can think of AJAX as mini server calls. A good example of AJAX in action is Google Maps. The page updates without a full page reload (i.e., mini server calls are being used to load content as the user navigates).
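The partial-update idea above can be sketched in a few lines. This is a minimal illustration, not a real API: the fetch call is what a browser would run, while `applyUpdate` is a pure function standing in for the "merge the response into the page" step, so it can be reasoned about (and tested) without a server. All names here are illustrative.

```javascript
// Sketch of an AJAX-style partial update: only the changed fields travel
// over the wire, and they are merged into the existing page state
// without a full reload.
function applyUpdate(state, responseJson) {
  // Fields present in the response change; everything else is kept.
  return Object.assign({}, state, responseJson);
}

// In a browser, the surrounding code might look like (illustrative):
// fetch('/api/cart')
//   .then(res => res.json())
//   .then(json => { state = applyUpdate(state, json); render(state); });

const state = { user: 'alex', cartCount: 1, theme: 'dark' };
const serverResponse = { cartCount: 3 }; // only the changed data is sent
const next = applyUpdate(state, serverResponse);
console.log(next.cartCount); // 3
console.log(next.theme);     // "dark" — untouched, no full refresh needed
```

The design point is the same one AJAX makes: the server response carries only the delta, and the client reconciles it with what is already rendered.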
What is the Document Object Model (DOM)?
As an SEO professional, you need to understand what the DOM is, because it’s what Google is using to analyze and understand webpages.
The DOM is what you see when you “Inspect Element” in a browser. Simply put, you can think of the DOM as the steps the browser takes after receiving the HTML document to render the page.
The first thing the browser receives is the HTML document. After that, it will start parsing the content within this document and fetch additional resources, such as images, CSS, and JavaScript files.
The DOM is what forms from this parsing of information and resources. One can think of it as a structured, organized version of the webpage’s code.
Nowadays the DOM is often very different from the initial HTML document, due to what’s collectively called dynamic HTML. Dynamic HTML is the ability for a page to change its content depending on user input, environmental conditions (e.g. time of day), and other variables, leveraging HTML, CSS, and JavaScript.
Simple example with a <title> tag that is populated through JavaScript:
(Figure: the raw HTML source ships an empty <title> element, while the DOM shows it populated by the script.)
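A runnable version of the title example can be sketched as follows. Node has no DOM, so a plain object stands in for `document` here; in a browser you would assign to the real `document.title`. The helper name and the site name are made up for illustration.

```javascript
// The HTML source ships an empty <title></title>; a script fills it in,
// which is why the DOM ends up differing from the raw HTML document.
const document = { title: '' }; // stand-in for the browser's document

function setTitle(doc, pageName) {
  doc.title = pageName + ' | Example Site'; // what the DOM ends up holding
  return doc.title;
}

console.log(setTitle(document, 'Checkout')); // "Checkout | Example Site"
```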
What is headless browsing?
Headless browsing is simply the action of fetching webpages without the user interface. It is important to understand because Google, and now Baidu, leverage headless browsing to gain a better understanding of the user’s experience and the content of webpages.
PhantomJS and Zombie.js are scripted headless browsers, typically used for automating web interaction for testing purposes, and rendering static HTML snapshots for initial requests (pre-rendering).
Why can JavaScript be challenging for SEO? (and how to fix issues)
There are three (3) primary reasons to be concerned about JavaScript on your site:
Crawlability: Bots’ ability to crawl your site.
Obtainability: Bots’ ability to access information and parse your content.
Perceived site latency: AKA the Critical Rendering Path.
Crawlability
Are bots able to find URLs and understand your site’s architecture? There are two important elements here:
Blocking search engines from your JavaScript (even accidentally).
Proper internal linking, not leveraging JavaScript events as a replacement for HTML tags.
Why is blocking JavaScript such a big deal?
If search engines are blocked from crawling JavaScript, they will not be receiving your site’s full experience. This means search engines are not seeing what the end user is seeing. This can reduce your site’s appeal to search engines and could eventually be considered cloaking (if the intent is indeed malicious).
Fetch as Google and TechnicalSEO.com’s robots.txt and Fetch and Render testing tools can help to identify resources that Googlebot is blocked from.
The easiest way to solve this problem is through providing search engines access to the resources they need to understand your user experience.
!!! Important note: Work with your development team to determine which files should and should not be accessible to search engines.
Internal linking
Internal linking should be implemented with regular anchor tags within the HTML or the DOM (using an HTML tag) versus leveraging JavaScript functions to allow the user to traverse the site.
Essentially: Don’t use JavaScript’s onclick events as a replacement for internal linking. While end URLs might be found and crawled (through strings in JavaScript code or XML sitemaps), they won’t be associated with the global navigation of the site.
Internal linking is a strong signal to search engines regarding the site’s architecture and importance of pages. In fact, internal links are so strong that they can (in certain situations) override “SEO hints” such as canonical tags.
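The onclick-versus-anchor point can be made concrete with a toy crawler. The regex below is deliberately naive (real bots parse the DOM properly), but it shows the asymmetry: an `href` attribute exposes a URL a bot can follow, while a JavaScript onclick handler exposes nothing at the markup level.

```javascript
// Toy sketch of link discovery: collect href attributes from markup,
// the way a very simple bot might.
function extractHrefs(html) {
  const hrefs = [];
  const re = /<a\s[^>]*href="([^"]+)"/g;
  let m;
  while ((m = re.exec(html)) !== null) hrefs.push(m[1]);
  return hrefs;
}

const good = '<a href="/pricing">Pricing</a>';
const bad = '<span onclick="window.location=\'/pricing\'">Pricing</span>';

console.log(extractHrefs(good)); // [ '/pricing' ]
console.log(extractHrefs(bad));  // [] — nothing for the bot to follow
```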
URL structure
Historically, JavaScript-based websites (aka “AJAX sites”) were using fragment identifiers (#) within URLs.
Not recommended:
The Lone Hash (#) – The lone pound symbol is not crawlable. It is used to identify anchor links (aka jump links), which let the user jump to a piece of content on a page. Anything after the lone hash portion of the URL is never sent to the server; the browser simply scrolls to the first element with a matching ID (or the first <a> element with a matching name attribute). Google recommends avoiding the use of "#" in URLs.
Hashbang (#!) (and escaped_fragment URLs) – Hashbang URLs were a hack to support crawlers; Google now wants to avoid them, and only Bing still supports them. Many a moon ago, Google and Bing developed a complicated AJAX solution whereby a pretty (#!) URL serving the UX co-existed with an equivalent escaped_fragment HTML-based experience for bots. Google has since backtracked on this recommendation, preferring to receive the exact user experience. With escaped fragments, there are two experiences:
Original Experience (aka Pretty URL): This URL must either have a #! (hashbang) within the URL to indicate that there is an escaped fragment or a meta element indicating that an escaped fragment exists (<meta name="fragment" content="!">).
Escaped Fragment (aka Ugly URL, HTML snapshot): This URL replaces the hashbang (#!) with "_escaped_fragment_" and serves the HTML snapshot. It is called the ugly URL because it's long and looks like (and for all intents and purposes is) a hack.
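The fragment problem described above is easy to verify with the standard URL class (built into browsers and Node). Everything after a lone "#" lives only on the client, so a hashbang route never reaches the server; the example URLs are made up.

```javascript
// A lone hash: the fragment is client-side only.
const url = new URL('https://example.com/products#reviews');
console.log(url.pathname); // "/products" — what the server sees
console.log(url.hash);     // "#reviews"  — handled entirely in the browser

// A hashbang URL has the same problem: the whole route is in the fragment.
const ajax = new URL('https://example.com/#!/products/42');
console.log(ajax.pathname); // "/" — the server sees only the root
console.log(ajax.hash);     // "#!/products/42"
```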
Recommended:
pushState History API – PushState is navigation-based and part of the History API (think: your web browsing history). Essentially, pushState updates the URL in the address bar and only what needs to change on the page is updated. It allows JS sites to leverage “clean” URLs. PushState is currently supported by Google, when supporting browser navigation for client-side or hybrid rendering.
A good use of pushState is for infinite scroll (i.e., as the user hits new parts of the page the URL will update). Ideally, if the user refreshes the page, the experience will land them in the exact same spot. However, they do not need to refresh the page, as the content updates as they scroll down, while the URL is updated in the address bar.
Example: A good example of a search engine-friendly infinite scroll implementation, created by Google’s John Mueller (go figure), can be found here. He technically leverages the replaceState(), which doesn’t include the same back button functionality as pushState.
Read more: Mozilla PushState History API Documents
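The infinite-scroll pattern above can be sketched like this. `history.pushState` is browser-only, so it appears in a comment; the testable part is the URL builder, which keeps the route server-resolvable (a query parameter, not a fragment). Function and parameter names are illustrative.

```javascript
// Build a clean, server-resolvable URL for a given scroll "page".
function buildPageUrl(basePath, page) {
  return page <= 1 ? basePath : basePath + '?page=' + page;
}

// In the browser, as the user scrolls past each page boundary:
// window.history.pushState({ page }, '', buildPageUrl('/articles', page));
// On refresh, the server can render /articles?page=3 directly, landing
// the user in roughly the same spot — no fragment hacks involved.

console.log(buildPageUrl('/articles', 1)); // "/articles"
console.log(buildPageUrl('/articles', 3)); // "/articles?page=3"
```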
Obtainability
Search engines have been shown to employ headless browsing to render the DOM to gain a better understanding of the user’s experience and the content on page. That is to say, Google can process some JavaScript and uses the DOM (instead of the HTML document).
At the same time, there are situations where search engines struggle to comprehend JavaScript. Nobody wants a Hulu situation to happen to their site or a client’s site. It is crucial to understand how bots are interacting with your onsite content. When you aren’t sure, test.
Assuming we’re talking about a search engine bot that executes JavaScript, there are a few important elements for search engines to be able to obtain content:
If the user must interact for something to fire, search engines probably aren’t seeing it.
Google is a lazy user. It doesn’t click, it doesn’t scroll, and it doesn’t log in. If the full UX demands action from the user, special precautions should be taken to ensure that bots are receiving an equivalent experience.
If the JavaScript occurs after the JavaScript load event fires plus ~5-seconds*, search engines may not be seeing it.
*John Mueller mentioned that there is no specific timeout value; however, sites should aim to load within five seconds.
*Screaming Frog tests show a correlation to five seconds to render content.
*The load event plus five seconds is what Google’s PageSpeed Insights, Mobile Friendliness Tool, and Fetch as Google use; check out Max Prin’s test timer.
If there are errors within the JavaScript, both browsers and search engines may be unable to execute the code fully and can miss sections of the page.
How to make sure Google and other search engines can get your content
1. TEST
The most popular solution for handling JavaScript is probably not to resolve anything (grab a coffee and let Google work its algorithmic brilliance). Providing Google with the same experience as searchers is Google’s preferred scenario.
Google first announced being able to “better understand the web (i.e., JavaScript)” in May 2014. Industry experts suggested that Google could crawl JavaScript way before this announcement. The iPullRank team offered two great pieces on this in 2011: Googlebot is Chrome and How smart are Googlebots? (thank you, Josh and Mike). Adam Audette’s Google can crawl JavaScript and leverages the DOM confirmed this in 2015. Therefore, if you can see your content in the DOM, chances are it’s being parsed by Google.
Recently, Bartosz Goralewicz performed a cool experiment testing a combination of various JavaScript libraries and frameworks to determine how Google interacts with the pages (e.g., are they indexing URLs/content? How does GSC interact? Etc.). It ultimately showed that Google is able to interact with many forms of JavaScript and highlighted certain frameworks as perhaps more challenging. John Mueller even started a JavaScript search group (from what I’ve read, it’s fairly therapeutic).
All of these studies are amazing and help SEOs understand when to be concerned and take a proactive role. However, before you determine that sitting back is the right solution for your site, I recommend being actively cautious by experimenting with small sections. Think: Jim Collins’s “bullets, then cannonballs” philosophy from his book Great by Choice:
“A bullet is an empirical test aimed at learning what works and meets three criteria: a bullet must be low-cost, low-risk, and low-distraction… 10Xers use bullets to empirically validate what will actually work. Based on that empirical validation, they then concentrate their resources to fire a cannonball, enabling large returns from concentrated bets.”
Consider testing and reviewing through the following:
Confirm that your content is appearing within the DOM.
Test a subset of pages to see if Google can index content.
Manually check quotes from your content.
Fetch as Google and see if content appears.
Fetch as Google supposedly occurs around the load event or before the timeout. It’s a great way to check whether Google can see your content and whether you’re blocking JavaScript in your robots.txt. Although Fetch as Google is not foolproof, it’s a good starting point.
Note: If you aren’t verified in GSC, try TechnicalSEO.com’s Fetch and Render As Any Bot tool.
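The first item on the checklist — confirming your content appears in the raw source versus only in the DOM — can be scripted. A minimal sketch (the helper name is mine; checking the rendered DOM side additionally requires a headless browser):

```javascript
// Sketch: does a key phrase from the page exist in the raw (pre-JavaScript)
// HTML source? If not, you're relying on the crawler to execute your JS
// before it can see that content. This only covers the raw-source half of
// the check; comparing against the rendered DOM requires a headless browser.
function contentInSource(rawHtml, phrase) {
  // Normalize whitespace and case so markup formatting doesn't cause
  // false negatives.
  const normalize = (s) => s.replace(/\s+/g, ' ').trim().toLowerCase();
  return normalize(rawHtml).includes(normalize(phrase));
}
```

If `contentInSource` returns false while the phrase is clearly visible in “Inspect Element,” the content exists only in the DOM, and testing how bots handle it becomes essential.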
After you’ve tested all this, what if something’s not working and search engines and bots are struggling to index and obtain your content? Perhaps you’re concerned about alternative search engines (DuckDuckGo, Facebook, LinkedIn, etc.), or maybe you’re leveraging meta information that needs to be parsed by other bots, such as Twitter summary cards or Facebook Open Graph tags. If any of this is identified in testing or presents itself as a concern, an HTML snapshot may be the only option.
2. HTML SNAPSHOTS
What are HTML snapshots?
HTML snapshots are a fully rendered page (as one might see in the DOM) that can be returned to search engine bots (think: a static HTML version of the DOM).
Google introduced HTML snapshots in 2009, deprecated (but still supported) them in 2015, and awkwardly mentioned them as an element to “avoid” in late 2016. HTML snapshots are a contentious topic with Google. However, they’re important to understand, because in certain situations they’re necessary.
If search engines (or sites like Facebook) cannot grasp your JavaScript, it’s better to return an HTML snapshot than not to have your content indexed and understood at all. Ideally, your site would leverage some form of user-agent detection on the server side and return the HTML snapshot to the bot.
At the same time, one must recognize that Google wants the same experience as the user (i.e., only provide Google with an HTML snapshot if the tests are dire and the JavaScript search group cannot provide support for your situation).
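Server-side user-agent detection can be sketched roughly as follows. The crawler list and the Express-style handler are illustrative assumptions, not a definitive implementation:

```javascript
// Sketch: serve a pre-rendered HTML snapshot to known bots, and the normal
// JavaScript-driven experience to everyone else. The crawler pattern below
// is illustrative only and would need maintaining in practice.
const BOT_PATTERN = /googlebot|bingbot|facebookexternalhit|twitterbot|linkedinbot/i;

function isBot(userAgent) {
  return BOT_PATTERN.test(userAgent || '');
}

// Hypothetical Express-style route:
// app.get('*', (req, res) => {
//   if (isBot(req.headers['user-agent'])) {
//     res.send(renderSnapshot(req.path)); // static HTML version of the DOM
//   } else {
//     res.sendFile('index.html');         // normal JS application shell
//   }
// });
```

Per Google’s cloaking guidance, the snapshot returned to bots must contain the same content users see in the browser.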
Considerations
When considering HTML snapshots, you must consider that Google has deprecated this AJAX recommendation. Although Google technically still supports it, Google recommends avoiding it. Yes, Google changed its mind and now wants to receive the same experience as the user. This direction makes sense, as it allows the bot to receive an experience more true to the user experience.
A second consideration relates to the risk of cloaking. If the HTML snapshots are found not to represent the experience on the page, it’s considered a cloaking risk. Straight from the source:
“The HTML snapshot must contain the same content as the end user would see in a browser. If this is not the case, it may be considered cloaking.” – Google Developer AJAX Crawling FAQs
Benefits
Despite the considerations, HTML snapshots have powerful advantages:
Knowledge that search engines and crawlers will be able to understand the experience.
Certain types of JavaScript may be harder for Google to grasp (cough... Angular 2 …cough).
Other search engines and crawlers (think: Bing, Facebook) will be able to understand the experience.
Bing, among other search engines, has not stated that it can crawl and index JavaScript. HTML snapshots may be the only solution for a JavaScript-heavy site. As always, test to make sure that this is the case before diving in.
Site latency
When browsers receive an HTML document and create the DOM (although there is some level of pre-scanning), most resources are loaded as they appear within the HTML document. This means that if you have a huge file toward the top of your HTML document, a browser will load that immense file first.
The concept of Google’s critical rendering path is to load what the user needs as soon as possible, which can be translated to → "get everything above-the-fold in front of the user, ASAP."
Critical Rendering Path - Optimized Rendering Loads Progressively ASAP:
However, if you have unnecessary resources or JavaScript files clogging up the page’s ability to load, you get “render-blocking JavaScript.” Meaning: your JavaScript is blocking the page from appearing to load faster (also called perceived latency).
Render-blocking JavaScript – Solutions
If you analyze your page speed results (through tools like Page Speed Insights Tool, WebPageTest.org, CatchPoint, etc.) and determine that there is a render-blocking JavaScript issue, here are three potential solutions:
Inline: Add the JavaScript directly in the HTML document.
Async: Make the JavaScript asynchronous (i.e., add the “async” attribute to the script tag).
Defer: Defer the JavaScript by placing it lower within the HTML or adding the “defer” attribute to the script tag.
!!! Important note: It's important to understand that scripts must be arranged in order of precedence. Scripts that are used to load the above-the-fold content must be prioritized and should not be deferred. Also, any script that references another file can only be used after the referenced file has loaded. Make sure to work closely with your development team to confirm that there are no interruptions to the user’s experience.
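The three options map onto script markup like the following sketch (the file names are hypothetical; the `async` and `defer` attributes are standard HTML):

```html
<!-- Inline: small, critical JavaScript embedded directly in the document -->
<script>
  /* critical above-the-fold logic here */
</script>

<!-- Async: fetched in parallel, executed as soon as it arrives
     (execution order across scripts is NOT guaranteed) -->
<script async src="/js/analytics.js"></script>

<!-- Defer: fetched in parallel, executed in document order after
     the HTML has been parsed -->
<script defer src="/js/non-critical.js"></script>
```

Because deferred scripts preserve their relative order while async scripts do not, scripts that depend on one another generally belong in the defer group.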
Read more: Google Developer’s Speed Documentation
TL;DR - Moral of the story
Crawlers and search engines will do their best to crawl, execute, and interpret your JavaScript, but it is not guaranteed. Make sure your content is crawlable, obtainable, and isn’t developing site latency obstructions. The key = every situation demands testing. Based on the results, evaluate potential solutions.
Thanks: Thank you Max Prin (@maxxeight) for reviewing this content piece and sharing your knowledge, insight, and wisdom. It wouldn’t be the same without you.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
via Blogger http://ift.tt/2sKI4iS
Link
Ember JS development is one of the most attractive and cost-effective services offered by OnGraph. Being one of the best Ember JS development companies, we create simple yet sophisticated Ember JS applications with extensive functionalities to deliver an enhanced user experience. If you’re looking for top-notch Ember JS development services, look no further than OnGraph.
#EmberJSDevelopmentCompanies #Emberjsoutsourcingservices #Emberjsoutsourcingcompany #EmberjsDevelopmentServices
Link
OnGraph Technologies is a highly recommended EmberJS development company in India, the UK, and the USA. Every application we build aims at enhancing the overall user experience. We have built numerous applications and are known in the industry for interactive, responsive, and cost-effective solutions. OnGraph has over 170 senior Ember.js developers available for hire. Chat with us now to get started.
Text
JavaScript & SEO: Making Your Bot Experience As Good As Your User Experience
Posted by alexis-sanders
Understanding JavaScript and its potential impact on search performance is a core skillset of the modern SEO professional. If search engines can’t crawl a site or can’t parse and understand the content, nothing is going to get indexed and the site is not going to rank.
The most important questions for an SEO relating to JavaScript: Can search engines see the content and grasp the website experience? If not, what solutions can be leveraged to fix this?
Fundamentals
What is JavaScript?
When creating a modern web page, there are three major components:
HTML – Hypertext Markup Language serves as the backbone, or organizer of content, on a site. It is the structure of the website (e.g. headings, paragraphs, list elements, etc.) and defining static content.
CSS – Cascading Style Sheets are the design, glitz, glam, and style added to a website. It makes up the presentation layer of the page.
JavaScript – JavaScript is the interactivity and a core component of the dynamic web.
Learn more about webpage development and how to code basic JavaScript.
Image sources: 1, 2, 3
JavaScript is either placed in the HTML document within <script> tags (i.e., it is embedded in the HTML) or linked/referenced. There are currently a plethora of JavaScript libraries and frameworks, including jQuery, AngularJS, ReactJS, EmberJS, etc.
JavaScript libraries and frameworks:
What is AJAX?
AJAX, or Asynchronous JavaScript and XML, is a set of web development techniques combining JavaScript and XML that allows web applications to communicate with a server in the background without interfering with the current page. Asynchronous means that other functions or lines of code can run while the async script is running. XML used to be the primary language to pass data; however, the term AJAX is used for all types of data transfers (including JSON; I guess "AJAJ" doesn’t sound as clean as "AJAX" [pun intended]).
A common use of AJAX is to update the content or layout of a webpage without initiating a full page refresh. Normally, when a page loads, all the assets on the page must be requested and fetched from the server and then rendered on the page. However, with AJAX, only the assets that differ between pages need to be loaded, which improves the user experience as they do not have to refresh the entire page.
One can think of AJAX as mini server calls. A good example of AJAX in action is Google Maps. The page updates without a full page reload (i.e., mini server calls are being used to load content as the user navigates).
Image source
What is the Document Object Model (DOM)?
As an SEO professional, you need to understand what the DOM is, because it’s what Google is using to analyze and understand webpages.
The DOM is what you see when you “Inspect Element” in a browser. Simply put, you can think of the DOM as the steps the browser takes after receiving the HTML document to render the page.
The first thing the browser receives is the HTML document. After that, it will start parsing the content within this document and fetch additional resources, such as images, CSS, and JavaScript files.
The DOM is what forms from this parsing of information and resources. One can think of it as a structured, organized version of the webpage’s code.
Nowadays the DOM is often very different from the initial HTML document, due to what’s collectively called dynamic HTML. Dynamic HTML is the ability for a page to change its content depending on user input, environmental conditions (e.g. time of day), and other variables, leveraging HTML, CSS, and JavaScript.
Simple example with a <title> tag that is populated through JavaScript:
HTML source
DOM
What is headless browsing?
Headless browsing is simply the action of fetching webpages without the user interface. It is important to understand because Google, and now Baidu, leverage headless browsing to gain a better understanding of the user’s experience and the content of webpages.
PhantomJS and Zombie.js are scripted headless browsers, typically used for automating web interaction for testing purposes, and rendering static HTML snapshots for initial requests (pre-rendering).
Why can JavaScript be challenging for SEO? (and how to fix issues)
There are three (3) primary reasons to be concerned about JavaScript on your site:
Crawlability: Bots’ ability to crawl your site.
Obtainability: Bots’ ability to access information and parse your content.
Perceived site latency: AKA the Critical Rendering Path.
Crawlability
Are bots able to find URLs and understand your site’s architecture? There are two important elements here:
Blocking search engines from your JavaScript (even accidentally).
Proper internal linking, not leveraging JavaScript events as a replacement for HTML tags.
Why is blocking JavaScript such a big deal?
If search engines are blocked from crawling JavaScript, they will not be receiving your site’s full experience. This means search engines are not seeing what the end user is seeing. This can reduce your site’s appeal to search engines and could eventually be considered cloaking (if the intent is indeed malicious).
Fetch as Google and TechnicalSEO.com’s robots.txt and Fetch and Render testing tools can help identify resources that Googlebot is blocked from accessing.
The easiest way to solve this problem is through providing search engines access to the resources they need to understand your user experience.
!!! Important note: Work with your development team to determine which files should and should not be accessible to search engines.
Internal linking
Internal linking should be implemented with regular anchor tags within the HTML or the DOM, rather than by leveraging JavaScript functions to allow the user to traverse the site.
Essentially: Don’t use JavaScript’s onclick events as a replacement for internal linking. While end URLs might be found and crawled (through strings in JavaScript code or XML sitemaps), they won’t be associated with the global navigation of the site.
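The contrast can be sketched as follows (URLs and labels are illustrative; the markup patterns are shown in comments):

```javascript
// Not recommended - navigation via a JavaScript event, which crawlers do not
// treat as a link:
//   <span onclick="location.href='/products'">Products</span>
//
// Recommended - a plain anchor tag that bots can follow and associate with
// the site's navigation:
//   <a href="/products">Products</a>

// Small helper that renders crawlable nav links:
function navLink(href, label) {
  return `<a href="${href}">${label}</a>`;
}
console.log(navLink('/products', 'Products')); // <a href="/products">Products</a>
```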
Internal linking is a strong signal to search engines regarding the site’s architecture and importance of pages. In fact, internal links are so strong that they can (in certain situations) override “SEO hints” such as canonical tags.
URL structure
Historically, JavaScript-based websites (aka “AJAX sites”) used fragment identifiers (#) within URLs.
Not recommended:
The Lone Hash (#) – The lone pound symbol is not crawlable. It is used to identify anchor links (aka jump links) – the links that allow one to jump to a piece of content on a page. Anything after the lone hash portion of the URL is never sent to the server; instead, the page automatically scrolls to the first element with a matching ID (or the first <a> element with a matching name attribute). Google recommends avoiding the use of "#" in URLs.
Hashbang (#!) (and escaped_fragment URLs) – Hashbang URLs were a hack to support crawlers (one Google now wants site owners to avoid, and that only Bing still supports). Many a moon ago, Google and Bing developed a complicated AJAX solution, whereby a pretty (#!) URL serving the user experience co-existed with an equivalent escaped_fragment HTML-based experience for bots. Google has since backtracked on this recommendation, preferring to receive the exact user experience. With escaped fragments, there are two experiences:
Original Experience (aka Pretty URL): This URL must either have a #! (hashbang) within the URL to indicate that there is an escaped fragment or a meta element indicating that an escaped fragment exists (<meta name="fragment" content="!">).
Escaped Fragment (aka Ugly URL, HTML snapshot): This URL replaces the hashbang (#!) with “_escaped_fragment_” and serves the HTML snapshot. It is called the ugly URL because it’s long and looks like (and for all intents and purposes is) a hack.
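Under the old AJAX crawling scheme, the mapping between the two URL forms was a simple string transformation, sketched here with illustrative URLs:

```javascript
// Convert a hashbang ("pretty") URL into the _escaped_fragment_ ("ugly")
// URL that a crawler using the deprecated AJAX scheme would request.
function toEscapedFragment(url) {
  const [base, fragment] = url.split('#!');
  if (fragment === undefined) return url; // no hashbang: nothing to escape
  const sep = base.includes('?') ? '&' : '?';
  return `${base}${sep}_escaped_fragment_=${encodeURIComponent(fragment)}`;
}

console.log(toEscapedFragment('https://example.com/page#!key=value'));
// https://example.com/page?_escaped_fragment_=key%3Dvalue
```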
Recommended:
pushState History API – PushState is navigation-based and part of the History API (think: your web browsing history). Essentially, pushState updates the URL in the address bar, and only what needs to change on the page is updated. It allows JS sites to leverage “clean” URLs. Google currently supports pushState when it is used to support browser navigation for client-side or hybrid rendering.
A good use of pushState is for infinite scroll (i.e., as the user hits new parts of the page the URL will update). Ideally, if the user refreshes the page, the experience will land them in the exact same spot. However, they do not need to refresh the page, as the content updates as they scroll down, while the URL is updated in the address bar.
Example: A good example of a search engine-friendly infinite scroll implementation, created by Google’s John Mueller (go figure), can be found here. He technically leverages replaceState(), which doesn’t include the same back button functionality as pushState.
Read more: Mozilla PushState History API Documents
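The infinite scroll pattern described above might be sketched as follows (the URL pattern, the page math, and the browser wiring are all assumptions):

```javascript
// Pure helper: which URL corresponds to the portion of the feed the user
// has scrolled to.
function urlForPage(basePath, pageNumber) {
  return pageNumber <= 1 ? basePath : `${basePath}?page=${pageNumber}`;
}

// Browser-only wiring, commented out so the sketch also runs under Node:
// window.addEventListener('scroll', () => {
//   const page = Math.floor(window.scrollY / PAGE_HEIGHT) + 1;
//   // Update the address bar without reloading; back/forward still work.
//   history.pushState({ page }, '', urlForPage('/articles', page));
// });

console.log(urlForPage('/articles', 3)); // "/articles?page=3"
```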
Obtainability
Search engines have been shown to employ headless browsing to render the DOM in order to gain a better understanding of the user’s experience and the content on the page. That is to say, Google can process some JavaScript and uses the DOM (instead of the HTML document).
At the same time, there are situations where search engines struggle to comprehend JavaScript. Nobody wants a Hulu situation to happen to their site or a client’s site. It is crucial to understand how bots are interacting with your onsite content. When you aren’t sure, test.
Assuming we’re talking about a search engine bot that executes JavaScript, there are a few important elements for search engines to be able to obtain content:
If the user must interact for something to fire, search engines probably aren’t seeing it.
Google is a lazy user. It doesn’t click, it doesn’t scroll, and it doesn’t log in. If the full UX demands action from the user, special precautions should be taken to ensure that bots are receiving an equivalent experience.
If the JavaScript executes after the load event fires plus ~5 seconds*, search engines may not be seeing it.
*John Mueller mentioned that there is no specific timeout value; however, sites should aim to load within five seconds.
*Screaming Frog tests show a correlation to five seconds to render content.
*The load event plus five seconds is what Google’s PageSpeed Insights, Mobile Friendliness Tool, and Fetch as Google use; check out Max Prin’s test timer.
If there are errors within the JavaScript, browsers and search engines alike can fail to execute the rest of the code, potentially missing entire sections of the page.
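A tiny illustration of how one error can hide later content (the "sections" are, of course, illustrative):

```javascript
// If a script throws partway through, nothing after the error runs - any
// content the later statements would have rendered never reaches the DOM.
const rendered = [];
function renderPage() {
  rendered.push('header');
  missingFunction();             // ReferenceError: execution stops here
  rendered.push('main content'); // never reached
}
try {
  renderPage();
} catch (e) {
  // A browser (or a rendering bot) would log the error and move on,
  // having seen only the header.
}
console.log(rendered); // [ 'header' ]
```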
How to make sure Google and other search engines can get your content
1. TEST
The most popular solution to resolving JavaScript is probably not resolving anything (grab a coffee and let Google work its algorithmic brilliance). Providing Google with the same experience as searchers is Google’s preferred scenario.
Google first announced being able to “better understand the web (i.e., JavaScript)” in May 2014. Industry experts suggested that Google could crawl JavaScript way before this announcement. The iPullRank team offered two great pieces on this in 2011: Googlebot is Chrome and How smart are Googlebots? (thank you, Josh and Mike). Adam Audette’s 2015 study, Google can crawl JavaScript and leverages the DOM, confirmed it. Therefore, if you can see your content in the DOM, chances are your content is being parsed by Google.
Recently, Bartosz Goralewicz performed a cool experiment testing a combination of various JavaScript libraries and frameworks to determine how Google interacts with the pages (e.g., are they indexing URLs/content? How does GSC interact? Etc.). It ultimately showed that Google is able to interact with many forms of JavaScript and highlighted certain frameworks as perhaps more challenging. John Mueller even started a JavaScript search group (from what I’ve read, it’s fairly therapeutic).
All of these studies are amazing and help SEOs understand when to be concerned and take a proactive role. However, before you decide that sitting back is the right solution for your site, I recommend being actively cautious by experimenting with small sections. Think: Jim Collins’s “bullets, then cannonballs” philosophy from his book Great by Choice:
“A bullet is an empirical test aimed at learning what works and meets three criteria: a bullet must be low-cost, low-risk, and low-distraction… 10Xers use bullets to empirically validate what will actually work. Based on that empirical validation, they then concentrate their resources to fire a cannonball, enabling large returns from concentrated bets.”
Consider testing and reviewing through the following:
Confirm that your content is appearing within the DOM.
Test a subset of pages to see if Google can index content.
Manually check quotes from your content.
Fetch as Google and see if content appears.
Fetch as Google supposedly occurs around the load event or before timeout. It's a great way to check whether Google can see your content and whether you’re blocking JavaScript in your robots.txt. Although Fetch as Google is not foolproof, it’s a good starting point.
Note: If you aren’t verified in GSC, try Technicalseo.com’s Fetch and Render As Any Bot Tool.
After you’ve tested all this, what if something's not working and search engines and bots are struggling to index and obtain your content? Perhaps you’re concerned about alternative search engines (DuckDuckGo, Facebook, LinkedIn, etc.), or maybe you’re leveraging meta information that needs to be parsed by other bots, such as Twitter summary cards or Facebook Open Graph tags. If any of this is identified in testing or presents itself as a concern, an HTML snapshot may be the only option.
2. HTML SNAPSHOTS
What are HTML snapshots?
An HTML snapshot is a fully rendered page (as one might see in the DOM) that can be returned to search engine bots (think: a static HTML version of the DOM).
Google introduced HTML snapshots in 2009, deprecated (but still supported) them in 2015, and awkwardly mentioned them as an element to “avoid” in late 2016. HTML snapshots are a contentious topic with Google. However, they're important to understand, because in certain situations they're necessary.
If search engines (or sites like Facebook) cannot grasp your JavaScript, it’s better to return an HTML snapshot than not to have your content indexed and understood at all. Ideally, your site would leverage some form of user-agent detection on the server side and return the HTML snapshot to the bot.
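A server-side sketch of that user-agent detection (the bot list and the framework wiring are illustrative, not exhaustive; production setups often also verify crawler IPs):

```javascript
// Decide whether a request should receive the pre-rendered HTML snapshot.
const BOT_PATTERNS = /googlebot|bingbot|facebookexternalhit|twitterbot/i;

function wantsSnapshot(userAgent) {
  return BOT_PATTERNS.test(userAgent || '');
}

// Express-style middleware sketch (renderSnapshot is hypothetical):
// app.use((req, res, next) => {
//   if (wantsSnapshot(req.headers['user-agent'])) {
//     return res.send(renderSnapshot(req.url)); // static HTML version of the DOM
//   }
//   next(); // regular JavaScript-driven experience
// });

console.log(wantsSnapshot('Mozilla/5.0 (compatible; Googlebot/2.1)')); // true
```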
At the same time, one must recognize that Google wants the same experience as the user (i.e., only provide Google with an HTML snapshot if the tests are dire and the JavaScript search group cannot provide support for your situation).
Considerations
When considering HTML snapshots, you must consider that Google has deprecated this AJAX recommendation. Although Google technically still supports it, Google recommends avoiding it. Yes, Google changed its mind and now wants to receive the same experience as the user. This direction makes sense, as it allows the bot to receive an experience truer to the user experience.
A second consideration factor relates to the risk of cloaking. If the HTML snapshots are found to not represent the experience on the page, it’s considered a cloaking risk. Straight from the source:
“The HTML snapshot must contain the same content as the end user would see in a browser. If this is not the case, it may be considered cloaking.” – Google Developer AJAX Crawling FAQs
Benefits
Despite the considerations, HTML snapshots have powerful advantages:
Knowledge that search engines and crawlers will be able to understand the experience.
Certain types of JavaScript may be harder for Google to grasp (cough... Angular 2, the successor to AngularJS ...cough).
Other search engines and crawlers (think: Bing, Facebook) will be able to understand the experience.
Bing, among other search engines, has not stated that it can crawl and index JavaScript. HTML snapshots may be the only solution for a JavaScript-heavy site. As always, test to make sure that this is the case before diving in.
Site latency
When browsers receive an HTML document and create the DOM (although there is some level of pre-scanning), most resources are loaded as they appear within the HTML document. This means that if you have a huge file toward the top of your HTML document, a browser will load that immense file first.
The concept of Google’s critical rendering path is to load what the user needs as soon as possible, which can be translated to → "get everything above-the-fold in front of the user, ASAP."
Critical Rendering Path - Optimized Rendering Loads Progressively ASAP:
However, if you have unnecessary resources or JavaScript files clogging up the page’s ability to load, you get “render-blocking JavaScript.” Meaning: your JavaScript is delaying the point at which the page appears to load (also called perceived latency).
Render-blocking JavaScript – Solutions
If you analyze your page speed results (through tools like Page Speed Insights Tool, WebPageTest.org, CatchPoint, etc.) and determine that there is a render-blocking JavaScript issue, here are three potential solutions:
Inline: Add the critical JavaScript directly in the HTML document.
Async: Make the JavaScript asynchronous (i.e., add the “async” attribute to the script tag).
Defer: Place the JavaScript lower within the HTML (or add the “defer” attribute).
!!! Important note: It's important to understand that scripts must be arranged in order of precedence. Scripts that are used to load the above-the-fold content must be prioritized and should not be deferred. Also, any script that references another file can only be used after the referenced file has loaded. Make sure to work closely with your development team to confirm that there are no interruptions to the user’s experience.
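The three options map to script tags like these (file names are illustrative; the markup is shown in comments):

```javascript
// Inline: <script> /* small, critical JS pasted directly into the page */ </script>
// Async:  <script src="analytics.js" async></script> // fetch in parallel, run when ready
// Defer:  <script src="widgets.js" defer></script>   // fetch in parallel, run after parsing

// A tiny helper that emits the external-script variants:
function scriptTag(src, strategy) {
  const attr = strategy === 'async' ? ' async' : strategy === 'defer' ? ' defer' : '';
  return `<script src="${src}"${attr}></script>`;
}
console.log(scriptTag('widgets.js', 'defer')); // <script src="widgets.js" defer></script>
```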
Read more: Google Developer’s Speed Documentation
TL;DR - Moral of the story
Crawlers and search engines will do their best to crawl, execute, and interpret your JavaScript, but it is not guaranteed. Make sure your content is crawlable, obtainable, and isn’t developing site latency obstructions. The key = every situation demands testing. Based on the results, evaluate potential solutions.
Thanks: Thank you Max Prin (@maxxeight) for reviewing this content piece and sharing your knowledge, insight, and wisdom. It wouldn’t be the same without you.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
JavaScript & SEO: Making Your Bot Experience As Good As Your User Experience
Posted by alexis-sanders
Understanding JavaScript and its potential impact on search performance is a core skillset of the modern SEO professional. If search engines can’t crawl a site or can’t parse and understand the content, nothing is going to get indexed and the site is not going to rank.
The most important questions for an SEO relating to JavaScript: Can search engines see the content and grasp the website experience? If not, what solutions can be leveraged to fix this?
Fundamentals
What is JavaScript?
When creating a modern web page, there are three major components:
HTML – Hypertext Markup Language serves as the backbone, or organizer of content, on a site. It provides the structure of the website (e.g., headings, paragraphs, list elements, etc.) and defines static content.
CSS – Cascading Style Sheets are the design, glitz, glam, and style added to a website. It makes up the presentation layer of the page.
JavaScript – JavaScript is the interactivity and a core component of the dynamic web.
Learn more about webpage development and how to code basic JavaScript.
JavaScript is either placed in the HTML document within <script> tags (i.e., it is embedded in the HTML) or linked/referenced. There are currently a plethora of JavaScript libraries and frameworks, including jQuery, AngularJS, ReactJS, EmberJS, etc.
What is AJAX?
AJAX, or Asynchronous JavaScript and XML, is a set of web development techniques combining JavaScript and XML that allows web applications to communicate with a server in the background without interfering with the current page. Asynchronous means that other functions or lines of code can run while the async script is running. XML used to be the primary language to pass data; however, the term AJAX is used for all types of data transfers (including JSON; I guess "AJAJ" doesn’t sound as clean as "AJAX" [pun intended]).
When creating a modern web page, there are three major components:
HTML – Hypertext Markup Language serves as the backbone, or organizer of content, on a site. It provides the structure of the website (e.g., headings, paragraphs, list elements) and defines static content.
CSS – Cascading Style Sheets are the design, glitz, glam, and style added to a website. It makes up the presentation layer of the page.
JavaScript – JavaScript is the interactivity and a core component of the dynamic web.
Learn more about webpage development and how to code basic JavaScript.
JavaScript is either placed in the HTML document within <script> tags (i.e., it is embedded in the HTML) or linked/referenced. There are currently a plethora of JavaScript libraries and frameworks, including jQuery, AngularJS, ReactJS, EmberJS, etc.
JavaScript libraries and frameworks:
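A minimal sketch of the two placement options (file names are illustrative):

```html
<!-- Embedded: JavaScript placed directly in the HTML document -->
<script>
  console.log("Hello from embedded JavaScript");
</script>

<!-- Linked/referenced: JavaScript loaded from an external file -->
<script src="/js/app.js"></script>
```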
What is AJAX?
AJAX, or Asynchronous JavaScript and XML, is a set of web development techniques combining JavaScript and XML that allows web applications to communicate with a server in the background without interfering with the current page. Asynchronous means that other functions or lines of code can run while the async script is running. XML used to be the primary language to pass data; however, the term AJAX is used for all types of data transfers (including JSON; I guess "AJAJ" doesn’t sound as clean as "AJAX" [pun intended]).
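The "other code keeps running" behavior can be sketched without any real network call. In this hedged example, a timer stands in for a server round trip, and all names are illustrative:

```javascript
// Sketch: "asynchronous" means the lines after an async call keep running
// while the async work (here, a timer standing in for a server response)
// finishes in the background. No real network request is made.
const order = [];

function fakeAjaxCall() {
  return new Promise((resolve) => {
    setTimeout(() => {
      order.push("response received");
      resolve({ status: 200 });
    }, 10);
  });
}

const request = fakeAjaxCall();
order.push("later code ran first");

request.then(() => {
  console.log(order.join(" -> "));
  // "later code ran first -> response received"
});
```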
A common use of AJAX is to update the content or layout of a webpage without initiating a full page refresh. Normally, when a page loads, all the assets on the page must be requested and fetched from the server and then rendered on the page. However, with AJAX, only the assets that differ between pages need to be loaded, which improves the user experience as they do not have to refresh the entire page.
One can think of AJAX as mini server calls. A good example of AJAX in action is Google Maps. The page updates without a full page reload (i.e., mini server calls are being used to load content as the user navigates).
What is the Document Object Model (DOM)?
As an SEO professional, you need to understand what the DOM is, because it’s what Google is using to analyze and understand webpages.
The DOM is what you see when you “Inspect Element” in a browser. Simply put, you can think of the DOM as the steps the browser takes after receiving the HTML document to render the page.
The first thing the browser receives is the HTML document. After that, it will start parsing the content within this document and fetch additional resources, such as images, CSS, and JavaScript files.
The DOM is what forms from this parsing of information and resources. One can think of it as a structured, organized version of the webpage’s code.
Nowadays the DOM is often very different from the initial HTML document, due to what’s collectively called dynamic HTML. Dynamic HTML is the ability for a page to change its content depending on user input, environmental conditions (e.g. time of day), and other variables, leveraging HTML, CSS, and JavaScript.
Simple example with a <title> tag that is populated through JavaScript:
HTML source
DOM
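A minimal sketch of that example, assuming a page that sets its title client-side: the HTML source ships an empty <title>, while the DOM (what "Inspect Element" shows) contains the populated value.

```html
<!-- HTML source: the <title> is empty before JavaScript runs -->
<!DOCTYPE html>
<html>
  <head>
    <title></title>
    <script>
      // Populates the title after the script executes
      document.title = "Populated by JavaScript";
    </script>
  </head>
  <body></body>
</html>

<!-- DOM after execution (what "Inspect Element" shows):
     <title>Populated by JavaScript</title> -->
```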
What is headless browsing?
Headless browsing is simply the action of fetching webpages without the user interface. It is important to understand because Google, and now Baidu, leverage headless browsing to gain a better understanding of the user’s experience and the content of webpages.
PhantomJS and Zombie.js are scripted headless browsers, typically used for automating web interaction for testing purposes, and rendering static HTML snapshots for initial requests (pre-rendering).
Why can JavaScript be challenging for SEO? (and how to fix issues)
There are three (3) primary reasons to be concerned about JavaScript on your site:
Crawlability: Bots’ ability to crawl your site.
Obtainability: Bots’ ability to access information and parse your content.
Perceived site latency: AKA the Critical Rendering Path.
Crawlability
Are bots able to find URLs and understand your site’s architecture? There are two important elements here:
Blocking search engines from your JavaScript (even accidentally).
Proper internal linking, not leveraging JavaScript events as a replacement for HTML tags.
Why is blocking JavaScript such a big deal?
If search engines are blocked from crawling JavaScript, they will not be receiving your site’s full experience. This means search engines are not seeing what the end user is seeing. This can reduce your site’s appeal to search engines and could eventually be considered cloaking (if the intent is indeed malicious).
Fetch as Google and TechnicalSEO.com’s robots.txt and Fetch and Render testing tools can help to identify resources that Googlebot is blocked from.
The easiest way to solve this problem is through providing search engines access to the resources they need to understand your user experience.
!!! Important note: Work with your development team to determine which files should and should not be accessible to search engines.
Internal linking
Internal linking should be implemented with regular anchor tags within the HTML or the DOM (using an HTML tag) versus leveraging JavaScript functions to allow the user to traverse the site.
Essentially: Don’t use JavaScript’s onclick events as a replacement for internal linking. While end URLs might be found and crawled (through strings in JavaScript code or XML sitemaps), they won’t be associated with the global navigation of the site.
Internal linking is a strong signal to search engines regarding the site’s architecture and importance of pages. In fact, internal links are so strong that they can (in certain situations) override “SEO hints” such as canonical tags.
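A quick illustration of the difference (the URL is hypothetical):

```html
<!-- Crawlable internal link: a standard anchor tag with an href -->
<a href="/products/blue-widget">Blue widget</a>

<!-- Risky: the URL exists only inside a JavaScript event, not as a link,
     so it won't be associated with the site's navigation -->
<span onclick="location.href='/products/blue-widget'">Blue widget</span>
```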
URL structure
Historically, JavaScript-based websites (aka “AJAX sites”) were using fragment identifiers (#) within URLs.
Not recommended:
The Lone Hash (#) – The lone pound symbol is not crawlable. It is used to identify anchor links (aka jump links), which allow a user to jump to a piece of content on a page. Anything after the lone hash portion of the URL is never sent to the server; it simply causes the page to scroll to the first element with a matching ID (or the first <a> element whose name attribute matches). Google recommends avoiding the use of "#" in URLs.
Hashbang (#!) (and escaped_fragment URLs) – Hashbang URLs were a hack to support crawlers (one that Google now wants to avoid, and that only Bing still supports). Many a moon ago, Google and Bing developed a complicated AJAX solution, whereby a pretty (#!) URL serving the UX co-existed with an equivalent escaped_fragment HTML-based experience for bots. Google has since backtracked on this recommendation, preferring to receive the exact user experience. With escaped fragments, there are two experiences:
Original Experience (aka Pretty URL): This URL must either have a #! (hashbang) within the URL to indicate that there is an escaped fragment or a meta element indicating that an escaped fragment exists (<meta name="fragment" content="!">).
Escaped Fragment (aka Ugly URL, HTML snapshot): This URL replaces the hashbang (#!) with “_escaped_fragment_” and serves the HTML snapshot. It is called the ugly URL because it’s long and looks like (and for all intents and purposes is) a hack.
Recommended:
pushState History API – PushState is navigation-based and part of the History API (think: your web browsing history). Essentially, pushState updates the URL in the address bar, and only what needs to change on the page is updated. It allows JavaScript sites to leverage “clean” URLs. Google currently supports pushState when it is used for browser navigation with client-side or hybrid rendering.
A good use of pushState is for infinite scroll (i.e., as the user hits new parts of the page the URL will update). Ideally, if the user refreshes the page, the experience will land them in the exact same spot. However, they do not need to refresh the page, as the content updates as they scroll down, while the URL is updated in the address bar.
Example: A good example of a search engine-friendly infinite scroll implementation, created by Google’s John Mueller (go figure), can be found here. He technically leverages the replaceState(), which doesn’t include the same back button functionality as pushState.
Read more: Mozilla PushState History API Documents
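As a rough sketch of the infinite-scroll idea: map the scroll position to a "page" URL and push it into the browser history. The URL scheme and page height below are assumptions, and history.pushState itself is a browser-only API (passed in here as a parameter so the logic is self-contained):

```javascript
// Rough sketch of pushState-driven infinite scroll. The URL scheme and
// page height are assumptions; history.pushState itself is a browser API.
const PAGE_HEIGHT = 2000; // assumed pixels of content per "page"

function urlForScroll(scrollY) {
  const page = Math.floor(scrollY / PAGE_HEIGHT) + 1;
  return `/articles?page=${page}`;
}

function updateUrlForScroll(scrollY, hist) {
  // Updates the address bar without a full page reload; in a browser,
  // call updateUrlForScroll(window.scrollY, window.history) on scroll.
  const url = urlForScroll(scrollY);
  hist.pushState({ url }, "", url);
}

console.log(urlForScroll(0));    // "/articles?page=1"
console.log(urlForScroll(4100)); // "/articles?page=3"
```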
Obtainability
Search engines have been shown to employ headless browsing to render the DOM to gain a better understanding of the user’s experience and the content on page. That is to say, Google can process some JavaScript and uses the DOM (instead of the HTML document).
At the same time, there are situations where search engines struggle to comprehend JavaScript. Nobody wants a Hulu situation to happen to their site or a client’s site. It is crucial to understand how bots are interacting with your onsite content. When you aren’t sure, test.
Assuming we’re talking about a search engine bot that executes JavaScript, there are a few important elements for search engines to be able to obtain content:
If the user must interact for something to fire, search engines probably aren’t seeing it.
Google is a lazy user. It doesn’t click, it doesn’t scroll, and it doesn’t log in. If the full UX demands action from the user, special precautions should be taken to ensure that bots are receiving an equivalent experience.
If the JavaScript renders after the load event fires plus ~5 seconds*, search engines may not be seeing it.
*John Mueller mentioned that there is no specific timeout value; however, sites should aim to load within five seconds.
*Screaming Frog tests show a correlation to five seconds to render content.
*The load event plus five seconds is what Google’s PageSpeed Insights, Mobile Friendliness Tool, and Fetch as Google use; check out Max Prin’s test timer.
If there are errors within the JavaScript, both browsers and search engines may stop executing the code, potentially missing entire sections of the page.
How to make sure Google and other search engines can get your content
1. TEST
The most popular solution to resolving JavaScript is probably not resolving anything (grab a coffee and let Google work its algorithmic brilliance). Providing Google with the same experience as searchers is Google’s preferred scenario.
Google first announced being able to “better understand the web (i.e., JavaScript)” in May 2014. Industry experts suggested that Google could crawl JavaScript well before this announcement. The iPullRank team offered two great pieces on this in 2011: Googlebot is Chrome and How smart are Googlebots? (thank you, Josh and Mike). Adam Audette’s 2015 test, Google can crawl JavaScript and leverages the DOM, confirmed it. Therefore, if you can see your content in the DOM, chances are your content is being parsed by Google.
Recently, Bartosz Góralewicz performed a cool experiment testing a combination of various JavaScript libraries and frameworks to determine how Google interacts with the pages (e.g., are they indexing URL/content? How does GSC interact? Etc.). It ultimately showed that Google is able to interact with many forms of JavaScript and highlighted certain frameworks as perhaps more challenging. John Mueller even started a JavaScript search group (from what I’ve read, it’s fairly therapeutic).
All of these studies are amazing and help SEOs understand when to be concerned and take a proactive role. However, before you determine that sitting back is the right solution for your site, I recommend being actively cautious by experimenting with a small section of the site. Think: Jim Collins’s “bullets, then cannonballs” philosophy from his book Great by Choice:
“A bullet is an empirical test aimed at learning what works and meets three criteria: a bullet must be low-cost, low-risk, and low-distraction… 10Xers use bullets to empirically validate what will actually work. Based on that empirical validation, they then concentrate their resources to fire a cannonball, enabling large returns from concentrated bets.”
Consider testing and reviewing through the following:
Confirm that your content is appearing within the DOM.
Test a subset of pages to see if Google can index content.
Manually check quotes from your content.
Fetch with Google and see if content appears.
Fetch with Google supposedly occurs around the load event or before timeout. It's a great test to check to see if Google will be able to see your content and whether or not you’re blocking JavaScript in your robots.txt. Although Fetch with Google is not foolproof, it’s a good starting point.
Note: If you aren’t verified in GSC, try Technicalseo.com’s Fetch and Render As Any Bot Tool.
After you’ve tested all this, what if something's not working and search engines and bots are struggling to index and obtain your content? Perhaps you’re concerned about alternative search engines (DuckDuckGo, Facebook, LinkedIn, etc.), or maybe you’re leveraging meta information that needs to be parsed by other bots, such as Twitter summary cards or Facebook Open Graph tags. If any of this is identified in testing or presents itself as a concern, an HTML snapshot may be the only decision.
2. HTML SNAPSHOTS
What are HTML snapshots?
HTML snapshots are a fully rendered page (as one might see in the DOM) that can be returned to search engine bots (think: a static HTML version of the DOM).
Google introduced HTML snapshots in 2009, deprecated (but still supported) them in 2015, and awkwardly mentioned them as an element to “avoid” in late 2016. HTML snapshots are a contentious topic with Google. However, they're important to understand, because in certain situations they're necessary.
If search engines (or sites like Facebook) cannot grasp your JavaScript, it’s better to return an HTML snapshot than not to have your content indexed and understood at all. Ideally, your site would leverage some form of user-agent detection on the server side and return the HTML snapshot to the bot.
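Server-side user-agent detection might be sketched like this. The bot patterns and file names are illustrative assumptions, not an exhaustive or authoritative list; verify against real user-agent strings before relying on anything similar:

```javascript
// Sketch of server-side user-agent detection for serving HTML snapshots.
// The bot patterns and file names are illustrative, not exhaustive.
const BOT_PATTERNS = [/googlebot/i, /bingbot/i, /facebookexternalhit/i];

function isBot(userAgent) {
  return BOT_PATTERNS.some((re) => re.test(userAgent || ""));
}

function chooseResponse(userAgent) {
  // Bots get the pre-rendered snapshot; users get the normal JS app shell.
  return isBot(userAgent) ? "snapshot.html" : "app.html";
}

console.log(chooseResponse("Mozilla/5.0 (compatible; Googlebot/2.1)"));
// "snapshot.html"
```

Note that the snapshot must mirror the user-facing content exactly, or this approach drifts into cloaking territory (see the considerations below in this same section of the article).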
At the same time, one must recognize that Google wants the same experience as the user (i.e., only provide Google with an HTML snapshot if the tests are dire and the JavaScript search group cannot provide support for your situation).
Considerations
When weighing HTML snapshots, remember that Google has deprecated this AJAX crawling recommendation. Although Google technically still supports it, Google recommends avoiding it. Yes, Google changed its mind and now wants to receive the same experience as the user. This direction makes sense, as it allows the bot to receive an experience more true to the user experience.
A second factor is the risk of cloaking. If the HTML snapshots are found to not represent the experience on the page, it’s considered a cloaking risk. Straight from the source:
“The HTML snapshot must contain the same content as the end user would see in a browser. If this is not the case, it may be considered cloaking.” – Google Developer AJAX Crawling FAQs
Benefits
Despite the considerations, HTML snapshots have powerful advantages:
Knowledge that search engines and crawlers will be able to understand the experience.
Certain types of JavaScript may be harder for Google to grasp (cough... Angular 2, the successor to AngularJS …cough).
Other search engines and crawlers (think: Bing, Facebook) will be able to understand the experience.
Bing, among other search engines, has not stated that it can crawl and index JavaScript. HTML snapshots may be the only solution for a JavaScript-heavy site. As always, test to make sure that this is the case before diving in.
Site latency
When browsers receive an HTML document and create the DOM (although there is some level of pre-scanning), most resources are loaded as they appear within the HTML document. This means that if you have a huge file toward the top of your HTML document, a browser will load that immense file first.
The concept of Google’s critical rendering path is to load what the user needs as soon as possible, which can be translated to → "get everything above-the-fold in front of the user, ASAP."
Critical Rendering Path - Optimized Rendering Loads Progressively ASAP:
However, if you have unnecessary resources or JavaScript files clogging up the page’s ability to load, you get “render-blocking JavaScript.” Meaning: your JavaScript is blocking the page’s potential to appear as if it’s loading faster (also called: perceived latency).
Render-blocking JavaScript – Solutions
If you analyze your page speed results (through tools like Page Speed Insights Tool, WebPageTest.org, CatchPoint, etc.) and determine that there is a render-blocking JavaScript issue, here are three potential solutions:
Inline: Add the JavaScript in the HTML document.
Async: Make JavaScript asynchronous (i.e., add the “async” attribute to the script tag).
Defer: Place JavaScript lower within the HTML document (or add the “defer” attribute to the script tag).
!!! Important note: It's important to understand that scripts must be arranged in order of precedence. Scripts that are used to load the above-the-fold content must be prioritized and should not be deferred. Also, any script that references another file can only be used after the referenced file has loaded. Make sure to work closely with your development team to confirm that there are no interruptions to the user’s experience.
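The three options might look like this in a page's markup (file paths are hypothetical):

```html
<!-- Inline: small critical script embedded directly in the document -->
<script>
  /* critical above-the-fold logic goes here */
</script>

<!-- Async: downloads in parallel, executes as soon as it arrives
     (execution order across async scripts is not guaranteed) -->
<script async src="/js/analytics.js"></script>

<!-- Defer: downloads in parallel, executes after HTML parsing, in order -->
<script defer src="/js/app.js"></script>
```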
Read more: Google Developer’s Speed Documentation
TL;DR - Moral of the story
Crawlers and search engines will do their best to crawl, execute, and interpret your JavaScript, but it is not guaranteed. Make sure your content is crawlable, obtainable, and isn’t developing site latency obstructions. The key = every situation demands testing. Based on the results, evaluate potential solutions.
Thanks: Thank you Max Prin (@maxxeight) for reviewing this content piece and sharing your knowledge, insight, and wisdom. It wouldn’t be the same without you.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!