#i still find difficult to get through the angular documentation
Text
One of the most difficult tasks for me when it comes to learning a new technology is getting used to the syntax and starting to use it properly. It’s even worse when it comes to documentation; it’s quite complex for me, and I find it easier to just get my hands dirty and start doing stuff (and read up on the concepts involved). I also recently found a book series that helps me get through this process: the Head First series. I’m enjoying this reading, and I think after finishing the specific book from this series, I’ll be able to get a better understanding of the whole technology.
#tech#technology#coding#documentation#studyblr#computer science#i'm currently taking up angular#learning python#and getting better at sql#i still find difficult to get through the angular documentation#but i didn't find any head first book for it#instead i'm reading the one for java because i'm really enjoying it
10 notes
Link

Recently, a lot of people have been asking what they must learn to become a front-end developer. How do you learn front-end development when there are so many things to learn and everything changes so fast, with new technologies and new popular libraries almost every year? Once you start reading articles and news about it, you’ll feel lost. All of these technologies, React, Angular, Vue.js, jQuery, JavaScript, even Bootstrap, are popping up everywhere. At this point, you probably wonder how and where to start and how to eventually become good at it. In this article, I’ll try to answer this question, show you what front-end developers do on a daily basis, and explain what steps you should take to achieve your goal, what’s essential, and what is not so much.
What is front-end development, and what does a front-end developer do?
Front-end development is the part of the application that we as users can see and interact with. From a more technical point of view, it is responsible for collecting data from the user and passing it to the back-end, and for displaying back-end data to the user. The front-end developer also has one extra task: he or she must implement the designer’s concepts.
So, what skills does a person need to be a good front-end developer?
Start with HTML
The first skill (the essential one, really) that you need to have is HTML (HyperText Markup Language); it’s the necessary foundation of front-end development. HTML creates the skeleton of our website or application. It creates blocks and elements like the menu, images, text, video, tables, inputs, etc. The good news is that it won’t take a long time to become conversant in it. After about one week of learning and practicing, you’ll be ready to create your first project in HTML. Bear in mind that HTML doesn’t apply colors and all the beauty to your elements. That’s where the next skill comes in.
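As a sketch of that “skeleton” idea, a minimal first page might look like this (the file contents and names here are just placeholders):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>My First Page</title>
  </head>
  <body>
    <!-- building blocks: a menu, a heading, text, and an image -->
    <nav>
      <a href="#">Home</a>
      <a href="#">About</a>
    </nav>
    <h1>Hello, front-end!</h1>
    <p>This paragraph is plain, unstyled HTML.</p>
    <img src="photo.jpg" alt="A placeholder photo">
  </body>
</html>
```

Everything renders in the browser’s default style; making it look good is the job of the next skill.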
Continue with CSS
The next thing that you certainly need to know is CSS (Cascading Style Sheets); it’s going to be with you throughout your whole front-end career, so you’d better get very familiar with it. You’re going to use CSS to add some style to your page. With CSS, you can control the positioning of elements and give them colors, alignment, fonts, sizes, margins, and even some animations. From one point of view, CSS is very simple in the beginning; however, once you try to master it, you’ll see how many things were hiding behind it that you had no idea about. Your approach to CSS should be that you can keep learning new tricks throughout your whole life.
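For example, a few of the properties mentioned above (colors, fonts, margins, a small animation) applied to a heading, a paragraph, and links; the selectors and values are arbitrary illustrations:

```css
/* style the heading: color, font, and spacing */
h1 {
  color: #2c3e50;
  font-family: Georgia, serif;
  margin-bottom: 16px;
}

/* limit the paragraph width and align its text */
p {
  max-width: 600px;
  text-align: center;
}

/* a simple hover animation on links */
a {
  color: steelblue;
  transition: color 0.3s ease;
}
a:hover {
  color: tomato;
}
```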
Now it’s time to practice: with HTML and CSS you can create beautiful websites, dashboards, UIs, or UI components. The more you create, the better your knowledge will become.
At this point, it would be nice to take a closer look at UI libraries like Bootstrap, Semantic UI, or Foundation. Don’t be afraid; it’s enough to learn one of them to know how to use them all. I would suggest starting with Bootstrap because it’s the most widespread one, and many companies use it.
Also, there is one more thing to mention at this point. You need to find out what responsive design and media queries are because, in 2019, your website needs to be mobile-friendly. Now we can move to the next step.
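A media query is just a CSS rule that applies only when the viewport matches a condition; here is a minimal mobile-friendly sketch (the class name and breakpoint value are arbitrary choices):

```css
/* two columns side by side on wide screens... */
.content {
  display: flex;
  gap: 16px;
}

/* ...stacked into one column on screens narrower than 768px */
@media (max-width: 768px) {
  .content {
    flex-direction: column;
  }
}
```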
Go deeper with Javascript
Now let’s start with more serious things. You should learn JavaScript if you want to increase the functionality of your websites. With JavaScript, you can add a lot of things like image sliders, form validation, popups, tooltips, and lots of other interactive elements. You can also create a connection to the back-end and send data through API calls. If you face any problems during development, you can always look at the documentation or ask other JavaScript community members in a Facebook group, on Stack Overflow, or on any programming forum.
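Form validation, one of the features mentioned above, boils down to a small function that checks user input; this is only a sketch, and the field names, rules, and messages are made up for illustration:

```javascript
// Validate a sign-up form's fields and collect error messages.
function validateForm(fields) {
  const errors = [];
  if (!fields.email || !fields.email.includes("@")) {
    errors.push("Please enter a valid email address.");
  }
  if (!fields.password || fields.password.length < 8) {
    errors.push("Password must be at least 8 characters.");
  }
  return errors; // an empty array means the form is valid
}

console.log(validateForm({ email: "ada@example.com", password: "hunter2345" })); // []
console.log(validateForm({ email: "oops", password: "123" }).length);            // 2
```

In a real page, you would call a function like this from the form’s submit handler and display the messages next to the inputs.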
It seems like that’s it? Unfortunately not; we should also learn a modern framework for front-end development. Let’s go to the next step.
Use JS frameworks
ReactJS
It’s a component-based library created by Facebook and a superb tool for building UIs. ReactJS isn’t difficult, and it shouldn’t be if you already know JavaScript; there’s also a large community of developers and tons of resources that you can use to clarify everything you need to know.
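The component idea that React is built around, UI as a function of data, can be sketched even in plain JavaScript. This is not React’s actual API (React components return elements, not strings, and are usually written in JSX), just the underlying concept:

```javascript
// A "component" is a function that takes data (props) and returns markup.
function Greeting(props) {
  return `<h1>Hello, ${props.name}!</h1>`;
}

// Components compose: App renders one Greeting per user.
function App(props) {
  return props.users.map((name) => Greeting({ name })).join("\n");
}

console.log(App({ users: ["Ada", "Grace"] }));
```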
If you’d like to get acquainted with ReactJS, take a look at the tutorials we’ve prepared:
Angular
It’s a very popular front-end framework from Google, and here is one tricky thing: to use Angular, you need to update your knowledge and learn TypeScript (it’s very similar to JavaScript, but it’s typed and has some additional features). Angular also has wonderful documentation, lots of resources, and a big community.
Vue.js
It’s a newer framework that has gained huge popularity; it’s a component-based framework as well. The creators of Vue.js tried to make it as simple as possible, so it may be the right choice for a beginner. On the other hand, the learning resources aren’t very extensive yet, and the community is small, but growing.
Now it’s time for you to choose. If you’re unsure, maybe it’s a good idea to build a test project with each of them and see which one you feel comfortable with. It might seem like that’s everything you need to know, but there’s still one more important thing. Let’s go to the last step.
Finish with Git
The last thing you should learn is Git (a version control system); junior developers often skip it, but almost every company uses it, and it’s incredibly helpful to know how it works. It’s primarily used for the collaboration of developers on one codebase. You should start learning Git by installing it on your machine. It would be good for you to create your own repo on GitHub or Bitbucket and store your code there. Take a look at the documentation to learn the basic commands and find out what a branch, a commit, or a code review is. It seems like that’s it.
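The basics mentioned above, creating a repo and making your first commit, look something like this on the command line (assuming Git is already installed; the directory name, identity, and messages are placeholders):

```shell
mkdir demo-repo && cd demo-repo
git init -q                                  # turn the folder into a repository
git config user.email "you@example.com"      # identity Git records on each commit
git config user.name "Your Name"
echo "<h1>Hello</h1>" > index.html
git add index.html                           # stage the file
git commit -q -m "Add first page"            # commit the staged change
git log --oneline                            # one line per commit in history
```

From here, pushing to a remote on GitHub or Bitbucket is one `git remote add` and one `git push` away.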
Chris Holroyd
https://www.apzomedia.com/
Chris Holroyd is a tech magazine writer who has been writing extensively in the technology field for a few years. He has written several articles that provide exciting and knowledgeable information on digital marketing and tech support in the United States.
0 notes
Text
How To Create Better Angular Templates With Pug
About The Author
Zara Cooper is a software developer and technical writer who enjoys sharing what she learns as a developer with others.
Pug is a template engine that allows you to write cleaner templates with less repetition. In Angular, you can use Pug to write component templates and improve a project’s development workflow. In this article, Zara Cooper explains what Pug is and how you can use it in your Angular app.
As a developer, I appreciate how Angular apps are structured and the many options the Angular CLI makes available to configure them. Components provide an amazing means to structure views, facilitate code reusability, interpolation, data binding, and other business logic for views.
Angular CLI supports multiple built-in CSS preprocessor options for component styling, like Sass/SCSS, LESS, and Stylus. However, when it comes to templates, only two options are available: HTML and SVG. This is despite the existence of many more efficient options such as Pug, Slim, and HAML, among others.
In this article, I’ll cover how you — as an Angular developer — can use Pug to write better templates more efficiently. You’ll learn how to install Pug in your Angular apps and transition existing apps that use HTML to use Pug.
Pug (formerly known as Jade) is a template engine. This means it’s a tool that generates documents from templates that integrate some specified data. In this case, Pug is used to write templates that are compiled into functions that take in data and render HTML documents.
In addition to providing a more streamlined way to write templates, it offers a number of valuable features that go beyond just template writing like mixins that facilitate code reusability, enable embedding of JavaScript code, provide iterators, conditionals, and so on.
Although HTML is universally used and works adequately in templates, it is not DRY and can get pretty difficult to read, write, and maintain, especially with larger component templates. That’s where Pug comes in. With Pug, your templates become simpler to write and read, and you can extend the functionality of your templates as an added bonus. In the rest of this article, I’ll walk you through how to use Pug in your Angular component templates.
Why You Should Use Pug
HTML is fundamentally repetitive. For most elements you have to have an opening and closing tag which is not DRY. Not only do you have to write more with HTML, but you also have to read more. With Pug, there are no opening and closing angle brackets and no closing tags. You are therefore writing and reading a lot less code.
For example, here’s an HTML table:
<table>
  <thead>
    <tr>
      <th>Country</th>
      <th>Capital</th>
      <th>Population</th>
      <th>Currency</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Canada</td>
      <td>Ottawa</td>
      <td>37.59 million</td>
      <td>Canadian Dollar</td>
    </tr>
    <tr>
      <td>South Africa</td>
      <td>Cape Town, Pretoria, Bloemfontein</td>
      <td>57.78 million</td>
      <td>South African Rand</td>
    </tr>
    <tr>
      <td>United Kingdom</td>
      <td>London</td>
      <td>66.65 million</td>
      <td>Pound Sterling</td>
    </tr>
  </tbody>
</table>
This is how that same table looks in Pug:
table
  thead
    tr
      th Country
      th Capital(s)
      th Population
      th Currency
  tbody
    tr
      td Canada
      td Ottawa
      td 37.59 million
      td Canadian Dollar
    tr
      td South Africa
      td Cape Town, Pretoria, Bloemfontein
      td 57.78 million
      td South African Rand
    tr
      td United Kingdom
      td London
      td 66.65 million
      td Pound Sterling
Comparing the two versions of the table, Pug looks a lot cleaner than HTML and has better code readability. Although negligible in this small example, you write seven fewer lines in the Pug table than in the HTML table. As you create more templates over time for a project, you end up cumulatively writing less code with Pug.
Beyond the functionality provided by the Angular template language, Pug extends what you can achieve in your templates. With features such as mixins, text and attribute interpolation, conditionals, and iterators, you can use Pug to solve problems more simply, in contrast to writing whole separate components or importing dependencies and setting up directives to fulfill a requirement.
Some Features Of Pug
Pug offers a wide range of features but what features you can use depends on how you integrate Pug into your project. Here are a few features you might find useful.
Adding external Pug files to a template using include.
Let’s say, for example, that you’d like to have a more succinct template but do not feel the need to create additional components. You can take out sections from a template and put them in partial templates then include them back into the original template.
For example, in this home page component, the ‘About’ and ‘Services’ sections are in external files and are included in the home page component.
//- home.component.pug
h1 Leone and Sons
h2 Photography Studio
include partials/about.partial.pug
include partials/services.partial.pug

//- about.partial.pug
h2 About our business
p Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.

//- services.partial.pug
h2 Services we offer
p Our services include:
ul
  li Headshots
  li Corporate Event Photography
HTML render of included partial templates example (Large preview)
Reusing code blocks using mixins.
For example, let’s say you wanted to reuse a code block to create some buttons. You’d reuse that block of code using a mixin.
mixin menu-button(text, action)
  button.btn.btn-sm.m-1('(click)'=action)&attributes(attributes)= text

+menu-button('Save', 'saveItem()')(class="btn-outline-success")
+menu-button('Update', 'updateItem()')(class="btn-outline-primary")
+menu-button('Delete', 'deleteItem()')(class="btn-outline-danger")
HTML render of menu buttons mixin example (Large preview)
Conditionals make it easy to display code blocks and comments based on whether a condition is met or not.
- var day = (new Date()).getDay()

if day == 0
  p We’re closed on Sundays
else if day == 6
  p We’re open from 9AM to 1PM
else
  p We’re open from 9AM to 5PM
HTML render of conditionals example (Large preview)
Iterators such as each and while provide iteration functionality.
ul
  each item in ['Eggs', 'Milk', 'Cheese']
    li= item

- var n = 0
ul
  while n < 4
    li= n++
HTML renders of iterators example (Large preview)
Inline JavaScript can be written in Pug templates as demonstrated in the examples above.
Interpolation is possible and extends to tags and attributes.
- var name = 'Charles'

p Hi! I’m #{name}.
p I’m a #[strong web developer].
a(href=`https://about.me/${name}`) Get to Know Me
HTML render of interpolation example (Large preview)
Filters enable the use of other languages in Pug templates.
For example, you can use Markdown in your Pug templates after installing a JSTransformer Markdown module.
:markdown-it
  # Charles the Web Developer

  ## About
  Charles has been a web developer for 20 years at **Charles and Co Consulting.**
HTML render of filter example (Large preview)
These are just a few features offered by Pug. You can find a more expansive list of features in Pug’s documentation.
How To Use Pug In An Angular App
For both new and pre-existing apps using Angular CLI 6 and above, you will need to install ng-cli-pug-loader. It’s an Angular CLI loader for Pug templates.
For New Components And Projects
Install ng-cli-pug-loader.
ng add ng-cli-pug-loader
Generate your component according to your preferences.
For example, let’s say we’re generating a home page component:
ng g c home --style css -m app
Change the HTML file extension, .html, to the Pug extension, .pug. Since the initially generated file contains HTML, you may choose to delete its contents and start anew with Pug instead. However, HTML can still function in Pug templates, so you can leave it as is.
Change the extension of the template to .pug in the component decorator.
@Component({
  selector: 'app-home',
  templateUrl: './home.component.pug',
  styleUrls: ['./home.component.css']
})
For Existing Components And Projects
Install ng-cli-pug-loader.
ng add ng-cli-pug-loader
Install the html2pug CLI tool. This tool will help you convert your HTML templates to Pug.
npm install -g html2pug
To convert an HTML file to Pug, run:
html2pug -f -c [Pug file path]
Since we’re working with HTML templates and not complete HTML files, we need to pass the -f flag to indicate to html2pug that it should not wrap the templates it generates in html and body tags. The -c flag lets html2pug know that attributes of elements should be separated with commas during conversion. I will cover why this is important below.
Change the extension of the template to .pug in the component decorator as described in the For New Components and Projects section.
Run the server to check that there are no problems with how the Pug template is rendered.
If there are problems, use the HTML template as a reference to figure out what could have caused the problem. This could sometimes be an indenting issue or an unquoted attribute, although rare. Once you are satisfied with how the Pug template is rendered, delete the HTML file.
Things To Consider When Migrating From HTML To Pug Templates
You won’t be able to use inline Pug templates with ng-cli-pug-loader. This only renders Pug files and does not render inline templates defined in component decorators. So all existing templates need to be external files. If you have any inline HTML templates, create external HTML files for them and convert them to Pug using html2pug.
Once converted, you may need to fix templates that use binding and attribute directives. ng-cli-pug-loader requires that bound attribute names in Angular be enclosed in single or double quotes or separated by commas. The easiest way to go about this would be to use the -c flag with html2pug. However, this only fixes the issues with elements that have multiple attributes. For elements with single attributes just use quotes.
A lot of the setup described here can be automated using a task runner or a script or a custom Angular schematic for large scale conversions if you choose to create one. If you have a few templates and would like to do an incremental conversion, it would be better to just convert one file at a time.
Angular Template Language Syntax In Pug Templates
For the most part, Angular template language syntax remains unchanged in a Pug template. However, when it comes to binding and some directives (as described above), you need to use quotes and commas, since (), [], and [()] interfere with the compilation of Pug templates. Here are a few examples:
//- [src], an attribute binding, and [style.border], a style binding, are separated using a comma. Use this approach when you have multiple attributes on the element, where one or more uses binding.
img([src]='itemImageUrl', [style.border]='imageBorder')

//- (click), an event binding, needs to be enclosed in either single or double quotes. Use this approach for elements with just one attribute.
button('(click)'='onSave($event)') Save
Attribute directives like ngClass, ngStyle, and ngModel must be put in quotes. Structural directives like *ngIf, *ngFor, *ngSwitchCase, and *ngSwitchDefault also need to be put in quotes or used with commas. Template reference variables (e.g. #var) do not interfere with Pug template compilation and hence do not need quotes or commas. Template expressions surrounded in {{ }} remain unaffected.
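Put together, the quoting rules above look like this in a template; the element names and expressions are arbitrary examples:

```pug
//- structural directive on a single-attribute element: quote it
ul
  li('*ngFor'='let item of items') {{ item }}

//- bound attribute directives on a multi-attribute element: quote the bound names and separate with commas
input('[(ngModel)]'='searchTerm', '[ngClass]'='inputClass')
```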
Drawbacks And Trade-offs Of Using Pug In Angular Templates
Even though Pug is convenient and improves workflows, there are some drawbacks to using it and some trade-offs that need to be considered when using ng-cli-pug-loader.
Files cannot be included in templates using include unless they end in .partial.pug or .include.pug or are called mixins.pug. In addition to this, template inheritance does not work with ng-cli-pug-loader and as a result, using blocks, prepending, and appending Pug code is not possible despite this being a useful Pug feature.
Pug files have to be created manually as Angular CLI only generates components with HTML templates. You will need to delete the generated HTML file and create a Pug file or just change the HTML file extension, then change the templateUrl in the component decorator. Although this can be automated using a script, a schematic, or a Task Runner, you have to implement the solution.
In larger pre-existing Angular projects, switching from HTML templates to Pug ones involves a lot of work and complexity in some cases. Making the switch will lead to a lot of breaking code that needs to be fixed file by file or automatically using a custom tool. Bindings and some Angular directives in elements need to be quoted or separated with commas.
Developers unfamiliar with Pug have to learn the syntax first before incorporating it into a project. Pug is not just HTML without angle brackets and closing tags and involves a learning curve.
When writing Pug and using its features in Angular templates, ng-cli-pug-loader does not give Pug templates access to the component’s properties. As a result, these properties cannot be used as variables, in conditionals, in iterators, or in inline code. Angular directives and template expressions also do not have access to Pug variables. For example, with Pug variables:
//- app.component.pug
- var shoppingList = ['Eggs', 'Milk', 'Flour']

//- will work
ul
  each item in shoppingList
    li= item

//- will not work because shoppingList is a Pug variable
ul
  li('*ngFor'='let item of shoppingList')
Here’s an example with a property of a component:
//- src/app/app.component.ts
export class AppComponent {
  shoppingList = ['Eggs', 'Milk', 'Flour'];
}

//- app.component.pug

//- will not work because shoppingList is a component property and not a Pug variable
ul
  each item in shoppingList
    li= item

//- will work because shoppingList is a property of the component
ul
  li('*ngFor'='let item of shoppingList')
Lastly, index.html cannot be a Pug template. ng-cli-pug-loader does not support this.
Conclusion
Pug can be an amazing resource to use in Angular apps but it does require some investment to learn and integrate into a new or pre-existing project. If you’re up for the challenge, you can take a look at Pug’s documentation to learn more about its syntax and add it to your projects. Although ng-cli-pug-loader is a great tool, it can be lacking in some areas. To tailor how Pug will work in your project consider creating an Angular schematic that will meet your project’s requirements.
(ra, yk, il)
0 notes
Text
How To Create Better Angular Templates With Pug
About The Author
Zara Cooper is a software developer and technical writer who enjoys sharing what she learns as a developer with others. When she’s got time to spare, she enjoys … More about Zara …
Pug is a template engine that allows you to write cleaner templates with less repetition. In Angular, you can use Pug to write component templates and improve a project’s development workflow. In this article, Zara Cooper explains what Pug is and how you can use it in your Angular app.
As a developer, I appreciate how Angular apps are structured and the many options the Angular CLI makes available to configure them. Components provide an amazing means to structure views, facilitate code reusability, interpolation, data binding, and other business logic for views.
Angular CLI supports multiple built-in CSS preprocessor options for component styling like Sass/SCSS, LESS, and Stylus. However, when it comes to templates, only two options are available: HTML and SVG. This is in spite of many more efficient options such as Pug, Slim, HAML among others being in existence.
In this article, I’ll cover how you — as an Angular developer — can use Pug to write better templates more efficiently. You’ll learn how to install Pug in your Angular apps and transition existing apps that use HTML to use Pug.
Managing Image Breakpoints
A built-in Angular feature called BreakPoint Observer gives us a powerful interface for dealing with responsive images. Read more about a service that allows us to serve, transform and manage images in the cloud. Learn more →
Pug (formerly known as Jade) is a template engine. This means it’s a tool that generates documents from templates that integrate some specified data. In this case, Pug is used to write templates that are compiled into functions that take in data and render HTML documents.
In addition to providing a more streamlined way to write templates, it offers a number of valuable features that go beyond just template writing like mixins that facilitate code reusability, enable embedding of JavaScript code, provide iterators, conditionals, and so on.
Although HTML is universally used by many and works adequately in templates, it is not DRY and can get pretty difficult to read, write, and maintain especially with larger component templates. That’s where Pug comes in. With Pug, your templates become simpler to write and read and you can extend the functionality of your template as an added bonus. In the rest of this article, I’ll walk you through how to use Pug in your Angular component templates.
Why You Should Use Pug
HTML is fundamentally repetitive. For most elements you have to have an opening and closing tag which is not DRY. Not only do you have to write more with HTML, but you also have to read more. With Pug, there are no opening and closing angle brackets and no closing tags. You are therefore writing and reading a lot less code.
For example, here’s an HTML table:
<table> <thead> <tr> <th>Country</th> <th>Capital</th> <th>Population</th> <th>Currency</th> </tr> </thead> <tbody> <tr> <td>Canada</td> <td>Ottawa</td> <td>37.59 million</td> <td>Canadian Dollar</td> </tr> <tr> <td>South Africa</td> <td>Cape Town, Pretoria, Bloemfontein</td> <td>57.78 million</td> <td>South African Rand</td> </tr> <tr> <td>United Kingdom</td> <td>London</td> <td>66.65 million</td> <td>Pound Sterling</td> </tr> </tbody> </table>
This is how that same table looks like in Pug:
table thead tr th Country th Capital(s) th Population th Currency tbody tr td Canada td Ottawa td 37.59 million td Canadian Dollar tr td South Africa td Cape Town, Pretoria, Bloemfontein td 57.78 million td South African Rand tr td United Kingdom td London td 66.65 million td Pound Sterling
Comparing the two versions of the table, Pug looks a lot cleaner than HTML and has better code readability. Although negligible in this small example, you write seven fewer lines in the Pug table than in the HTML table. As you create more templates over time for a project, you end up cumulatively writing less code with Pug.
Beyond the functionality provided by the Angular template language, Pug extends what you can achieve in your templates. With features (such as mixins, text and attribute interpolation, conditionals, iterators, and so on), you can use Pug to solve problems more simply in contrast to writing whole separate components or import dependencies and set up directives to fulfill a requirement.
Some Features Of Pug
Pug offers a wide range of features but what features you can use depends on how you integrate Pug into your project. Here are a few features you might find useful.
Adding external Pug files to a template using include.
Let’s say, for example, that you’d like to have a more succinct template but do not feel the need to create additional components. You can take out sections from a template and put them in partial templates then include them back into the original template.
For example, in this home page component, the ‘About’ and ‘Services’ section are in external files and are included in the home page component.
//- home.component.pug h1 Leone and Sons h2 Photography Studio include partials/about.partial.pug include partials/services.partial.pug
//- about.partial.pug h2 About our business p Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
//- services.partial.pug h2 Services we offer P Our services include: ul li Headshots li Corporate Event Photography
HTML render of included partial templates example (Large preview)
Reusing code blocks using mixins.
For example, let’s say you wanted to reuse a code block to create some buttons. You’d reuse that block of code using a mixin.
mixin menu-button(text, action) button.btn.btn-sm.m-1(‘(click)’=action)&attributes(attributes)= text +menu-button('Save', 'saveItem()')(class="btn-outline-success") +menu-button('Update', 'updateItem()')(class="btn-outline-primary") +menu-button('Delete', 'deleteItem()')(class="btn-outline-danger")
HTML render of menu buttons mixin example (Large preview)
Conditionals make it easy to display code blocks and comments based on whether a condition is met or not.
- var day = (new Date()).getDay() if day == 0 p We’re closed on Sundays else if day == 6 p We’re open from 9AM to 1PM else p We’re open from 9AM to 5PM
HTML render of conditionals example (Large preview)
Iterators such as each and while provide iteration functionality.
ul each item in ['Eggs', 'Milk', 'Cheese'] li= item ul while n
(Large preview)
HTML renders of iterators example (Large preview)
Inline JavaScript can be written in Pug templates as demonstrated in the examples above.
Interpolation is possible and extends to tags and attributes.
- var name = 'Charles' p Hi! I’m #{name}. p I’m a #[strong web developer]. a(href='https://about.me/${name}') Get to Know Me
HTML render of interpolation example (Large preview)
Filters enable the use of other languages in Pug templates.
For example, you can use Markdown in your Pug templates after installing a JSTransformer Markdown module.
:markdown-it # Charles the Web Developer  ## About Charles has been a web developer for 20 years at **Charles and Co Consulting.**
HTML render of filter example (Large preview)
These are just a few features offered by Pug. You can find a more expansive list of features in Pug’s documentation.
How To Use Pug In An Angular App
For both new and pre-existing apps using Angular CLI 6 and above, you will need to install ng-cli-pug-loader. It’s an Angular CLI loader for Pug templates.
For New Components And Projects
Install ng-cli-pug-loader.
ng add ng-cli-pug-loader
Generate your component according to your preferences.
For example, let’s say we’re generating a home page component:
ng g c home --style css -m app
Change the HTML file extension, .html to a Pug extension, .pug. Since the initial generated file contains HTML, you may choose to delete its contents and start anew with Pug instead. However, HTML can still function in Pug templates so you can leave it as is.
Change the extension of the template to .pug in the component decorator.
@Component({ selector: 'app-component', templateUrl: './home.component.pug', styles: ['./home.component.css'] })
For Existing Components And Projects
Install ng-cli-pug-loader.
ng add ng-cli-pug-loader
Install the html2pug CLI tool. This tool will help you convert your HTML templates to Pug.
npm install -g html2pug
To convert an HTML file to Pug, run:
html2pug -f -c [Pug file path]
Since we’re working with HTML templates and not complete HTML files, we need to pass the -f flag to indicate to html2pug that it should not wrap the templates it generates in html and body tags. The -c flag tells html2pug to separate element attributes with commas during conversion. I will cover why this is important below.
Change the extension of the template to .pug in the component decorator as described in the For New Components and Projects section.
Run the server to check that there are no problems with how the Pug template is rendered.
If there are problems, use the HTML template as a reference to figure out what could have caused the problem. This could sometimes be an indenting issue or an unquoted attribute, although rare. Once you are satisfied with how the Pug template is rendered, delete the HTML file.
Things To Consider When Migrating From HTML To Pug Templates
You won’t be able to use inline Pug templates with ng-cli-pug-loader. It only renders Pug files and does not render inline templates defined in component decorators, so all templates need to be in external files. If you have any inline HTML templates, create external HTML files for them and convert them to Pug using html2pug.
Once converted, you may need to fix templates that use binding and attribute directives. ng-cli-pug-loader requires that bound attribute names in Angular be enclosed in single or double quotes or separated by commas. The easiest way to go about this would be to use the -c flag with html2pug. However, this only fixes the issues with elements that have multiple attributes. For elements with single attributes just use quotes.
A lot of the setup described here can be automated using a task runner, a script, or a custom Angular schematic for large-scale conversions, if you choose to create one. If you have only a few templates and would like to do an incremental conversion, it is better to convert one file at a time.
Angular Template Language Syntax In Pug Templates
For the most part, Angular template language syntax remains unchanged in a Pug template. However, when it comes to binding and some directives (as described above), you need to use quotes and commas, since (), [], and [()] interfere with the compilation of Pug templates. Here are a few examples:
//- [src], an attribute binding and [style.border], a style binding are
//- separated using a comma. Use this approach when you have multiple
//- attributes for the element, where one or more is using binding.
img([src]='itemImageUrl', [style.border]='imageBorder')

//- (click), an event binding needs to be enclosed in either single or double
//- quotes. Use this approach for elements with just one attribute.
button('(click)'='onSave($event)') Save
Attribute directives like ngClass, ngStyle, and ngModel must be put in quotes. Structural directives like *ngIf, *ngFor, *ngSwitchCase, and *ngSwitchDefault also need to be put in quotes or used with commas. Template reference variables (e.g. #var) do not interfere with Pug template compilation and hence need neither quotes nor commas. Template expressions surrounded in {{ }} remain unaffected.
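Putting those rules together in one template (isLoggedIn, user, and username are illustrative component members):

```pug
//- a structural directive, quoted, as the element’s only attribute
p('*ngIf'='isLoggedIn') Welcome back!
//- an attribute directive plus a plain attribute, quoted and comma-separated
input('[(ngModel)]'='user.name', name='username')
//- interpolation needs no special treatment
span {{ user.name }}
```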
Drawbacks And Trade-offs Of Using Pug In Angular Templates
Even though Pug is convenient and improves workflows, there are some drawbacks to using it and some trade-offs that need to be considered when using ng-cli-pug-loader.
Files cannot be included in templates using include unless they end in .partial.pug or .include.pug or are called mixins.pug. In addition to this, template inheritance does not work with ng-cli-pug-loader and as a result, using blocks, prepending, and appending Pug code is not possible despite this being a useful Pug feature.
Pug files have to be created manually as Angular CLI only generates components with HTML templates. You will need to delete the generated HTML file and create a Pug file or just change the HTML file extension, then change the templateUrl in the component decorator. Although this can be automated using a script, a schematic, or a Task Runner, you have to implement the solution.
In larger pre-existing Angular projects, switching from HTML templates to Pug ones involves a lot of work and complexity in some cases. Making the switch will lead to a lot of breaking code that needs to be fixed file by file or automatically using a custom tool. Bindings and some Angular directives in elements need to be quoted or separated with commas.
Developers unfamiliar with Pug have to learn the syntax first before incorporating it into a project. Pug is not just HTML without angle brackets and closing tags and involves a learning curve.
When writing Pug and using its features in Angular templates, note that ng-cli-pug-loader does not give Pug templates access to the component’s properties. As a result, these properties cannot be used as Pug variables, in conditionals, in iterators, or in inline code. Likewise, Angular directives and template expressions do not have access to Pug variables. For example, with Pug variables:
//- app.component.pug
- var shoppingList = ['Eggs', 'Milk', 'Flour']

//- will work
ul
  each item in shoppingList
    li= item

//- will not work because shoppingList is a Pug variable
ul
  li(*ngFor="let item of shoppingList")
Here’s an example with a property of a component:
// src/app/app.component.ts
export class AppComponent {
  shoppingList = ['Eggs', 'Milk', 'Flour'];
}
//- app.component.pug

//- will not work because shoppingList is a component property and not a Pug variable
ul
  each item in shoppingList
    li= item

//- will work because shoppingList is a property of the component
ul
  li(*ngFor="let item of shoppingList")
Lastly, index.html cannot be a Pug template. ng-cli-pug-loader does not support this.
Conclusion
Pug can be an amazing resource to use in Angular apps but it does require some investment to learn and integrate into a new or pre-existing project. If you’re up for the challenge, you can take a look at Pug’s documentation to learn more about its syntax and add it to your projects. Although ng-cli-pug-loader is a great tool, it can be lacking in some areas. To tailor how Pug will work in your project consider creating an Angular schematic that will meet your project’s requirements.
(ra, yk, il)
source http://www.scpie.org/how-to-create-better-angular-templates-with-pug/ source https://scpie1.blogspot.com/2020/05/how-to-create-better-angular-templates.html
Text
How To Create Better Angular Templates With Pug
About The Author
Zara Cooper is a software developer and technical writer who enjoys sharing what she learns as a developer with others. When she’s got time to spare, she enjoys … More about Zara …
Pug is a template engine that allows you to write cleaner templates with less repetition. In Angular, you can use Pug to write component templates and improve a project’s development workflow. In this article, Zara Cooper explains what Pug is and how you can use it in your Angular app.
As a developer, I appreciate how Angular apps are structured and the many options the Angular CLI makes available to configure them. Components provide an amazing means to structure views, facilitate code reusability, interpolation, data binding, and other business logic for views.
Angular CLI supports multiple built-in CSS preprocessor options for component styling like Sass/SCSS, LESS, and Stylus. However, when it comes to templates, only two options are available: HTML and SVG. This is in spite of many more efficient options such as Pug, Slim, HAML among others being in existence.
In this article, I’ll cover how you — as an Angular developer — can use Pug to write better templates more efficiently. You’ll learn how to install Pug in your Angular apps and transition existing apps that use HTML to use Pug.
Managing Image Breakpoints
A built-in Angular feature called BreakPoint Observer gives us a powerful interface for dealing with responsive images. Read more about a service that allows us to serve, transform and manage images in the cloud. Learn more →
Pug (formerly known as Jade) is a template engine. This means it’s a tool that generates documents from templates that integrate some specified data. In this case, Pug is used to write templates that are compiled into functions that take in data and render HTML documents.
In addition to providing a more streamlined way to write templates, it offers a number of valuable features that go beyond just template writing like mixins that facilitate code reusability, enable embedding of JavaScript code, provide iterators, conditionals, and so on.
Although HTML is universally used by many and works adequately in templates, it is not DRY and can get pretty difficult to read, write, and maintain especially with larger component templates. That’s where Pug comes in. With Pug, your templates become simpler to write and read and you can extend the functionality of your template as an added bonus. In the rest of this article, I’ll walk you through how to use Pug in your Angular component templates.
Why You Should Use Pug
HTML is fundamentally repetitive. For most elements you have to have an opening and closing tag which is not DRY. Not only do you have to write more with HTML, but you also have to read more. With Pug, there are no opening and closing angle brackets and no closing tags. You are therefore writing and reading a lot less code.
For example, here’s an HTML table:
<table> <thead> <tr> <th>Country</th> <th>Capital</th> <th>Population</th> <th>Currency</th> </tr> </thead> <tbody> <tr> <td>Canada</td> <td>Ottawa</td> <td>37.59 million</td> <td>Canadian Dollar</td> </tr> <tr> <td>South Africa</td> <td>Cape Town, Pretoria, Bloemfontein</td> <td>57.78 million</td> <td>South African Rand</td> </tr> <tr> <td>United Kingdom</td> <td>London</td> <td>66.65 million</td> <td>Pound Sterling</td> </tr> </tbody> </table>
This is how that same table looks like in Pug:
table thead tr th Country th Capital(s) th Population th Currency tbody tr td Canada td Ottawa td 37.59 million td Canadian Dollar tr td South Africa td Cape Town, Pretoria, Bloemfontein td 57.78 million td South African Rand tr td United Kingdom td London td 66.65 million td Pound Sterling
Comparing the two versions of the table, Pug looks a lot cleaner than HTML and has better code readability. Although negligible in this small example, you write seven fewer lines in the Pug table than in the HTML table. As you create more templates over time for a project, you end up cumulatively writing less code with Pug.
Beyond the functionality provided by the Angular template language, Pug extends what you can achieve in your templates. With features (such as mixins, text and attribute interpolation, conditionals, iterators, and so on), you can use Pug to solve problems more simply in contrast to writing whole separate components or import dependencies and set up directives to fulfill a requirement.
Some Features Of Pug
Pug offers a wide range of features but what features you can use depends on how you integrate Pug into your project. Here are a few features you might find useful.
Adding external Pug files to a template using include.
Let’s say, for example, that you’d like to have a more succinct template but do not feel the need to create additional components. You can take out sections from a template and put them in partial templates then include them back into the original template.
For example, in this home page component, the ‘About’ and ‘Services’ section are in external files and are included in the home page component.
//- home.component.pug h1 Leone and Sons h2 Photography Studio include partials/about.partial.pug include partials/services.partial.pug
//- about.partial.pug h2 About our business p Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
//- services.partial.pug h2 Services we offer P Our services include: ul li Headshots li Corporate Event Photography
HTML render of included partial templates example (Large preview)
Reusing code blocks using mixins.
For example, let’s say you wanted to reuse a code block to create some buttons. You’d reuse that block of code using a mixin.
mixin menu-button(text, action) button.btn.btn-sm.m-1(‘(click)’=action)&attributes(attributes)= text +menu-button('Save', 'saveItem()')(class="btn-outline-success") +menu-button('Update', 'updateItem()')(class="btn-outline-primary") +menu-button('Delete', 'deleteItem()')(class="btn-outline-danger")
HTML render of menu buttons mixin example (Large preview)
Conditionals make it easy to display code blocks and comments based on whether a condition is met or not.
- var day = (new Date()).getDay() if day == 0 p We’re closed on Sundays else if day == 6 p We’re open from 9AM to 1PM else p We’re open from 9AM to 5PM
HTML render of conditionals example (Large preview)
Iterators such as each and while provide iteration functionality.
ul each item in ['Eggs', 'Milk', 'Cheese'] li= item ul while n
(Large preview)
HTML renders of iterators example (Large preview)
Inline JavaScript can be written in Pug templates as demonstrated in the examples above.
Interpolation is possible and extends to tags and attributes.
- var name = 'Charles' p Hi! I’m #{name}. p I’m a #[strong web developer]. a(href='https://about.me/${name}') Get to Know Me
HTML render of interpolation example (Large preview)
Filters enable the use of other languages in Pug templates.
For example, you can use Markdown in your Pug templates after installing a JSTransformer Markdown module.
:markdown-it # Charles the Web Developer  ## About Charles has been a web developer for 20 years at **Charles and Co Consulting.**
HTML render of filter example (Large preview)
These are just a few features offered by Pug. You can find a more expansive list of features in Pug’s documentation.
How To Use Pug In An Angular App
For both new and pre-existing apps using Angular CLI 6 and above, you will need to install ng-cli-pug-loader. It’s an Angular CLI loader for Pug templates.
For New Components And Projects
Install ng-cli-pug-loader.
ng add ng-cli-pug-loader
Generate your component according to your preferences.
For example, let’s say we’re generating a home page component:
ng g c home --style css -m app
Change the HTML file extension, .html to a Pug extension, .pug. Since the initial generated file contains HTML, you may choose to delete its contents and start anew with Pug instead. However, HTML can still function in Pug templates so you can leave it as is.
Change the extension of the template to .pug in the component decorator.
@Component({ selector: 'app-component', templateUrl: './home.component.pug', styles: ['./home.component.css'] })
For Existing Components And Projects
Install ng-cli-pug-loader.
ng add ng-cli-pug-loader
Install the html2pug CLI tool. This tool will help you convert your HTML templates to Pug.
npm install -g html2pug
To convert a HTML file to Pug, run:
html2pug -f -c [Pug file path]
Since we’re working with HTML templates and not complete HTML files, we need to pass the -f to indicate to html2pug that it should not wrap the templates it generates in html and body tags. The -c flag lets html2pug know that attributes of elements should be separated with commas during conversion. I will cover why this is important below.
Change the extension of the template to .pug in the component decorator as described in the For New Components and Projects section.
Run the server to check that there are no problems with how the Pug template is rendered.
If there are problems, use the HTML template as a reference to figure out what could have caused the problem. This could sometimes be an indenting issue or an unquoted attribute, although rare. Once you are satisfied with how the Pug template is rendered, delete the HTML file.
Things To Consider When Migrating From HTML To Pug Templates
You won’t be able to use inline Pug templates with ng-cli-pug-loader. This only renders Pug files and does not render inline templates defined in component decorators. So all existing templates need to be external files. If you have any inline HTML templates, create external HTML files for them and convert them to Pug using html2pug.
Once converted, you may need to fix templates that use binding and attribute directives. ng-cli-pug-loader requires that bound attribute names in Angular be enclosed in single or double quotes or separated by commas. The easiest way to go about this would be to use the -c flag with html2pug. However, this only fixes the issues with elements that have multiple attributes. For elements with single attributes just use quotes.
A lot of the setup described here can be automated using a task runner or a script or a custom Angular schematic for large scale conversions if you choose to create one. If you have a few templates and would like to do an incremental conversion, it would be better to just convert one file at a time.
Angular Template Language Syntax In Pug Templates
For the most part, Angular template language syntax remains unchanged in a Pug template, however, when it comes to binding and some directives (as described above), you need to use quotes and commas since (), [], and [()] interfere with the compilation of Pug templates. Here are a few examples:
//- [src], an attribute binding, and [style.border], a style binding, are
//- separated using a comma. Use this approach when you have multiple
//- attributes for the element, where one or more is using binding.
img([src]='itemImageUrl', [style.border]='imageBorder')

//- (click), an event binding, needs to be enclosed in either single or
//- double quotes. Use this approach for elements with just one attribute.
button('(click)'='onSave($event)') Save
Attribute directives like ngClass, ngStyle, and ngModel must be put in quotes. Structural directives like *ngIf, *ngFor, *ngSwitchCase, and *ngSwitchDefault also need to be put in quotes or used with commas. Template reference variables (e.g. #var) do not interfere with Pug template compilation and hence do not need quotes or commas. Template expressions surrounded in {{ }} remain unaffected.
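Pulling those rules together, a single illustrative snippet might look like this (the property and variable names are invented for the example):

```pug
//- attribute directive, quoted
p('[ngClass]'='{ active: isActive }') Status

//- structural directive on a single-attribute element, quoted
li('*ngFor'='let item of items') {{ item }}

//- template reference variable, no quotes needed
input(#nameInput, type='text')
```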
Drawbacks And Trade-offs Of Using Pug In Angular Templates
Even though Pug is convenient and improves workflows, there are some drawbacks to using it and some trade-offs that need to be considered when using ng-cli-pug-loader.
Files cannot be included in templates using include unless they end in .partial.pug or .include.pug or are called mixins.pug. In addition to this, template inheritance does not work with ng-cli-pug-loader and as a result, using blocks, prepending, and appending Pug code is not possible despite this being a useful Pug feature.
Pug files have to be created manually as Angular CLI only generates components with HTML templates. You will need to delete the generated HTML file and create a Pug file or just change the HTML file extension, then change the templateUrl in the component decorator. Although this can be automated using a script, a schematic, or a Task Runner, you have to implement the solution.
In larger pre-existing Angular projects, switching from HTML templates to Pug ones involves a lot of work and complexity in some cases. Making the switch will lead to a lot of breaking code that needs to be fixed file by file or automatically using a custom tool. Bindings and some Angular directives in elements need to be quoted or separated with commas.
Developers unfamiliar with Pug have to learn the syntax first before incorporating it into a project. Pug is not just HTML without angle brackets and closing tags and involves a learning curve.
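That learning curve is real but short — the core trade is that indentation replaces angle brackets and closing tags. A small, generic sketch:

```pug
//- Pug source
nav.main-nav
  ul
    li: a(href='/') Home
    li: a(href='/about') About

//- renders roughly as:
//- <nav class="main-nav"><ul><li><a href="/">Home</a></li><li><a href="/about">About</a></li></ul></nav>
```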
When writing Pug and using its features in Angular templates ng-cli-pug-loader does not give Pug templates access to the component’s properties. As a result, these properties cannot be used as variables, in conditionals, in iterators, and in inline code. Angular directives and template expressions also do not have access to Pug variables. For example, with Pug variables:
//- app.component.pug
- var shoppingList = ['Eggs', 'Milk', 'Flour']

//- will work
ul
  each item in shoppingList
    li= item

//- will not work because shoppingList is a Pug variable
ul
  li('*ngFor'='let item of shoppingList')
Here’s an example with a property of a component:
// src/app/app.component.ts
export class AppComponent {
  shoppingList = ['Eggs', 'Milk', 'Flour'];
}
//- app.component.pug

//- will not work because shoppingList is a component property and not a Pug variable
ul
  each item in shoppingList
    li= item

//- will work because shoppingList is a property of the component
ul
  li('*ngFor'='let item of shoppingList')
Lastly, index.html cannot be a Pug template. ng-cli-pug-loader does not support this.
Conclusion
Pug can be an amazing resource to use in Angular apps but it does require some investment to learn and integrate into a new or pre-existing project. If you’re up for the challenge, you can take a look at Pug’s documentation to learn more about its syntax and add it to your projects. Although ng-cli-pug-loader is a great tool, it can be lacking in some areas. To tailor how Pug will work in your project consider creating an Angular schematic that will meet your project’s requirements.
source http://www.scpie.org/how-to-create-better-angular-templates-with-pug/
Augmented Reality in an Ionic/Angular PWA
Recently, I published a tutorial on using ThreeJS with Ionic to embed virtual reality content into a mobile application using WebVR. This tutorial is going to be somewhat similar, we will still be making use of ThreeJS and WebGL, but we will be creating an augmented reality experience in an Ionic/Angular application.
If you are unfamiliar with technologies like WebGL (which uses the GPU of a device to render 3D graphics on the web) and ThreeJS (a framework that makes using WebGL easier) it might be beneficial to read the previous article about WebVR first.
Introduction to Augmented Reality
Put simply, augmented reality uses technology to “virtually” change a real physical space. Unlike virtual reality which throws you into a fully artificial/simulated world, augmented reality adds to an existing space. Although an augmented reality experience isn’t limited to just the use of a camera, most AR projects available today involve using your device’s camera to view a physical space, and additional objects will be added to that space on the screen – your device acts as a “window” into the augmented space.
This technology is still in its infancy, and I don’t think this is something people are really using in their day-to-day lives, but there are a few interesting examples out there already. Many people would have seen the use of augmented reality to add Pokémon to a real physical space in Pokémon GO which is a bit of fun, but then there are also projects like Google Lens which allows you find out more information about particular objects by using your device’s camera.
Web vs Native Augmented Reality
Apple are working on developing ARKit for native iOS applications, and Google are working on ARCore for native Android applications which will allow developers to provide augmented reality experiences in iOS and Android applications.
If you are building your Ionic applications with Cordova or Capacitor, then you can still access these Native APIs. Although comparisons are often framed as “native or hybrid”, it is important to note that when you build an Ionic application for iOS or Android it is a native application like any other – the difference is that “hybrid” applications use an embedded browser to display the web-powered user interface for the application. An application built with Ionic still has all the native tools that any other application has access to available to it, and that includes the ability to launch native views that use ARKit or ARCore.
We are talking about embedding dynamic 3D objects into a physical space in real-time here, so for experiences on the higher end of the spectrum, you are probably going to want to use these Native APIs. However, it is still entirely possible to create an augmented reality experience purely using web tech (just good old HTML, CSS, and JavaScript) – that includes integration with the camera and the rendering of 3D objects.
A downside of using Native APIs to provide an augmented reality experience is that it is platform specific. One of the big draws of Ionic and the web is that you can code an application once that will work everywhere. When you deviate away from the web and into platform-specific native integrations, you lose some of that portability.
The other benefit to using a web-based AR approach is that if you want to integrate the experience into the interface of an Ionic application, as opposed to just launching a full-screen native view, you will more easily be able to integrate a web-based AR approach with Ionic’s web-based UI. It is possible to mix native views with web UI, but unless it is just an overlay it can get messy (which is why I generally favour the web-based Google Maps JavaScript SDK over the Native SDK).
Whilst you’re probably not going to be building something like this using Web AR:
(embedded YouTube video)
(although to be fair, this could be possible with WebAR for all I know)
There are still use cases for a fully web-based AR experience, and since this tech is still in its infancy this is sure to grow in the future.
Building an Augmented Reality Experience in Ionic
In this tutorial, we are going to walk through an example of projecting a 3D ThreeJS scene built with A-Frame right into real life through your device’s camera. This will run completely through the web. We will be able to deploy this application as a PWA (Progressive Web Application) and access it directly through a device’s web browser to activate the augmented reality functionality (assuming the device has a camera, of course).
We will be using a package called AR.js which was created by Jerome Etienne, which makes it absurdly easy to get an augmented reality experience up and running. You can literally just dump this 10 line example from the documentation into a web page:
<!doctype HTML>
<html>
  <script src="https://aframe.io/releases/0.6.1/aframe.min.js"></script>
  <script src="https://cdn.rawgit.com/jeromeetienne/AR.js/1.5.0/aframe/build/aframe-ar.js"></script>
  <body style='margin : 0px; overflow: hidden;'>
    <a-scene embedded arjs>
      <a-marker preset="hiro">
        <a-box position='0 0.5 0' material='color: black;'></a-box>
      </a-marker>
      <a-entity camera></a-entity>
    </a-scene>
  </body>
</html>
and you have a working demo. This demo (and the example we will be building) relies on using a marker image like this. You simply point your camera at the marker image and the 3D object or objects will be projected there. You can just display the image on your computer, phone, or you could print it out on paper if you like.
What we will be focusing on in this tutorial is how to get a similar example working well in an Ionic/Angular environment. When we are done, we will have something that looks like this:
You can view this demo directly yourself by clicking here. You don’t even need to be on your phone, if your computer has a webcam then it should run through that as well.
Create the 3D Scene
We can create the 3D scene that we want to display in augmented reality using A-Frame – which is basically a framework that makes using ThreeJS easier (which is a framework that makes using WebGL easier). A-Frame allows us to use simple HTML syntax to embed 3D objects into a scene. I will likely post a more in-depth tutorial about A-Frame in the future, but for now, we are just going to use it for a simple example.
We can trigger the behaviour of AR.js by adding the arjs attribute to the A-Frame scene, but it is important to note that it will attach its functionality to the <body> tag of the page. This makes it a little difficult to just add an A-Frame scene to one of the page’s templates in our Ionic/Angular application because it isn’t going to play nicely with the rest of the application.
This makes using an <iframe> an attractive option because we can just load our scene directly into the <iframe> and we can embed the frame wherever we need it in the application. We can just create a standard HTML file and save it as a local asset for the application, then we can just load that directly into the IFrame. We are going to create an example that is almost identical to the demo code – we are just going to tweak a couple of things so we have some more interesting objects.
Save the following file as aframe-ar.html in your assets folder
<!doctype HTML>
<html>
  <script src="https://aframe.io/releases/0.6.1/aframe.min.js"></script>
  <script src="https://cdn.rawgit.com/jeromeetienne/AR.js/1.5.0/aframe/build/aframe-ar.js"></script>
  <body style='margin : 0px; overflow: hidden;'>
    <a-scene embedded arjs>
      <a-marker preset="hiro">
        <a-box position="-1 0.5 0" rotation="0 45 0" color="#4CC3D9"></a-box>
        <a-plane position="0 0 0" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      </a-marker>
      <a-entity camera></a-entity>
    </a-scene>
  </body>
</html>
This will create a simple 3D scene with a plane positioned at the bottom, and a box that is sitting on top of it. The AR functionality is completely self-contained in this frame, so there is no need to install or make any modifications to the Ionic/Angular application. All we will need to do is embed the IFrame somewhere.
Add the IFrame
You can embed the IFrame anywhere you like in your Ionic/Angular application (just make sure that you supply your static .html file from the assets folder as the src for the IFrame), but you will probably want to add a few styles to make it display more nicely (just add this to whatever component you are displaying the frame in):
iframe {
  position: absolute;
  width: 100%;
  height: 100%;
  border: none;
}
For the example application, I created an ARLauncherPage that I launched as a modal to trigger the AR functionality:
ar-launcher.page.html
<ion-header>
  <ion-toolbar>
    <ion-title>ARLauncher</ion-title>
    <ion-buttons slot="end">
      <ion-button (click)="close()">
        <ion-icon name="close" slot="icon-only"></ion-icon>
      </ion-button>
    </ion-buttons>
  </ion-toolbar>
</ion-header>

<ion-content>
  <iframe src="/assets/aframe-ar.html"></iframe>
</ion-content>
ar-launcher.page.ts
import { Component, OnInit, ViewEncapsulation } from '@angular/core';
import { ModalController } from '@ionic/angular';

@Component({
  selector: 'app-ar-launcher',
  templateUrl: './ar-launcher.page.html',
  styleUrls: ['./ar-launcher.page.scss'],
  encapsulation: ViewEncapsulation.None
})
export class ARLauncherPage implements OnInit {

  constructor(private modalCtrl: ModalController) { }

  ngOnInit() { }

  close() {
    this.modalCtrl.dismiss();
  }
}
Deploy to the Web
The cool thing about AR.js is that it runs entirely with web tech, there is absolutely no native integration required which means we can just run it directly on the web (you could also deploy it as a native application if you wanted to).
If you would like to set the application up as a PWA (you don’t have to) you can follow the steps in this tutorial. Once you are ready to host it on the web, you can follow this tutorial to get it set up with Firebase Hosting (or you could host it wherever you prefer).
Summary
With your application hosted, all you need to do is go to the URL, launch the page that contains the AR functionality, and point your camera at the marker image.
Whilst AR.js – and augmented reality on the web in general – is still under development, it is exciting to see the kinds of things we are already able to do with the web today.
via joshmorony – Learn Ionic & Build Mobile Apps with Web Tech https://ift.tt/2MaPYbH
GLA Supplements Show Promise For Fat Burning
Every person could follow Hollywood for the current diet plan patterns, however our company are actually rather certain Hollywood complies with Victoria Beckham. I'm finding the diet regimen edge from that okay actually, have been adhering to great stuff a lot of the moment so I delight in with that said, and also I've discovered I have actually been obtaining fuller quicker. The Mayonnaise diet plan's concept is actually that you can eat method a lot more however absorb fewer calories as a result of your major food items teams being actually fruit, grains and vegetables. The Revelation: Yet another web site co-founded through a hubby-and-wife group, Paleo Pornography is actually home to numerous drool-worthy pictures and also dishes, alongside a listing of Paleo-friendly bistros as well as a handy 'Is it Paleo?' food manual. GH besides marketing muscle mass development additionally helps the body immune system, markets lipolysis which. is weight loss, and reduces the levels of tension hormones that could cause you to put on weight. Undoubtedly eating eggs alone is actually not a well-balanced technique to reduce weight as well as the severe variation of this diet is very harmful for health. Divide meat uniformly amongst 4 bowls and also put soup right into each dish, garnish with one piece of chili pepper and also provide. Along with a historic portion-controlled diet, Dr. Parker provides the planet's 1st low-carbohydrate Mediterranean diet: the Ketogenic Mediterranean Diet plan. And now that I know that I must change my diet regimen and exerice more frequently compared to I carry out ... these little bit of modifications will aid in the long run. The 3 intervention teams are actually 1) Medi diet regimen plus extra pure olive oil, 2) Medi diet plan plus extra tree almonds, as well as 3) low-fat American Soul Organization diet. 
This was actually a looking for from the research - the ordinary weight management along with the reduced carb diet regimen was actually 1.9 kilograms in 6 days; the typical fat loss with the slim diet regimen was actually 1.3 kg in 6 times - that is actually a difference from 46% if you desire to play the misleading relative amounts once more. The test food was actually white colored bagel (adjustable quantities), TWENTY g from butter, and also 200 g of extract. For the research study in the 'similar info' matching up low carb to low fat - just before individuals assume this means that you ought to throw away the reduced carb diet regimen. I strongly believe that if you teach as difficult as Matt carries out, then you will definitely possess the muscle mass as well as for that reason the improved metabolism to clear any kind of food you consume, even if that is actually a junk food thing like pizza. Within a hours from reading his theories about The Final Supper, deprivation and also his experiments and reviews with typically thin" individuals I really felt s SIGNIFICANT launch from worry, food obsession dismay and all the food crap travel luggage I 'd been actually taught with. . In the diet regimen sense, possibly a healthy-minded vegetarianism that possesses values as its own manner can be thought of even more like religious belief that needs to be respected regardless of whether you do not care about that on your own. It seems to pay attention to meat the uncloven unguis creature as meals or even fish versus poultry. Our experts've seen a variety of folks eating these diet beverages found along with serious irregular bowel movements calling for manual extraction. This would press the upper limit of human healthy protein endurance - too expensive for the weight upkeep phase and also reliant bring about an improvement in mood. 
Multiply your body weight by 12, if you are a male, as well as through 10 if you are actually a woman to recognize the amount of calorit's you need in a time to KEEP your weight. It is actually if the diet regimen sounds also great to be actually accurate. There is no cure-all, workout, or even strategy. He is actually the couathor, together with Adam Campbell, from Male's Wellness TNT Diet regimen: The Eruptive New Planning to Burst Excess fat, Build Muscular tissue, and Obtain Well-balanced in 12 Weeks (Rodale, 2007). If you've reduced weight and also started working out but your blood pressure is actually still in the high usual to Phase 1 hypertension variety (130-159 over 85-99), you might manage to carry that down some more by obtaining more from the trace element potassium, magnesium and calcium. The amount from healthy protein most likely stopped him off losing additional body weight as well as ins off his belly. Weight management was applauded, while weight increase meant I must stay a lot longer at the gym. I am merely uncertain that they cause notable, long-lasting weight-loss for most of individuals since there is actually no trusted, steady documentation to present this holds true. Crowe claims you must ensure you know exactly what you are actually purchasing as other protein and power bars are actually very likely to be higher in glucose and kilojoules and also not appropriate if you are actually aiming to drop weight. Matt additionally raised his workout level to 60 minutes a day varying in between cardio and also weightlifting. The earliest were actually a gent called Bob Jeffrey at the Educational institution of Minnesota talked to volunteers to put a sizable amount from loan of their own in danger and also claimed if you don't burn fat you will certainly lose all this money. Right now I need to loose weight yet I won't get any longer manual, tablet. 
there is actually obvious so as opposed to you go lady, I will certainly mention you go gals given that you may additionally be successful! Several of these recommendations I really love, like not eating well-balanced meals given that it's healthy and balanced but considering that you have various other engaging factors that are very important to you (might I advise taste?), and modifying just how you think about tempting foods. The investigators' objective for this group from over weight people was actually for reduction from 7% from physical body weight through diet regimen, exercise, and routine therapy sessions. Information showed the diet just group dropped 6% from preliminary body system weight, however cannot keep 5% weight-loss after an additional 6 months. Come back to regular blood sugar level checks when your diet plan or workout regimen changes. Enjoyable fact: once I decided to carry out specifically what I wished along with my body system as well as exactly how I supplied that, I possessed a lot opportunity to accomplish factors besides plan my diet regimen as well as bother with my weight all day that my life came to be filled up with awesomeness (like creating this blog, obtaining published in a journal and also doing a podcast ). My life is TECHNIQUE more meeting now compared to when I complied with the body-police policies. If you carry out have a body system mass mark in the obese assortment, speak with a general practitioner or a dietician or even other healthcare qualified regarding incredibly low energy diet plan as a choice. This additionally assists to boost the look of dimply skin - yet another concern that can easily occur after losing a lot of body weight. Regardless, http://blogpourhomme.fr/gouttes-hommes-hammer-of-thor-avis-prix/ seems that a very-low-carb diet could be among the best dietary techniques to nonalcoholic fatty liver disease. Even those that saw to it to add workout and diet also cannot view any type of end results.
The various other issue is actually that folks may do a lot of damage with their diet that is practically inconceivable to make up through exercise, claims Johnson. I presume all physical bodies are actually amazing - thin, fat deposits, impaired, fan, shapely, angular, apples, pears, carrots, oatmeal (what, thin folks can't possess food items body-descriptors?). They possessed suggestions for me to drop weight and also one was actually certainly not to cut calories that considerably, in order to get enough nutrients and also not to enter famine method. I accept, I have actually only begun my new eating habits (I refuse to phone it a diet regimen - sounds short-lived) as well as I' v dropped concerning 1.5 pounds a week ... really well-balanced.
Our 10 Favorite Open Source Resources From 2017
So many new projects are released every year and it’s tough to keep track. This is great for developers who benefit from the open source community. But it makes searching a lot more difficult.
I’ve scoured GitHub, organizing the best projects I could find that were released in 2017. These are my personal favorites and they’re likely to be around for quite a while.
Note that these are projects that were originally created during 2017. They each show tons of value and potential for growth. There may be other existing projects that grew a lot over 2017, but I’m hoping to focus on newer resources that have been gaining traction.
1. Vivify
The Vivify CSS library was first published to GitHub in late August 2017. It’s been updated a few times but the core goal of the library is pretty clear: awesome CSS effects.
Have a look at the current homepage and see what you think. It works much like the Animate.css library – except the features are somewhat limited. Yet, they also feel easier to customize.
There are a bunch of custom animations in here that I’ve never seen anywhere else. Things like paper folding animations, rolling out with fades and fast dives/swoops from all directions.
One of the best new libraries to use for modern CSS3 animations.
2. jQuery Steps
With the right plugins you can extend your forms with a bunch of handy UX features. Some of these may be aesthetic-only, while others can radically improve your form’s usability.
The jQuery Steps plugin is one such example. It was first released on April 19th and is perhaps the coolest progress step plugin out there.
It’s super lightweight and runs with just a few lines of JS code (plus a CSS stylesheet).
Take a look at their GitHub repo for a full setup guide. It’s a lot easier than you might think and the final result looks fantastic.
Plus, the plugin comes with several options to customize the progress bar’s design.
3. Petal CSS
There’s a heavy debate on whether frontend frameworks are must-haves in the current web space. You certainly have a lot to pick from and they all vary so much. But one of my newest favorites is Petal CSS.
No doubt one of the better frameworks released in 2017, I’ve recommended this many times over the past year. I think it’s a powerful choice for minimalist designers.
It doesn’t force any certain type of interface and it gives you so much control over which features you want to use.
This can’t compete with the likes of Bootstrap…but thankfully it wasn’t designed to! For a small minimalist framework, Petal is a real treat.
4. Flex UI
The Flex UI Kit is another CSS framework released in 2017. This one’s a bit newer so it doesn’t have as many updates. But it’s still usable in real-world projects.
Flex UI stands out because it runs the entire framework on the flexbox property. This means that all of the responsive code, layout grids, and typography are structured using flexbox. No more floated elements and clearfix hacks with this framework.
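As a sketch of what a flexbox-first grid looks like under the hood (the class names here are my own, not necessarily Flex UI's):

```css
/* A row is a flex container; columns share its width without floats or clearfix. */
.row {
  display: flex;
  flex-wrap: wrap; /* let columns wrap on narrow viewports */
}
.col {
  flex: 1 1 0; /* equal-width columns */
}
.col-double {
  flex: 2 1 0; /* a column twice as wide as its siblings */
}
```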
I do find this a little more generic than the Petal framework, but it’s also a reliable choice. Have a look at the demo page for sample UI elements.
5. Sticky Sidebar
You can add sticky sidebars onto any site to increase ad views, keep featured stories while scrolling or even increase email signups through your opt-in form.
In May 2017, developer Ahmed Bouhuolia released this Sticky Sidebar plugin. It runs on pure JavaScript and uses custom functions to auto-calculate where the last item should appear, based on the viewer’s browser width.
The demo page has plenty of examples, along with guides for getting started. Anyone who’s into vanilla JS should give this a shot.
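For simple cases it may also be worth knowing that CSS offers a native starting point before reaching for a plugin (a sketch; check browser support for your audience):

```css
/* The sidebar pins once its top edge is 1rem from the viewport top,
   and releases when its parent container scrolls past. */
.sidebar {
  position: sticky;
  top: 1rem;
}
```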
6. rFramework
Looking for another awesome startup framework for the web? rFramework might be worth your time since it’s fully semantic and plays nice with other libraries such as Angular.
To get started all you need are the two CSS & JS files – both of which you can pull from the GitHub repo. All the styles are pretty basic which makes this a great starting point for building websites without reworking your own code base.
Also take a look at their live page showcasing all of the core features that rFramework has to offer.
It may not seem like much now, but this has the potential to grow into a solid minimalist framework in the coming years.
7. NoobScroll
In mid-April 2017 NoobScroll was released. It’s a scrolling library in JavaScript that lets you create some pretty wacky effects with user scroll behaviors.
Have a look over the main page for some live demos and documentation. With this library you can disable certain scrollbars, create smooth scroll animations or even add a custom scroll bar into any element.
This is perfect for creating long flyout navigation on mobile screens. With this approach, you can have lengthy dropdown menus without having them grow too large.
8. jQuery Gantt
Tech enthusiasts and data scientists likely know about gantt charts – although they’re less common to the general public. This is typically a graphical representation of scheduling and it’s not something you usually find on the web.
jQuery Gantt is the first plugin of its kind, released on April 24, 2017. This has so many uses for booking, managing teams or even with SaaS apps that rely heavily on scheduling (ex: social media management tools).
It works in all modern browsers with legacy support for IE 11. You can learn more on their GitHub page, which also has setup docs for getting started.
9. Paroller.js
Building your own parallax site is easier now than ever before. And thanks to plugins like Paroller.js, you can do it in record time.
This free jQuery plugin lets you add custom parallax scrolling features onto any page element. You can target specific background photos, change the scroll speed, and even alter the direction between horizontal and vertical.
It’s a pretty solid plugin that still gets frequent updates. Have a look at their GitHub repo for more details.
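From memory, basic usage looks something like the following — treat the attribute names as assumptions and confirm them against the README before use:

```html
<!-- Hypothetical example: the data-* attribute names are from memory, not verified -->
<div class="hero" data-paroller-factor="0.3" data-paroller-type="foreground" data-paroller-direction="vertical">
  Parallax content
</div>
<script>
  // assumes jQuery and the plugin are already loaded
  $('[data-paroller-factor]').paroller();
</script>
```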
10. Password Strength Meter
Last, but certainly not least on my list is this password strength plugin, created March 11, 2017. It’s built on jQuery and gets frequent updates for new features & bug fixes.
With this plugin you can change the difficulty rating for password complexity. Plus, you can define certain parameters like the total number of required uppercase letters or special characters.
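The core idea behind such a meter can be sketched in a few lines of plain JavaScript — this is a hand-rolled illustration of the concept, not the plugin's actual API, and the rules and thresholds are invented for the example:

```javascript
// Score a password 0–5 by counting which requirements it meets.
// Thresholds (minimum uppercase/special characters) are configurable,
// mirroring the kind of parameters the plugin exposes.
function passwordStrength(password, opts = { minUppercase: 1, minSpecial: 1 }) {
  const checks = {
    length: password.length >= 8,
    uppercase: (password.match(/[A-Z]/g) || []).length >= opts.minUppercase,
    lowercase: /[a-z]/.test(password),
    digit: /[0-9]/.test(password),
    special: (password.match(/[^A-Za-z0-9]/g) || []).length >= opts.minSpecial,
  };
  const score = Object.values(checks).filter(Boolean).length;
  const labels = ['very weak', 'very weak', 'weak', 'fair', 'good', 'strong'];
  return { score, label: labels[score], checks };
}
```

A meter UI would then map the returned label onto a colored bar and list the unmet `checks` next to the field.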
If you’re interested in adding this to your own site, the GitHub repo is a nice place to start. The main demo page also has some cool examples you can test out.
But if you’re looking for more new open source projects, try searching GitHub to see what you find. The best resources often find a way of accruing stars, forks and social shares pretty fast.
from Web Designing https://1stwebdesigner.com/open-source-resources/
Evidence of the Surprising State of JavaScript Indexing
Posted by willcritchlow
Back when I started in this industry, it was standard advice to tell our clients that the search engines couldn’t execute JavaScript (JS), and anything that relied on JS would be effectively invisible and never appear in the index. Over the years, that has changed gradually, from early work-arounds (such as the horrible escaped fragment approach my colleague Rob wrote about back in 2010) to the actual execution of JS in the indexing pipeline that we see today, at least at Google.
In this article, I want to explore some things we’ve seen about JS indexing behavior in the wild and in controlled tests and share some tentative conclusions I’ve drawn about how it must be working.
A brief introduction to JS indexing
At its most basic, the idea behind JavaScript-enabled indexing is to get closer to the search engine seeing the page as the user sees it. Most users browse with JavaScript enabled, and many sites either fail without it or are severely limited. While traditional indexing considers just the raw HTML source received from the server, users typically see a page rendered based on the DOM (Document Object Model) which can be modified by JavaScript running in their web browser. JS-enabled indexing considers all content in the rendered DOM, not just that which appears in the raw HTML.
There are some complexities even in this basic definition (answers in brackets as I understand them):
What about JavaScript that requests additional content from the server? (This will generally be included, subject to timeout limits)
What about JavaScript that executes some time after the page loads? (This will generally only be indexed up to some time limit, possibly in the region of 5 seconds)
What about JavaScript that executes on some user interaction such as scrolling or clicking? (This will generally not be included)
What about JavaScript in external files rather than in-line? (This will generally be included, as long as those external files are not blocked from the robot — though see the caveat in experiments below)
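The timeout behavior in the second bullet can be modeled as a simple cutoff. This is a hypothetical model, not Google's actual mechanism; the 5000 ms figure comes from the ScreamingFrog experiments referenced later in this post.

```javascript
// Hypothetical model of the rendering timeout: content whose script fires
// before the cutoff gets indexed, later content does not.
const RENDER_TIMEOUT_MS = 5000;

// Each entry: how long (ms after load) a script waits before injecting text.
const scheduledContent = [
  { delayMs: 200,  text: 'above-the-fold copy' },
  { delayMs: 3000, text: 'async product reviews' },
  { delayMs: 9000, text: 'very late widget' },
];

// What an indexer honoring the cutoff would end up seeing.
const indexedText = scheduledContent
  .filter(item => item.delayMs <= RENDER_TIMEOUT_MS)
  .map(item => item.text);

console.log(indexedText); // ['above-the-fold copy', 'async product reviews']
```

Anything critical to ranking should therefore land in the DOM well before any plausible cutoff.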
For more on the technical details, I recommend my ex-colleague Justin’s writing on the subject.
A high-level overview of my view of JavaScript best practices
Despite the incredible work-arounds of the past (which always seemed like more effort than graceful degradation to me) the “right” answer has existed since at least 2012, with the introduction of PushState. Rob wrote about this one, too. Back then, however, it was pretty clunky and manual, and it required a concerted effort to ensure that the URL was updated in the user’s browser for each view that should be considered a “page,” that the server could return full HTML for those pages in response to new requests for each URL, and that the back button was handled correctly by your JavaScript.
Along the way, in my opinion, too many sites got distracted by a separate prerendering step. This is an approach that does the equivalent of running a headless browser to generate static HTML pages that include any changes made by JavaScript on page load, then serving those snapshots instead of the JS-reliant page in response to requests from bots. It typically treats bots differently, in a way that Google tolerates, as long as the snapshots do represent the user experience. In my opinion, this approach is a poor compromise that’s too susceptible to silent failures and falling out of date. We’ve seen a bunch of sites suffer traffic drops due to serving Googlebot broken experiences that were not immediately detected because no regular users saw the prerendered pages.
These days, if you need or want JS-enhanced functionality, more of the top frameworks have the ability to work the way Rob described in 2012, which is now called isomorphic (roughly meaning “the same”).
Isomorphic JavaScript serves HTML that corresponds to the rendered DOM for each URL, and updates the URL for each “view” that should exist as a separate page as the content is updated via JS. With this implementation, there is actually no need to render the page to index basic content, as it’s served in response to any fresh request.
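A minimal sketch of that idea, under simplified assumptions (the `routes` table and `renderPage` helper are invented for illustration, not any particular framework's API): one render function produces the full HTML for a URL, used by the server for fresh requests and by the client when JS updates the view.

```javascript
// Shared route table and render function — the "isomorphic" core.
const routes = {
  '/': { title: 'Home', body: 'Welcome' },
  '/about': { title: 'About', body: 'Who we are' },
};

function renderPage(url) {
  const page = routes[url];
  if (!page) return '<h1>404</h1>';
  return `<title>${page.title}</title><main>${page.body}</main>`;
}

// Server side: every URL returns complete HTML, so nothing needs to be
// rendered just to index the basic content.
const serverHtml = renderPage('/about');

// Client side: on navigation, the same function updates the view, and
// history.pushState would keep the URL in sync (a browser-only API).
const clientHtml = renderPage('/about');

console.log(serverHtml === clientHtml); // true
```

Because both sides share one render path, the HTML a bot receives matches what a JS-enabled user ends up seeing.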
I was fascinated by this piece of research published recently ��� you should go and read the whole study. In particular, you should watch this video (recommended in the post) in which the speaker — who is an Angular developer and evangelist — emphasizes the need for an isomorphic approach:
[embedded YouTube video]
Resources for auditing JavaScript
If you work in SEO, you will increasingly find yourself called upon to figure out whether a particular implementation is correct (hopefully on a staging/development server before it’s deployed live, but who are we kidding? You’ll be doing this live, too).
To do that, here are some resources I’ve found useful:
Justin again, describing the difference between working with the DOM and viewing source
The developer tools built into Chrome are excellent, and some of the documentation is actually really good:
The console is where you can see errors and interact with the state of the page
As soon as you get past debugging the most basic JavaScript, you will want to start setting breakpoints, which allow you to step through the code from specified points
This post from Google’s John Mueller has a decent checklist of best practices
Although it’s about a broader set of technical skills, anyone who hasn’t already read it should definitely check out Mike’s post on the technical SEO renaissance.
Some surprising/interesting results
There are likely to be timeouts on JavaScript execution
I already linked above to the ScreamingFrog post that mentions experiments they have done to measure the timeout Google uses to determine when to stop executing JavaScript (they found a limit of around 5 seconds).
It may be more complicated than that, however. This segment of a thread is interesting. It’s from a Hacker News user who goes by the username KMag and who claims to have worked at Google on the JS execution part of the indexing pipeline from 2006–2010. It’s in relation to another user speculating that Google would not care about content loaded “async” (i.e. asynchronously — in other words, loaded as part of new HTTP requests that are triggered in the background while assets continue to download):
“Actually, we did care about this content. I’m not at liberty to explain the details, but we did execute setTimeouts up to some time limit.
If they’re smart, they actually make the exact timeout a function of a HMAC of the loaded source, to make it very difficult to experiment around, find the exact limits, and fool the indexing system. Back in 2010, it was still a fixed time limit.”
What that means is that although it was initially a fixed timeout, he’s speculating (or possibly sharing without directly doing so) that timeouts are programmatically determined (presumably based on page importance and JavaScript reliance) and that they may be tied to the exact source code (the reference to “HMAC” is to do with a technical mechanism for spotting if the page has changed).
It matters how your JS is executed
I referenced this recent study earlier. In it, the author found:
Inline vs. External vs. Bundled JavaScript makes a huge difference for Googlebot
The charts at the end show the extent to which popular JavaScript frameworks perform differently depending on how they’re called, with a range of performance from passing every test to failing almost every test. For example, here’s the chart for Angular:
It’s definitely worth reading the whole thing and reviewing the performance of the different frameworks. There’s more evidence of Google saving computing resources in some areas, as well as surprising results between different frameworks.
CRO tests are getting indexed
When we first started seeing JavaScript-based split-testing platforms designed for testing changes aimed at improving conversion rate (CRO = conversion rate optimization), their inline changes to individual pages were invisible to the search engines. As Google in particular has moved up the JavaScript competency ladder through executing simple inline JS to more complex JS in external files, we are now seeing some CRO-platform-created changes being indexed. A simplified version of what’s happening is:
For users:
CRO platforms typically take a visitor to a page, check for the existence of a cookie, and if there isn’t one, randomly assign the visitor to group A or group B
Based on either the cookie value or the new assignment, the user is either served the page unchanged, or sees a version that is modified in their browser by JavaScript loaded from the CRO platform’s CDN (content delivery network)
A cookie is then set to make sure that the user sees the same version if they revisit that page later
For Googlebot:
The reliance on external JavaScript used to prevent both the bucketing and the inline changes from being indexed
With external JavaScript now being loaded, and with many of these inline changes being made using standard libraries (such as jQuery), Google is able to index the variant and hence we see CRO experiments sometimes being indexed
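The user-side bucketing flow above boils down to a few lines. This is a simplified sketch — the cookie name and helper are illustrative, not any real CRO platform's API:

```javascript
// Assign a visitor to a variant, honoring an existing cookie if present.
function assignVariant(cookies) {
  if (cookies.cro_variant) {
    return { variant: cookies.cro_variant, cookies }; // returning visitor
  }
  const variant = Math.random() < 0.5 ? 'A' : 'B';    // fresh 50/50 split
  return { variant, cookies: { ...cookies, cro_variant: variant } };
}

// First visit: random assignment, cookie recorded.
const first = assignVariant({});
// Revisit with the stored cookie: the same variant is served again.
const second = assignVariant(first.cookies);

console.log(first.variant === second.variant); // true
```

Since Googlebot typically arrives without the cookie, it gets bucketed like any fresh visitor — which is exactly how variant content can end up in the index.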
I might have expected the platforms to block their JS with robots.txt, but at least the main platforms I’ve looked at don’t do that. With Google being sympathetic towards testing, however, this shouldn’t be a major problem — just something to be aware of as you build out your user-facing CRO tests. All the more reason for your UX and SEO teams to work closely together and communicate well.
Split tests show SEO improvements from removing a reliance on JS
Although we would like to do a lot more to test the actual real-world impact of relying on JavaScript, we do have some early results. At the end of last week I published a post outlining the uplift we saw from removing a site’s reliance on JS to display content and links on category pages.
A simple test that removed the need for JavaScript on 50% of pages showed a >6% uplift in organic traffic — worth thousands of extra sessions a month. While we haven’t proven that JavaScript is always bad, nor understood the exact mechanism at work here, we have opened up a new avenue for exploration, and at least shown that it’s not a settled matter. To my mind, it highlights the importance of testing. It’s obviously our belief in the importance of SEO split-testing that led to us investing so much in the development of the ODN platform over the last 18 months or so.
Conclusion: How JavaScript indexing might work from a systems perspective
Based on all of the information we can piece together from the external behavior of the search results, public comments from Googlers, tests and experiments, and first principles, here’s how I think JavaScript indexing is working at Google at the moment: I think there is a separate queue for JS-enabled rendering, because the computational cost of trying to run JavaScript over the entire web is unnecessary given the lack of a need for it on many, many pages. In detail, I think:
Googlebot crawls and caches HTML and core resources regularly
Heuristics (and probably machine learning) are used to prioritize JavaScript rendering for each page:
Some pages are indexed with no JS execution. There are many pages that can probably be easily identified as not needing rendering, and others which are such a low priority that it isn’t worth the computing resources.
Some pages get immediate rendering – or possibly immediate basic/regular indexing, along with high-priority rendering. This would enable the immediate indexation of pages in news results or other QDF results, but also allow pages that rely heavily on JS to get updated indexation when the rendering completes.
Many pages are rendered async in a separate process/queue from both crawling and regular indexing, thereby adding the page to the index for new words and phrases found only in the JS-rendered version when rendering completes, in addition to the words and phrases found in the unrendered version indexed initially.
The JS rendering also, in addition to adding pages to the index:
May make modifications to the link graph
May add new URLs to the discovery/crawling queue for Googlebot
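The pipeline sketched in that list can be reduced to a toy decision function. Everything here — the scoring, the thresholds, the label names — is invented for illustration; it just makes the shape of the hypothesis explicit.

```javascript
// Heuristic: per page, skip rendering, render immediately, or queue it.
function renderDecision(page) {
  if (!page.usesJs) return 'index-without-rendering';
  if (page.priority >= 0.9) return 'render-immediately'; // e.g. news / QDF
  return 'queue-async-render';
}

const pages = [
  { url: '/plain-article',  usesJs: false, priority: 0.5 },
  { url: '/breaking-news',  usesJs: true,  priority: 0.95 },
  { url: '/js-widget-page', usesJs: true,  priority: 0.3 },
];

const decisions = pages.map(p => [p.url, renderDecision(p)]);
console.log(decisions);
// [ ['/plain-article', 'index-without-rendering'],
//   ['/breaking-news', 'render-immediately'],
//   ['/js-widget-page', 'queue-async-render'] ]
```

The point of the model is the triage: rendering is expensive, so most JS-reliant pages wait in the async queue while only a few earn immediate rendering.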
The idea of JavaScript rendering as a distinct and separate part of the indexing pipeline is backed up by this quote from KMag, who I mentioned previously for his contributions to this HN thread (direct link) [emphasis mine]:
“I was working on the lightweight high-performance JavaScript interpretation system that sandboxed pretty much just a JS engine and a DOM implementation that we could run on every web page on the index. Most of my work was trying to improve the fidelity of the system. My code analyzed every web page in the index.
Towards the end of my time there, there was someone in Mountain View working on a heavier, higher-fidelity system that sandboxed much more of a browser, and they were trying to improve performance so they could use it on a higher percentage of the index.”
This was the situation in 2010. It seems likely that they have moved a long way towards the headless browser in all cases, but I’m skeptical about whether it would be worth their while to render every page they crawl with JavaScript given the expense of doing so and the fact that a large percentage of pages do not change substantially when you do.
My best guess is that they’re using a combination of trying to figure out the need for JavaScript execution on a given page, coupled with trust/authority metrics to decide whether (and with what priority) to render a page with JS.
Run a test, get publicity
I have a hypothesis that I would love to see someone test: That it’s possible to get a page indexed and ranking for a nonsense word contained in the served HTML, but not initially ranking for a different nonsense word added via JavaScript; then, to see the JS get indexed some period of time later and rank for both nonsense words. If you want to run that test, let me know the results — I’d be happy to publicize them.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!
via The Moz Blog http://ift.tt/2qybjAP
0 notes
JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the 
Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ 
Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe 
không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa 
chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để 
biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: 
http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing 
http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of 
JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y xem thêm tại: http://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ Evidence of the Surprising State of JavaScript Indexing http://ift.tt/2qyfV9Y Bạn có thể xem thêm địa chỉ mua tai nghe không dây tại đây http://ift.tt/2mb4VST
0 notes
Link
Posted by willcritchlow
Back when I started in this industry, it was standard advice to tell our clients that the search engines couldn’t execute JavaScript (JS), and anything that relied on JS would be effectively invisible and never appear in the index. Over the years, that has changed gradually, from early work-arounds (such as the horrible escaped fragment approach my colleague Rob wrote about back in 2010) to the actual execution of JS in the indexing pipeline that we see today, at least at Google.
In this article, I want to explore some things we've seen about JS indexing behavior in the wild and in controlled tests and share some tentative conclusions I've drawn about how it must be working.
A brief introduction to JS indexing
At its most basic, the idea behind JavaScript-enabled indexing is to get closer to the search engine seeing the page as the user sees it. Most users browse with JavaScript enabled, and many sites either fail without it or are severely limited. While traditional indexing considers just the raw HTML source received from the server, users typically see a page rendered based on the DOM (Document Object Model) which can be modified by JavaScript running in their web browser. JS-enabled indexing considers all content in the rendered DOM, not just that which appears in the raw HTML.
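The raw-HTML-versus-rendered-DOM distinction can be made concrete with a small sketch. This is a stand-in for the browser, not a real renderer: the element id and the product text are invented, and the `render` function simply applies the same mutation the page's client-side script would.

```javascript
// Raw HTML as served -- all that a traditional, non-rendering indexer sees:
const rawHtml = '<div id="desc"></div>';

// Stand-in for the page's client-side script: apply the same mutation
// the browser's JS would, producing the rendered DOM.
function render(html) {
  return html.replace(
    '<div id="desc"></div>',
    '<div id="desc">Hand-built carbon frame, 7.9 kg.</div>'
  );
}

console.log(rawHtml.includes('carbon frame'));         // false
console.log(render(rawHtml).includes('carbon frame')); // true
```

A JS-enabled indexer sees the second string; a raw-HTML indexer only ever sees the first.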
There are some complexities even in this basic definition (answers in brackets as I understand them):
What about JavaScript that requests additional content from the server? (This will generally be included, subject to timeout limits)
What about JavaScript that executes some time after the page loads? (This will generally only be indexed up to some time limit, possibly in the region of 5 seconds)
What about JavaScript that executes on some user interaction such as scrolling or clicking? (This will generally not be included)
What about JavaScript in external files rather than in-line? (This will generally be included, as long as those external files are not blocked from the robot — though see the caveat in experiments below)
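The second case above is the one most worth modelling: content injected via `setTimeout` is only captured if it appears before the renderer's cutoff. Here is a toy model; the 5-second figure is the assumed cutoff discussed later in this post, not an official number.

```javascript
// Assumed render cutoff, per the experiments discussed below.
const RENDER_TIMEOUT_MS = 5000;

// injectionDelayMs: when the page's setTimeout fires, relative to load.
function isContentSeen(injectionDelayMs) {
  return injectionDelayMs < RENDER_TIMEOUT_MS;
}

console.log(isContentSeen(2000));  // true  -- injected at 2s, captured
console.log(isContentSeen(10000)); // false -- injected at 10s, missed
```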
For more on the technical details, I recommend my ex-colleague Justin’s writing on the subject.
A high-level overview of my view of JavaScript best practices
Despite the incredible work-arounds of the past (which always seemed like more effort than graceful degradation to me), the “right” answer has existed since at least 2012, with the introduction of pushState. Rob wrote about this one, too. Back then, however, it was pretty clunky and manual, and it required a concerted effort to ensure that the URL was updated in the user’s browser for each view that should be considered a “page,” that the server could return full HTML for those pages in response to new requests for each URL, and that the back button was handled correctly by your JavaScript.
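The pushState pattern can be sketched in a few lines. A fake `history` object stands in for the browser's so the sketch runs anywhere; the route name is illustrative.

```javascript
// Each client-side "view" gets a real URL via history.pushState; the
// server must separately be able to answer a fresh request for that
// URL with full HTML.
function navigate(history, renderView, path) {
  history.pushState({ path }, '', path); // update the address bar
  renderView(path);                      // then render client-side
}

// The back button needs handling too -- in a browser you would add:
// window.addEventListener('popstate', e => renderView(e.state.path));

const urls = [];
const fakeHistory = { pushState: (state, title, url) => urls.push(url) };
navigate(fakeHistory, () => {}, '/products/42');
console.log(urls); // ['/products/42']
```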
Along the way, in my opinion, too many sites got distracted by a separate prerendering step. This is an approach that does the equivalent of running a headless browser to generate static HTML pages that include any changes made by JavaScript on page load, then serving those snapshots instead of the JS-reliant page in response to requests from bots. It typically treats bots differently, in a way that Google tolerates, as long as the snapshots do represent the user experience. In my opinion, this approach is a poor compromise that's too susceptible to silent failures and falling out of date. We've seen a bunch of sites suffer traffic drops due to serving Googlebot broken experiences that were not immediately detected because no regular users saw the prerendered pages.
These days, if you need or want JS-enhanced functionality, more of the top frameworks have the ability to work the way Rob described in 2012, which is now called isomorphic (roughly meaning “the same”).
Isomorphic JavaScript serves HTML that corresponds to the rendered DOM for each URL, and updates the URL for each “view” that should exist as a separate page as the content is updated via JS. With this implementation, there is actually no need to render the page to index basic content, as it's served in response to any fresh request.
I was fascinated by this piece of research published recently — you should go and read the whole study. In particular, you should watch this video (recommended in the post) in which the speaker — who is an Angular developer and evangelist — emphasizes the need for an isomorphic approach:
Resources for auditing JavaScript
If you work in SEO, you will increasingly find yourself called upon to figure out whether a particular implementation is correct (hopefully on a staging/development server before it’s deployed live, but who are we kidding? You’ll be doing this live, too).
To do that, here are some resources I’ve found useful:
Justin again, describing the difference between working with the DOM and viewing source
The developer tools built into Chrome are excellent, and some of the documentation is actually really good:
The console is where you can see errors and interact with the state of the page
As soon as you get past debugging the most basic JavaScript, you will want to start setting breakpoints, which allow you to step through the code from specified points
This post from Google’s John Mueller has a decent checklist of best practices
Although it’s about a broader set of technical skills, anyone who hasn’t already read it should definitely check out Mike’s post on the technical SEO renaissance.
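The DOM-versus-view-source distinction in the first resource above can also be checked mechanically. Given the raw server HTML and the rendered DOM serialization, this helper (my own illustrative function, not from any of the linked resources) lists phrases that exist only after JavaScript has run.

```javascript
// Return the phrases present in the rendered DOM but absent from the
// raw HTML -- i.e. content that depends on JavaScript execution.
function jsOnlyPhrases(rawHtml, renderedHtml, phrases) {
  return phrases.filter(p => !rawHtml.includes(p) && renderedHtml.includes(p));
}

// In a browser console you would feed it:
//   raw      = await (await fetch(location.href)).text()
//   rendered = document.documentElement.outerHTML
console.log(jsOnlyPhrases(
  '<div id="d"></div>',
  '<div id="d">free shipping</div>',
  ['free shipping', 'returns']
)); // ['free shipping']
```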
Some surprising/interesting results
There are likely to be timeouts on JavaScript execution
I already linked above to the ScreamingFrog post that mentions experiments they have done to measure the timeout Google uses to determine when to stop executing JavaScript (they found a limit of around 5 seconds).
It may be more complicated than that, however. This segment of a thread is interesting. It's from a Hacker News user who goes by the username KMag and who claims to have worked at Google on the JS execution part of the indexing pipeline from 2006–2010. It’s in relation to another user speculating that Google would not care about content loaded “async” (i.e. asynchronously — in other words, loaded as part of new HTTP requests that are triggered in the background while assets continue to download):
“Actually, we did care about this content. I'm not at liberty to explain the details, but we did execute setTimeouts up to some time limit.
If they're smart, they actually make the exact timeout a function of a HMAC of the loaded source, to make it very difficult to experiment around, find the exact limits, and fool the indexing system. Back in 2010, it was still a fixed time limit.”
What that means is that although it was initially a fixed timeout, he’s speculating (or possibly sharing without directly doing so) that timeouts are programmatically determined (presumably based on page importance and JavaScript reliance) and that they may be tied to the exact source code (an HMAC is a keyed cryptographic hash, used here as a fingerprint of the page source that also makes the derived timeout unpredictable to outsiders).
It matters how your JS is executed
I referenced this recent study earlier. In it, the author found:
Inline vs. External vs. Bundled JavaScript makes a huge difference for Googlebot
The charts at the end show the extent to which popular JavaScript frameworks perform differently depending on how they're called, with a range of performance from passing every test to failing almost every test. For example here’s the chart for Angular:
It’s definitely worth reading the whole thing and reviewing the performance of the different frameworks. There's more evidence of Google saving computing resources in some areas, as well as surprising results between different frameworks.
CRO tests are getting indexed
When we first started seeing JavaScript-based split-testing platforms designed for testing changes aimed at improving conversion rate (CRO = conversion rate optimization), their inline changes to individual pages were invisible to the search engines. As Google in particular has moved up the JavaScript competency ladder through executing simple inline JS to more complex JS in external files, we are now seeing some CRO-platform-created changes being indexed. A simplified version of what’s happening is:
For users:
CRO platforms typically take a visitor to a page, check for the existence of a cookie, and if there isn’t one, randomly assign the visitor to group A or group B
Based on either the cookie value or the new assignment, the user is either served the page unchanged, or sees a version that is modified in their browser by JavaScript loaded from the CRO platform’s CDN (content delivery network)
A cookie is then set to make sure that the user sees the same version if they revisit that page later
For Googlebot:
The reliance on external JavaScript used to prevent both the bucketing and the inline changes from being indexed
With external JavaScript now being loaded, and with many of these inline changes being made using standard libraries (such as jQuery), Google is able to index the variant and hence we see CRO experiments sometimes being indexed
I might have expected the platforms to block their JS with robots.txt, but at least the main platforms I’ve looked at don't do that. With Google being sympathetic towards testing, however, this shouldn’t be a major problem — just something to be aware of as you build out your user-facing CRO tests. All the more reason for your UX and SEO teams to work closely together and communicate well.
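The user-side bucketing described above can be sketched as follows; the cookie name and the 50/50 split are illustrative, not taken from any particular platform.

```javascript
// Assign a visitor to a split-test group, sticky across visits.
function assignBucket(cookies) {
  if (cookies.ab_bucket) return cookies.ab_bucket; // returning visitor
  const bucket = Math.random() < 0.5 ? 'A' : 'B';  // new visitor
  cookies.ab_bucket = bucket; // persisted so repeat views match
  return bucket;
}

const jar = {};              // stand-in for document.cookie
const first = assignBucket(jar);
console.log(assignBucket(jar) === first); // true -- assignment is sticky
```

Note that a crawler typically arrives without cookies, so on each render it gets a fresh random assignment, which is consistent with variant content sometimes showing up in the index.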
Split tests show SEO improvements from removing a reliance on JS
Although we would like to do a lot more to test the actual real-world impact of relying on JavaScript, we do have some early results. At the end of last week I published a post outlining the uplift we saw from removing a site’s reliance on JS to display content and links on category pages.
A simple test that removed the need for JavaScript on 50% of pages showed a >6% uplift in organic traffic — worth thousands of extra sessions a month. While we haven’t proven that JavaScript is always bad, nor understood the exact mechanism at work here, we have opened up a new avenue for exploration, and at least shown that it’s not a settled matter. To my mind, it highlights the importance of testing. It’s obviously our belief in the importance of SEO split-testing that led to us investing so much in the development of the ODN platform over the last 18 months or so.
Conclusion: How JavaScript indexing might work from a systems perspective
Based on all of the information we can piece together from the external behavior of the search results, public comments from Googlers, tests and experiments, and first principles, here’s how I think JavaScript indexing is working at Google at the moment: I think there is a separate queue for JS-enabled rendering, because the computational cost of running JavaScript over the entire web is hard to justify when many, many pages don’t need it. In detail, I think:
Googlebot crawls and caches HTML and core resources regularly
Heuristics (and probably machine learning) are used to prioritize JavaScript rendering for each page:
Some pages are indexed with no JS execution. There are many pages that can probably be easily identified as not needing rendering, and others which are such a low priority that it isn’t worth the computing resources.
Some pages get immediate rendering – or possibly immediate basic/regular indexing, along with high-priority rendering. This would enable the immediate indexation of pages in news results or other QDF results, but also allow pages that rely heavily on JS to get updated indexation when the rendering completes.
Many pages are rendered async in a separate process/queue from both crawling and regular indexing, thereby adding the page to the index for new words and phrases found only in the JS-rendered version when rendering completes, in addition to the words and phrases found in the unrendered version indexed initially.
The JS rendering also, in addition to adding pages to the index:
May make modifications to the link graph
May add new URLs to the discovery/crawling queue for Googlebot
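The hypothesized two-phase pipeline can be sketched in code. To be clear, this is pure speculation on my part — the scoring function and queue structure are invented for illustration and bear no relation to Google's actual systems:

```javascript
// Speculative sketch: index raw HTML immediately, and schedule JS
// rendering separately, with a priority derived from heuristics.

function renderPriority(page) {
  // Pages that clearly don't need rendering are skipped entirely.
  if (!page.usesJavaScript) return null;
  // Otherwise, weight by page importance and apparent JS reliance.
  return page.authorityScore * page.jsRelianceScore;
}

function processCrawledPage(page, index, renderQueue) {
  // Phase 1: index words found in the raw, unrendered HTML right away.
  index.add(page.url, page.rawHtmlTerms);
  // Phase 2: queue JS rendering asynchronously, if it seems worthwhile.
  // When rendering completes, any new words, links, and discovered URLs
  // would be merged back into the index and crawl queue.
  const priority = renderPriority(page);
  if (priority !== null) renderQueue.push({ url: page.url, priority });
}
```

The key property of this design is that no page waits on rendering to be indexed at all — the rendered version simply augments the unrendered one later, which matches the delayed appearance of JS-only content in the index.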
The idea of JavaScript rendering as a distinct and separate part of the indexing pipeline is backed up by this quote from KMag, who I mentioned previously for his contributions to this HN thread (direct link) [emphasis mine]:
“I was working on the lightweight high-performance JavaScript interpretation system that sandboxed pretty much just a JS engine and a DOM implementation that we could run on every web page on the index. Most of my work was trying to improve the fidelity of the system. My code analyzed every web page in the index.
Towards the end of my time there, there was someone in Mountain View working on a heavier, higher-fidelity system that sandboxed much more of a browser, and they were trying to improve performance so they could use it on a higher percentage of the index.”
This was the situation in 2010. It seems likely that they have moved a long way towards the headless browser in all cases, but I’m skeptical about whether it would be worth their while to render every page they crawl with JavaScript given the expense of doing so and the fact that a large percentage of pages do not change substantially when you do.
My best guess is that they're using a combination of trying to figure out the need for JavaScript execution on a given page, coupled with trust/authority metrics to decide whether (and with what priority) to render a page with JS.
Run a test, get publicity
I have a hypothesis that I would love to see someone test: That it’s possible to get a page indexed and ranking for a nonsense word contained in the served HTML, but not initially ranking for a different nonsense word added via JavaScript; then, to see the JS get indexed some period of time later and rank for both nonsense words. If you want to run that test, let me know the results — I’d be happy to publicize them.
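If you want to try it, a minimal test page might look like the following — the two nonsense words here are placeholders you would replace with unique strings that appear nowhere else on the web:

```html
<!doctype html>
<html>
  <head><title>JS indexing test</title></head>
  <body>
    <!-- Nonsense word served directly in the HTML: indexable immediately -->
    <p>flarbonickel</p>
    <p id="target"></p>
    <script>
      // Nonsense word added only via JavaScript: the hypothesis is that
      // this one starts ranking later, once the page is JS-rendered.
      document.getElementById('target').textContent = 'zorbulated';
    </script>
  </body>
</html>
```

Searching for each word periodically after the page is crawled would reveal the lag, if any, between HTML indexing and JS rendering.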
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
Text
Evidence of the Surprising State of JavaScript Indexing
Posted by willcritchlow
Back when I started in this industry, it was standard advice to tell our clients that the search engines couldn’t execute JavaScript (JS), and anything that relied on JS would be effectively invisible and never appear in the index. Over the years, that has changed gradually, from early work-arounds (such as the horrible escaped fragment approach my colleague Rob wrote about back in 2010) to the actual execution of JS in the indexing pipeline that we see today, at least at Google.
In this article, I want to explore some things we’ve seen about JS indexing behavior in the wild and in controlled tests and share some tentative conclusions I’ve drawn about how it must be working.
A brief introduction to JS indexing
At its most basic, the idea behind JavaScript-enabled indexing is to get closer to the search engine seeing the page as the user sees it. Most users browse with JavaScript enabled, and many sites either fail without it or are severely limited. While traditional indexing considers just the raw HTML source received from the server, users typically see a page rendered based on the DOM (Document Object Model) which can be modified by JavaScript running in their web browser. JS-enabled indexing considers all content in the rendered DOM, not just that which appears in the raw HTML.
There are some complexities even in this basic definition (answers in brackets as I understand them):
What about JavaScript that requests additional content from the server? (This will generally be included, subject to timeout limits)
What about JavaScript that executes some time after the page loads? (This will generally only be indexed up to some time limit, possibly in the region of 5 seconds)
What about JavaScript that executes on some user interaction such as scrolling or clicking? (This will generally not be included)
What about JavaScript in external files rather than in-line? (This will generally be included, as long as those external files are not blocked from the robot — though see the caveat in experiments below)
For more on the technical details, I recommend my ex-colleague Justin’s writing on the subject.
A high-level overview of my view of JavaScript best practices
Despite the incredible work-arounds of the past (which always seemed like more effort than graceful degradation to me) the “right” answer has existed since at least 2012, with the introduction of PushState. Rob wrote about this one, too. Back then, however, it was pretty clunky and manual and it required a concerted effort to ensure both that the URL was updated in the user’s browser for each view that should be considered a “page,” that the server could return full HTML for those pages in response to new requests for each URL, and that the back button was handled correctly by your JavaScript.
Along the way, in my opinion, too many sites got distracted by a separate prerendering step. This is an approach that does the equivalent of running a headless browser to generate static HTML pages that include any changes made by JavaScript on page load, then serving those snapshots instead of the JS-reliant page in response to requests from bots. It typically treats bots differently, in a way that Google tolerates, as long as the snapshots do represent the user experience. In my opinion, this approach is a poor compromise that’s too susceptible to silent failures and falling out of date. We’ve seen a bunch of sites suffer traffic drops due to serving Googlebot broken experiences that were not immediately detected because no regular users saw the prerendered pages.
These days, if you need or want JS-enhanced functionality, more of the top frameworks have the ability to work the way Rob described in 2012, which is now called isomorphic (roughly meaning “the same”).
Isomorphic JavaScript serves HTML that corresponds to the rendered DOM for each URL, and updates the URL for each “view” that should exist as a separate page as the content is updated via JS. With this implementation, there is actually no need to render the page to index basic content, as it’s served in response to any fresh request.
I was fascinated by this piece of research published recently — you should go and read the whole study. In particular, you should watch this video (recommended in the post) in which the speaker — who is an Angular developer and evangelist — emphasizes the need for an isomorphic approach:
youtube
Resources for auditing JavaScript
If you work in SEO, you will increasingly find yourself called upon to figure out whether a particular implementation is correct (hopefully on a staging/development server before it’s deployed live, but who are we kidding? You’ll be doing this live, too).
To do that, here are some resources I’ve found useful:
Justin again, describing the difference between working with the DOM and viewing source
The developer tools built into Chrome are excellent, and some of the documentation is actually really good:
The console is where you can see errors and interact with the state of the page
As soon as you get past debugging the most basic JavaScript, you will want to start setting breakpoints, which allow you to step through the code from specified points
This post from Google’s John Mueller has a decent checklist of best practices
Although it’s about a broader set of technical skills, anyone who hasn’t already read it should definitely check out Mike’s post on the technical SEO renaissance.
Some surprising/interesting results
There are likely to be timeouts on JavaScript execution
I already linked above to the ScreamingFrog post that mentions experiments they have done to measure the timeout Google uses to determine when to stop executing JavaScript (they found a limit of around 5 seconds).
It may be more complicated than that, however. This segment of a thread is interesting. It’s from a Hacker News user who goes by the username KMag and who claims to have worked at Google on the JS execution part of the indexing pipeline from 2006–2010. It’s in relation to another user speculating that Google would not care about content loaded “async” (i.e. asynchronously — in other words, loaded as part of new HTTP requests that are triggered in the background while assets continue to download):
“Actually, we did care about this content. I’m not at liberty to explain the details, but we did execute setTimeouts up to some time limit.
If they’re smart, they actually make the exact timeout a function of a HMAC of the loaded source, to make it very difficult to experiment around, find the exact limits, and fool the indexing system. Back in 2010, it was still a fixed time limit.”
What that means is that although it was initially a fixed timeout, he’s speculating (or possibly sharing without directly doing so) that timeouts are programmatically determined (presumably based on page importance and JavaScript reliance) and that they may be tied to the exact source code (the reference to “HMAC” is to do with a technical mechanism for spotting if the page has changed).
It matters how your JS is executed
I referenced this recent study earlier. In it, the author found:
Inline vs. External vs. Bundled JavaScript makes a huge difference for Googlebot
The charts at the end show the extent to which popular JavaScript frameworks perform differently depending on how they’re called, with a range of performance from passing every test to failing almost every test. For example here’s the chart for Angular:
It’s definitely worth reading the whole thing and reviewing the performance of the different frameworks. There’s more evidence of Google saving computing resources in some areas, as well as surprising results between different frameworks.
CRO tests are getting indexed
When we first started seeing JavaScript-based split-testing platforms designed for testing changes aimed at improving conversion rate (CRO = conversion rate optimization), their inline changes to individual pages were invisible to the search engines. As Google in particular has moved up the JavaScript competency ladder through executing simple inline JS to more complex JS in external files, we are now seeing some CRO-platform-created changes being indexed. A simplified version of what’s happening is:
For users:
CRO platforms typically take a visitor to a page, check for the existence of a cookie, and if there isn’t one, randomly assign the visitor to group A or group B
Based on either the cookie value or the new assignment, the user is either served the page unchanged, or sees a version that is modified in their browser by JavaScript loaded from the CRO platform’s CDN (content delivery network)
A cookie is then set to make sure that the user sees the same version if they revisit that page later
For Googlebot:
The reliance on external JavaScript used to prevent both the bucketing and the inline changes from being indexed
With external JavaScript now being loaded, and with many of these inline changes being made using standard libraries (such as JQuery), Google is able to index the variant and hence we see CRO experiments sometimes being indexed
I might have expected the platforms to block their JS with robots.txt, but at least the main platforms I’ve looked at don’t do that. With Google being sympathetic towards testing, however, this shouldn’t be a major problem — just something to be aware of as you build out your user-facing CRO tests. All the more reason for your UX and SEO teams to work closely together and communicate well.
Split tests show SEO improvements from removing a reliance on JS
Although we would like to do a lot more to test the actual real-world impact of relying on JavaScript, we do have some early results. At the end of last week I published a post outlining the uplift we saw from removing a site’s reliance on JS to display content and links on category pages.
A simple test that removed the need for JavaScript on 50% of pages showed a >6% uplift in organic traffic — worth thousands of extra sessions a month. While we haven’t proven that JavaScript is always bad, nor understood the exact mechanism at work here, we have opened up a new avenue for exploration, and at least shown that it’s not a settled matter. To my mind, it highlights the importance of testing. It’s obviously our belief in the importance of SEO split-testing that led to us investing so much in the development of the ODN platform over the last 18 months or so.
Conclusion: How JavaScript indexing might work from a systems perspective
Based on all of the information we can piece together from the external behavior of the search results, public comments from Googlers, tests and experiments, and first principles, here’s how I think JavaScript indexing is working at Google at the moment: I think there is a separate queue for JS-enabled rendering, because the computational cost of trying to run JavaScript over the entire web is unnecessary given the lack of a need for it on many, many pages. In detail, I think:
Googlebot crawls and caches HTML and core resources regularly
Heuristics (and probably machine learning) are used to prioritize JavaScript rendering for each page:
Some pages are indexed with no JS execution. There are many pages that can probably be easily identified as not needing rendering, and others which are such a low priority that it isn’t worth the computing resources.
Some pages get immediate rendering – or possibly immediate basic/regular indexing, along with high-priority rendering. This would enable the immediate indexation of pages in news results or other QDF results, but also allow pages that rely heavily on JS to get updated indexation when the rendering completes.
Many pages are rendered async in a separate process/queue from both crawling and regular indexing, thereby adding the page to the index for new words and phrases found only in the JS-rendered version when rendering completes, in addition to the words and phrases found in the unrendered version indexed initially.
The JS rendering also, in addition to adding pages to the index:
May make modifications to the link graph
May add new URLs to the discovery/crawling queue for Googlebot
The idea of JavaScript rendering as a distinct and separate part of the indexing pipeline is backed up by this quote from KMag, who I mentioned previously for his contributions to this HN thread (direct link) [emphasis mine]:
“I was working on the lightweight high-performance JavaScript interpretation system that sandboxed pretty much just a JS engine and a DOM implementation that we could run on every web page on the index. Most of my work was trying to improve the fidelity of the system. My code analyzed every web page in the index.
Towards the end of my time there, there was someone in Mountain View working on a heavier, higher-fidelity system that sandboxed much more of a browser, and they were trying to improve performance so they could use it on a higher percentage of the index.”
This was the situation in 2010. It seems likely that they have moved a long way towards the headless browser in all cases, but I’m skeptical about whether it would be worth their while to render every page they crawl with JavaScript given the expense of doing so and the fact that a large percentage of pages do not change substantially when you do.
My best guess is that they’re using a combination of trying to figure out the need for JavaScript execution on a given page, coupled with trust/authority metrics to decide whether (and with what priority) to render a page with JS.
Run a test, get publicity
I have a hypothesis that I would love to see someone test: That it’s possible to get a page indexed and ranking for a nonsense word contained in the served HTML, but not initially ranking for a different nonsense word added via JavaScript; then, to see the JS get indexed some period of time later and rank for both nonsense words. If you want to run that test, let me know the results — I’d be happy to publicize them.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!
from Moz Blog https://moz.com/blog/evidence-of-the-surprising-state-of-javascript-indexing via IFTTT
from Blogger http://imlocalseo.blogspot.com/2017/05/evidence-of-surprising-state-of.html via IFTTT
from IM Local SEO https://imlocalseo.wordpress.com/2017/05/29/evidence-of-the-surprising-state-of-javascript-indexing/ via IFTTT
from Gana Dinero Colaborando | Wecon Project https://weconprojectspain.wordpress.com/2017/05/29/evidence-of-the-surprising-state-of-javascript-indexing/ via IFTTT
from WordPress https://mrliberta.wordpress.com/2017/05/29/evidence-of-the-surprising-state-of-javascript-indexing/ via IFTTT
0 notes
Text
Evidence of the Surprising State of JavaScript Indexing
Posted by willcritchlow
Back when I started in this industry, it was standard advice to tell our clients that the search engines couldn’t execute JavaScript (JS), and anything that relied on JS would be effectively invisible and never appear in the index. Over the years, that has changed gradually, from early work-arounds (such as the horrible escaped fragment approach my colleague Rob wrote about back in 2010) to the actual execution of JS in the indexing pipeline that we see today, at least at Google.
In this article, I want to explore some things we've seen about JS indexing behavior in the wild and in controlled tests and share some tentative conclusions I've drawn about how it must be working.
A brief introduction to JS indexing
At its most basic, the idea behind JavaScript-enabled indexing is to get closer to the search engine seeing the page as the user sees it. Most users browse with JavaScript enabled, and many sites either fail without it or are severely limited. While traditional indexing considers just the raw HTML source received from the server, users typically see a page rendered based on the DOM (Document Object Model) which can be modified by JavaScript running in their web browser. JS-enabled indexing considers all content in the rendered DOM, not just that which appears in the raw HTML.
There are some complexities even in this basic definition (answers in brackets as I understand them):
What about JavaScript that requests additional content from the server? (This will generally be included, subject to timeout limits)
What about JavaScript that executes some time after the page loads? (This will generally only be indexed up to some time limit, possibly in the region of 5 seconds)
What about JavaScript that executes on some user interaction such as scrolling or clicking? (This will generally not be included)
What about JavaScript in external files rather than in-line? (This will generally be included, as long as those external files are not blocked from the robot — though see the caveat in experiments below)
For more on the technical details, I recommend my ex-colleague Justin’s writing on the subject.
A high-level overview of my view of JavaScript best practices
Despite the incredible work-arounds of the past (which always seemed like more effort than graceful degradation to me) the “right” answer has existed since at least 2012, with the introduction of PushState. Rob wrote about this one, too. Back then, however, it was pretty clunky and manual and it required a concerted effort to ensure both that the URL was updated in the user’s browser for each view that should be considered a “page,” that the server could return full HTML for those pages in response to new requests for each URL, and that the back button was handled correctly by your JavaScript.
Along the way, in my opinion, too many sites got distracted by a separate prerendering step. This is an approach that does the equivalent of running a headless browser to generate static HTML pages that include any changes made by JavaScript on page load, then serving those snapshots instead of the JS-reliant page in response to requests from bots. It typically treats bots differently, in a way that Google tolerates, as long as the snapshots do represent the user experience. In my opinion, this approach is a poor compromise that's too susceptible to silent failures and falling out of date. We've seen a bunch of sites suffer traffic drops due to serving Googlebot broken experiences that were not immediately detected because no regular users saw the prerendered pages.
These days, if you need or want JS-enhanced functionality, more of the top frameworks have the ability to work the way Rob described in 2012, which is now called isomorphic (roughly meaning “the same”).
Isomorphic JavaScript serves HTML that corresponds to the rendered DOM for each URL, and updates the URL for each “view” that should exist as a separate page as the content is updated via JS. With this implementation, there is actually no need to render the page to index basic content, as it's served in response to any fresh request.
I was fascinated by this piece of research published recently — you should go and read the whole study. In particular, you should watch this video (recommended in the post) in which the speaker — who is an Angular developer and evangelist — emphasizes the need for an isomorphic approach:
Resources for auditing JavaScript
If you work in SEO, you will increasingly find yourself called upon to figure out whether a particular implementation is correct (hopefully on a staging/development server before it’s deployed live, but who are we kidding? You’ll be doing this live, too).
To do that, here are some resources I’ve found useful:
Justin again, describing the difference between working with the DOM and viewing source
The developer tools built into Chrome are excellent, and some of the documentation is actually really good:
The console is where you can see errors and interact with the state of the page
As soon as you get past debugging the most basic JavaScript, you will want to start setting breakpoints, which allow you to step through the code from specified points
This post from Google’s John Mueller has a decent checklist of best practices
Although it’s about a broader set of technical skills, anyone who hasn’t already read it should definitely check out Mike’s post on the technical SEO renaissance.
Some surprising/interesting results
There are likely to be timeouts on JavaScript execution
I already linked above to the ScreamingFrog post that mentions experiments they have done to measure the timeout Google uses to determine when to stop executing JavaScript (they found a limit of around 5 seconds).
It may be more complicated than that, however. This segment of a thread is interesting. It's from a Hacker News user who goes by the username KMag and who claims to have worked at Google on the JS execution part of the indexing pipeline from 2006–2010. It’s in relation to another user speculating that Google would not care about content loaded “async” (i.e. asynchronously — in other words, loaded as part of new HTTP requests that are triggered in the background while assets continue to download):
“Actually, we did care about this content. I'm not at liberty to explain the details, but we did execute setTimeouts up to some time limit.
If they're smart, they actually make the exact timeout a function of a HMAC of the loaded source, to make it very difficult to experiment around, find the exact limits, and fool the indexing system. Back in 2010, it was still a fixed time limit.”
What that means is that although it was initially a fixed timeout, he’s speculating (or possibly sharing without directly doing so) that timeouts are programmatically determined (presumably based on page importance and JavaScript reliance) and that they may be tied to the exact source code (the reference to “HMAC” is to do with a technical mechanism for spotting if the page has changed).
It matters how your JS is executed
I referenced this recent study earlier. In it, the author found:
Inline vs. External vs. Bundled JavaScript makes a huge difference for Googlebot
The charts at the end show the extent to which popular JavaScript frameworks perform differently depending on how they're called, with a range of performance from passing every test to failing almost every test. For example here’s the chart for Angular:
It’s definitely worth reading the whole thing and reviewing the performance of the different frameworks. There's more evidence of Google saving computing resources in some areas, as well as surprising results between different frameworks.
CRO tests are getting indexed
When we first started seeing JavaScript-based split-testing platforms designed for testing changes aimed at improving conversion rate (CRO = conversion rate optimization), their inline changes to individual pages were invisible to the search engines. As Google in particular has moved up the JavaScript competency ladder through executing simple inline JS to more complex JS in external files, we are now seeing some CRO-platform-created changes being indexed. A simplified version of what’s happening is:
For users:
CRO platforms typically take a visitor to a page, check for the existence of a cookie, and if there isn’t one, randomly assign the visitor to group A or group B
Based on either the cookie value or the new assignment, the user is either served the page unchanged, or sees a version that is modified in their browser by JavaScript loaded from the CRO platform’s CDN (content delivery network)
A cookie is then set to make sure that the user sees the same version if they revisit that page later
For Googlebot:
The reliance on external JavaScript used to prevent both the bucketing and the inline changes from being indexed
With external JavaScript now being loaded, and with many of these inline changes being made using standard libraries (such as JQuery), Google is able to index the variant and hence we see CRO experiments sometimes being indexed
I might have expected the platforms to block their JS with robots.txt, but at least the main platforms I’ve looked at don't do that. With Google being sympathetic towards testing, however, this shouldn’t be a major problem — just something to be aware of as you build out your user-facing CRO tests. All the more reason for your UX and SEO teams to work closely together and communicate well.
Split tests show SEO improvements from removing a reliance on JS
Although we would like to do a lot more to test the actual real-world impact of relying on JavaScript, we do have some early results. At the end of last week I published a post outlining the uplift we saw from removing a site’s reliance on JS to display content and links on category pages.
A simple test that removed the need for JavaScript on 50% of pages showed a >6% uplift in organic traffic — worth thousands of extra sessions a month. While we haven’t proven that JavaScript is always bad, nor understood the exact mechanism at work here, we have opened up a new avenue for exploration, and at least shown that it’s not a settled matter. To my mind, it highlights the importance of testing. It’s obviously our belief in the importance of SEO split-testing that led to us investing so much in the development of the ODN platform over the last 18 months or so.
Conclusion: How JavaScript indexing might work from a systems perspective
Based on all of the information we can piece together from the external behavior of the search results, public comments from Googlers, tests and experiments, and first principles, here’s how I think JavaScript indexing is working at Google at the moment: I think there is a separate queue for JS-enabled rendering, because the computational cost of trying to run JavaScript over the entire web is unnecessary given the lack of a need for it on many, many pages. In detail, I think:
Googlebot crawls and caches HTML and core resources regularly
Heuristics (and probably machine learning) are used to prioritize JavaScript rendering for each page:
Some pages are indexed with no JS execution. There are many pages that can probably be easily identified as not needing rendering, and others which are such a low priority that it isn’t worth the computing resources.
Some pages get immediate rendering – or possibly immediate basic/regular indexing, along with high-priority rendering. This would enable the immediate indexation of pages in news results or other QDF results, but also allow pages that rely heavily on JS to get updated indexation when the rendering completes.
Many pages are rendered async in a separate process/queue from both crawling and regular indexing, thereby adding the page to the index for new words and phrases found only in the JS-rendered version when rendering completes, in addition to the words and phrases found in the unrendered version indexed initially.
The JS rendering also, in addition to adding pages to the index:
May make modifications to the link graph
May add new URLs to the discovery/crawling queue for Googlebot
The idea of JavaScript rendering as a distinct and separate part of the indexing pipeline is backed up by this quote from KMag, who I mentioned previously for his contributions to this HN thread (direct link) [emphasis mine]:
“I was working on the lightweight high-performance JavaScript interpretation system that sandboxed pretty much just a JS engine and a DOM implementation that we could run on every web page on the index. Most of my work was trying to improve the fidelity of the system. My code analyzed every web page in the index.
Towards the end of my time there, there was someone in Mountain View working on a heavier, higher-fidelity system that sandboxed much more of a browser, and they were trying to improve performance so they could use it on a higher percentage of the index.”
This was the situation in 2010. It seems likely that they have moved a long way towards the headless browser in all cases, but I’m skeptical about whether it would be worth their while to render every page they crawl with JavaScript given the expense of doing so and the fact that a large percentage of pages do not change substantially when you do.
My best guess is that they're using a combination of trying to figure out the need for JavaScript execution on a given page, coupled with trust/authority metrics to decide whether (and with what priority) to render a page with JS.
Run a test, get publicity
I have a hypothesis that I would love to see someone test: That it’s possible to get a page indexed and ranking for a nonsense word contained in the served HTML, but not initially ranking for a different nonsense word added via JavaScript; then, to see the JS get indexed some period of time later and rank for both nonsense words. If you want to run that test, let me know the results — I’d be happy to publicize them.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
from Blogger http://ift.tt/2reyJPP via IFTTT