russianseo · 7 years
Yandex.Metrica launches a new tool for monitoring the popularity of search engines and browsers
Yandex.Metrica launches Yandex.Radar, a public tool for monitoring the popularity of search engines and browsers in Russia, Belarus, Kazakhstan and Turkey. It lets you track both the overall picture and changes in slices for specific platforms and device types. For example, you can see how the shares of desktop search engines or mobile browsers have changed over the year.
"Precision and impartiality: these principles were our guide while working on Radar. We started with search engines and browsers, markets that have a significant impact on the modern Internet. In the future, the scope of Yandex.Radar will be expanded. The goal of the project is to create a tool that reflects the entire market of Internet services and technologies in Russia and other countries," says Viktor Tarnavsky, head of Yandex.Metrica.
Radar's reports are compiled from data about which search engines bring visitors to sites that use Yandex.Metrica, and which browsers those sites are opened in. According to research by W3Techs, Yandex.Metrica is the second most popular web analytics system in the world, and according to Ruward: Track research it is also the largest web analytics tool in Russia. In June 2017, Metrica registered 78.3% of the traffic within the .ru domain zone.
You can read more about how Yandex.Radar works in the "How it works" section of the tool's website, and about Yandex.Metrica itself in the article on the company's website. Follow the news about the service on the blog.
Yandex releases a new machine learning library as open source
Yandex has developed a new machine learning method called CatBoost. It efficiently trains models on heterogeneous data, such as user location, operation history and device type. The CatBoost library has been released as open source, so anyone can use it. To work with CatBoost, it is enough to install it on your computer. The library supports Linux, Windows and macOS, and is available for the Python and R programming languages. Yandex has also developed CatBoost Viewer, a visualization tool that lets you monitor the training process on charts. You can download CatBoost and CatBoost Viewer from GitHub.
CatBoost is the successor of Matrixnet, the machine learning method used in almost all Yandex services. Like Matrixnet, CatBoost uses gradient boosting, which is well suited to heterogeneous data. But whereas Matrixnet trains models on numerical data only, CatBoost also takes non-numerical features into account, such as cloud types or building types. Previously, such data had to be translated into numbers, which could change its meaning and reduce the accuracy of the model. Now it can be used in its original form. Thanks to this, CatBoost achieves higher training quality than comparable methods for heterogeneous data. It can be applied in a variety of areas, from banking to industry.
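To illustrate the problem described above, here is a toy sketch of target encoding, the general idea behind replacing a category with a statistic of the target rather than an arbitrary number. This is illustrative only and much simpler than CatBoost itself, which computes *ordered* target statistics over random permutations to avoid target leakage; the data below is invented.

```python
# Toy illustration: encode a categorical feature as a smoothed mean of the
# target instead of arbitrary numeric labels. CatBoost does something far
# more careful (ordered statistics), but the motivation is the same.
from collections import defaultdict

def target_encode(categories, targets, prior_weight=1.0):
    """Replace each category with a smoothed mean of the target."""
    global_mean = sum(targets) / len(targets)
    sums = defaultdict(float)
    counts = defaultdict(int)
    for cat, y in zip(categories, targets):
        sums[cat] += y
        counts[cat] += 1
    return [
        (sums[c] + prior_weight * global_mean) / (counts[c] + prior_weight)
        for c in categories
    ]

device = ["mobile", "desktop", "mobile", "tablet", "desktop", "mobile"]
clicked = [1, 0, 1, 0, 1, 0]
print([round(v, 3) for v in target_encode(device, clicked)])
# -> [0.625, 0.5, 0.625, 0.25, 0.5, 0.625]
```

Naive label encoding ("mobile" = 0, "desktop" = 1, ...) imposes a meaningless order on the categories; an encoding tied to the target, as sketched here, preserves more of their meaning.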
Mikhail Bilenko, head of the Yandex machine intelligence and research department:
"Yandex has been doing machine learning for many years, and CatBoost was created by the best specialists in the field. By releasing the CatBoost library as open source, we want to contribute to the development of machine learning. I should note that CatBoost is the first Russian machine learning method to become available as open source. We hope that the community of experts will appreciate it and help make it even better."
The new method has already been tested on Yandex services. As part of the experiment, it was used to improve search results, to rank recommendations in the Yandex.Zen feed, and to compute weather forecasts in the Meteum technology; in all cases it proved better than Matrixnet. In the future, CatBoost will be rolled out to other services. It is also used by the Yandex Data Factory team in its solutions for industry, in particular for optimizing raw material consumption and predicting defects.
In addition, CatBoost was implemented by the European Center for Nuclear Research (CERN): they are using it to combine data obtained from different parts of the LHCb detector.
Recommendations for correctly presenting a site in Yandex search
The additional information that can be sent to the search takes different forms. Part of it relates directly to the ranking of the site: the better the search understands the purpose of the resource and its audience, the more correctly it will rank it. The other part relates to the visual presentation of the site in the search results, and this is an area you can actively influence. In this article we discuss in detail how to tell the search which region the site belongs to, how to provide additional information for the site's snippet, and how to properly tell the search how useful and interesting the site is to users.
Information about the region of the site
The importance of assigning the right region to a site is difficult to overestimate. Pizza in Vladivostok may be terrifically tasty, but it is completely useless to a user in Moscow or St. Petersburg who wants his pizza hot, not in a week. The same goes for delivery: the user chooses the store and the site from which he will get the selected product quickly and conveniently.
The search values information about a site's region highly, so it tries to collect it on its own by analyzing the site's content. However, no one knows better than the owner which region the site serves or which region matters for its development. It is better if webmasters indicate this information themselves.
There are three ways to do this:
The easiest way is to specify the region via Webmaster. To do this, go to the "Regionality" section in Webmaster, select the desired region, specify a link to a page with contacts (or other regional information), and save the settings. A limitation of adding a region via Webmaster is that only one region can be assigned per site.
As for businesses declaring that they deliver throughout the country, around the world or across the universe: practice shows that many overestimate their capabilities. Ambition is good, but if you are not a major market player and cannot serve users in Krasnodar and Kazan with the same speed and quality, it is better not to specify the whole of Russia as your region, but to focus on the region where you do most of your work.
The second way is to specify the region through the Yandex.Directory. The difference is that the Directory confirms not the site but the organization, and regions are assigned taking into account the locations of the organization's branches. The advantages of the Directory are that, in addition to being able to specify several regions (including branches), the organization starts to appear on Yandex.Maps.
You can add an organization to the Directory yourself, or confirm your rights to it if Yandex has already found and entered it there. Verification takes place by phone: the call center calls the organization's number and transmits a code, which you then enter in a special form in the Directory, thereby confirming your rights. Confirmation also allows the owner of the organization to keep the listed information up to date, add or change photos, collect customer reviews and respond to them, and gather statistics: how many people were interested in the organization, what they searched for next, and so on.
In addition to the familiar statistics on impressions and clicks, the updated Directory collects query statistics by category, plots this data on a map and marks competitors on it, showing statistics not only for your organization but also for competing ones, which lets a business compare its indicators and improve.
The third way to add a region is to specify it through the Yandex.Catalog. Adding a site through this tool is also quite simple, but it is worth remembering two features:
Only 7 regions can be assigned through the Catalog.
There is a queue for moderation.
Adding a site to the Yandex.Catalog is free, but to speed up its inclusion you can pay for moderation. It is also important to remember that the application can be rejected under both free and paid moderation. Therefore, before submitting, make sure that the site complies with the rules of the Catalog.
Yandex recommends using all three ways in order to assign the site to as many regions as possible (if necessary).
Site presentation in search
The presentation of a site in the search is closely tied to its snippet. A snippet is the description of a site page in the SERP, and it should be treated as a tool for obtaining traffic.
A snippet has impressions, clicks and CTR, and every webmaster can and should analyze these metrics to understand how the snippets of different pages are displayed in the search and whether they satisfy users. For this, Webmaster has a special section called "Search queries", where you can specify the queries of most interest and see how your site appears in the top results, how its snippets are formed and what their CTR is.
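The per-query analysis described above boils down to one ratio. Here is a minimal sketch of computing CTR from impression and click counts such as those exported from the "Search queries" section; the query names and numbers are invented.

```python
# Compute click-through rate (CTR) per query from impression/click counts.
def ctr(clicks, impressions):
    return clicks / impressions if impressions else 0.0

queries = {
    # query: (impressions, clicks) -- hypothetical export data
    "buy refrigerator": (1200, 84),
    "refrigerator price": (640, 12),
}

for q, (imp, cl) in queries.items():
    print(f"{q}: CTR = {ctr(cl, imp):.1%}")
# prints e.g. "buy refrigerator: CTR = 7.0%"
```

A page whose snippet shows many impressions but a low CTR relative to its position is the natural first candidate for reworking its title and description.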
How can you affect the snippet? A simple snippet consists of a title, a website address and a description, and here the webmaster can influence almost everything.
The title and description can be taken from the title and description service tags, from the text on the page itself, from micro-markup, or from the description in the Yandex.Catalog. To find out where the search took a particular piece of snippet information from, open the page from the search results, view its source code, and search it for the exact wording shown in the snippet. Most likely, you will find where this information was pulled from.
If the information shown in the snippet is not on the page, there are two options: the description was taken from the Yandex.Catalog, or the information has since been deleted from the page. Yandex does not invent anything and takes snippet information solely from these sources. Whether a text used to be on a page can be checked by opening a saved copy of the page.
To correct the description in the snippet, you can fix it on the page and wait for reindexing.
Unfortunately, there is no answer to the question of what the ideal snippet should look like. But remember that a snippet is an entity directly related to the query.
A bad example is a snippet completely unrelated to the query. The user obviously already knows what Sportmaster is; he wants to know whether the store has a branch at the metro station he needs. Such a snippet does not solve the user's problem at all, because it is entirely unclear whether the required information is on the page. This is all the more frustrating because the snippet leads to an excellent landing page for the desired store in Prague, with the exact address, a map, opening hours and a contact phone number. The problem is that the page carries the standard title and description that are generated automatically for all pages of this site.
Template titles and descriptions across all pages, descriptions that merely repeat what is written in the title, and the habit of leaving them blank altogether: these are the main reasons for "bad", idle snippets in the search results.
How can you affect the favicon? Create a 16 × 16 pixel image (the preferred format is ICO, but GIF, JPEG, PNG and BMP also work), place the favicon.ico file in the root directory of the site, and reference it with a link in the <head> element.
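The steps above can be sketched as a quick check: the favicon is expected at the site root, and the `<head>` link that references it is a plain string. The site URL here is hypothetical.

```python
# Build the root-level favicon URL for a page and the <head> link markup.
from urllib.parse import urljoin

site_page = "https://www.example.com/some/page.html"   # hypothetical page
favicon_url = urljoin(site_page, "/favicon.ico")        # resolves to site root
link_tag = f'<link rel="shortcut icon" href="{favicon_url}">'
print(favicon_url)   # https://www.example.com/favicon.ico
print(link_tag)
```

Placing the file at `/favicon.ico` means even clients that ignore the `<link>` element will still find it by convention.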
The letter case of the site name can also be influenced. If the site name consists of several words, its readability can and should be improved with appropriate capitalization. This is done in Yandex.Webmaster, in the "Register of the site name" section, by specifying the new spelling there. It is also important for the user to see exactly which section of the site he will land in by clicking the result. Yandex forms these navigation chains automatically. What influences their formation?
First and foremost, a clear site structure. It is therefore very important for site owners to make sure the site has a good menu, good directories, and logically arranged paths to the chosen product or section.
How can you influence the quick links in a site's snippet? Unfortunately, you cannot. Quick links are formed automatically by Yandex from data collected on the site, signaling which sections are most interesting to users. It is impossible to initiate or speed up the appearance of quick links in your site's snippets.
For sites that already have quick links, you can choose which ones to show and which to hide through the Webmaster interface.
The organization address shown in the snippet is taken from the Directory, and this process is also fully automated.
Everything described above is what a webmaster or site owner can influence right now and on their own. The remaining possibilities involve micro-markup: passing certain structured data to the search. Yandex has affiliate programs that allow sites to transmit such data, including:
goods and prices,
recipes,
ratings and reviews,
and others.
Affiliate programs fall into two categories: those related to the presentation of the site directly in the search, and those related to its presentation in other, parallel Yandex searches, for example the Yandex.Market search.
In conclusion, the presentation of a site in the search today is impossible without considering mobile search. Good adaptation of a site to mobile formats is taken into account by the search in ranking, so it is recommended to prepare the site for mobile devices in various ways, including Turbo pages.
Yandex: you can get rid of Baden-Baden only by completely getting rid of SEO texts
Since the introduction of the Baden-Baden pessimization algorithm, several thousand sites have received a violation notice in Webmaster.
As the Yandex team reports, the sites that got rid of SEO-texts were freed from the ranking restrictions. The total search traffic of these sites has recovered to its previous level.
Sites that completely removed SEO-texts are quickly restoring their positions without harming themselves. The graph of total search traffic also shows that after the removal of SEO-texts there is no dip in either Yandex or Google.
The Yandex team insists that to clear the violation connected with the use of SEO-texts, and the restrictions that come with it, you must remove SEO-texts from the site completely:
"We see that many webmasters who receive a violation report begin to change the content of pages on their sites: they reduce the number of keywords, use other semantic constructions, experiment with markup tags. However, the essence of the texts remains the same: they are written not for people but for search robots. So the violation remains."
As a reminder, the Baden-Baden algorithm, which detects unnatural texts created to influence search results with the help of key phrases, was introduced in March 2017. From then on, its output was used in ranking, and from April it was used to flag the violation associated with the use of SEO-texts on a website.
When something breaks in Yandex search results: lifehacks
Lately, many features that had worked reliably for years and had become favorites of SEO specialists have started disappearing from Yandex search results. They disappear for various reasons, for example as a result of the systematic trimming of the query language and other tools for exploring the search results. But sometimes the cause of the obviously incorrect behavior of one or another mechanism is unclear, and everything looks like a banal bug or glitch. In today's article I want to look at several such examples and offer alternative ways of obtaining important analytical information.
Lifehack 1. Learning a document's age without the GET parameter how=tm
One of the most common tasks is establishing the age of a page in Yandex. Previously, it was solved by adding the GET parameter how=tm to the URL of the Yandex search page, which sorts the results by document time. In the snippets of results formed this way, a date was shown for each document, which was taken as the document's age from Yandex's point of view. For some time now, the date is no longer shown when this sorting is used (with the exception of the standard "freshness" mark on documents from the quick-robot mix), although the sorting itself still seems to work correctly. However, there is an alternative way to learn a document's age as Yandex sees it: use the Yandex.XML service and retrieve the page's <modtime> value, in the format YYYYMMDDThhmmss (ISO 8601:2004).
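Extracting that value can be sketched as follows. The XML fragment below is a made-up minimal example; a real Yandex.XML response wraps the `<doc>` element in many more layers, but the `<modtime>` format is the one described above.

```python
# Parse a (simplified, hypothetical) Yandex.XML response fragment and
# convert the <modtime> value from ISO 8601 basic format to a datetime.
import xml.etree.ElementTree as ET
from datetime import datetime

response = """
<response>
  <doc>
    <url>https://example.com/page.html</url>
    <modtime>20170615T093012</modtime>
  </doc>
</response>
"""

root = ET.fromstring(response)
raw = root.findtext(".//modtime")
modified = datetime.strptime(raw, "%Y%m%dT%H%M%S")  # YYYYMMDDThhmmss
print(modified.isoformat())   # 2017-06-15T09:30:12
```

The parsed datetime can then be compared with the current date to estimate the document's age in Yandex's eyes.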
Lifehack 2. Getting a saved textual copy of the document
For at least a month there has been another oddity in the Yandex results. From the results page, the "Saved copy" link takes you to the full version of the saved copy, which in turn carries a link to a text copy of the saved file. However, recently, clicking "View text copy" does not open any text version: we stay on the full version. That is, the full version of the saved file links to itself. It is still possible to get a text copy: add the parameter &cht=1 to the URL of the full version of the saved copy. Which raises the question: is it a bug or a feature? Was the link to the saved text simply lost, or deliberately hidden?
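The workaround above can be sketched as a small URL manipulation. Only the cht=1 parameter comes from the article; the saved-copy URL below is a hypothetical example, and a real one carries more parameters.

```python
# Append cht=1 to the URL of the full saved copy to request the
# text-only version, preserving any existing query parameters.
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def add_cht(url):
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["cht"] = "1"
    return urlunparse(parts._replace(query=urlencode(query)))

saved_copy = "https://hghltd.yandex.net/yandbtm?url=https%3A%2F%2Fexample.com"
print(add_cht(saved_copy))
```

Rebuilding the query string this way is safer than naive string concatenation: it keeps existing parameters intact and avoids producing a duplicate cht parameter.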
Lifehack 3. An alternative to the site: operator
In early April of this year, many SEO specialists noticed the incorrect behavior of Yandex's documented site: operator (it was actively discussed on the Searchengines.guru forum, for example). The number of documents found with the site: operator was clearly wrong, much lower than the real value, and documents from other sites could even appear. And although the site: operator was fixed just a few days ago, it is worth recalling the alternative way of obtaining all documents of a site, including all its subdomains: the rhost: operator, which is also still documented. During the period when the site: operator misbehaved, the necessary data could be obtained with this operator.
Lifehack 4. Bringing back the personal search settings
Not long ago, Yandex users noticed that in some cases the ability to manage personal search settings disappears from the "Search Results Settings" mode (this problem was also discussed on the Searchengines.guru forums). It turns out that the personal search options disappeared only for authorized users. To manage them, you need to log out (or, for example, switch to a private browsing mode). However, Yandex claims that personal search works only for authorized users; in theory, if the user is not authorized, the checkmarks in the personal search settings should not affect anything. But it never hurts to be safe.
Yandex introduced the unmanned vehicle project
Yandex has posted on YouTube a video demonstrating a prototype of its unmanned vehicle. As noted in the company's blog, the prototype was developed as part of the Yandex.Taxi project on the basis of a Toyota Prius. The prototype will be used to test software for unmanned vehicles. The car is equipped with a powerful computer built around an Nvidia GTX GPU, as well as Yandex's own developments in navigation, geolocation and machine learning, including real-time navigation display and a computer vision and object recognition system.
Dmitry Polishchuk, Yandex.Taxi Self-driving Project Manager:
"Unmanned vehicles will grow into a revolutionary mode of transportation that we will all encounter in the coming decades. At the moment, dozens of companies around the world are creating their own unmanned vehicles, but only a few have the key components needed to turn the project into reality. These components include a set of reliable technologies and algorithms, technical knowledge and resources, and access to the market for self-driving vehicles. Yandex.Taxi, with the support of Yandex, is one of the few players that has all of the above."
The ultimate goal of the project is a "level five" autopilot that requires no human participation in driving. Yandex.Taxi hopes to begin testing the prototype on public roads next year. The company plans not only to use the technology itself but also to sell it to automakers.
Yandex launches Yandex.Zen for content creators
Yandex announced the launch of Yandex.Zen for content creators who want to be published in the service's recommendation feed. Daniel Trabun, the project's media director, announced this during the YaC conference.
The platform for publishers, brands and authors lets anyone create their own channel or publication in Yandex.Zen and publish their materials there. Zen is now not only a distribution platform but also a platform for creating content.
The platform will allow publications to monetize their content and receive referral traffic to their sites. To measure these visits, a tracking tag was added this morning to all links from Yandex.Zen, which makes it possible to estimate the volume of traffic. In terms of website referral volume, Zen has beaten all social networks.
From today, anyone can register in Yandex.Zen, open their own channel and start creating their content with the help of a special editor.
Publishers will be able to monetize their content by connecting direct sales and advertising networks to their channel in Yandex.Zen. Brands that describe their projects in an interesting way will be able to post free advertising on Zen. Independent authors will be able to attach advertisements to their publications and earn money per view. In addition, to encourage authors, Yandex will reward the best of them with monthly grants. During 2017, Yandex wants to distribute $1 million to independent authors.
Specifically for mobile storytelling, the Zen developers created a material format called "Narrative". It consists of several screens with different content, which the user can swipe through at his own discretion and pace.
This format will be available to Zen users this summer.
Promoting sites in Yandex regional search results in 2017
Do you need links when promoting in the regions? How do you promote subdomains? Which strategy suits an online store? How do you submit information to Yandex.Register? This article answers the main questions about promotion in the regions.
Author: Evgeny Shestakov, head of promotion at Rush Agency and a regular participant of the Baltic Digital Days conference, explains how to carry out regional promotion in practice: examples, step-by-step instructions, justification, everything you need for your work.
Classification of queries in Yandex: geo-dependent (GD) and geo-independent (GI).
Ranking factors in the regions: which groups of factors work well and which do not.
Strategies for promotion in regional search results: what to choose, subdomains or folders?
Why Yandex?
Unlike Google, Yandex attaches enormous importance to the geo-assignment of the domain and of individual documents. Without a correctly assigned region, you should not count on ranking for competitive geo-dependent queries in your chosen region.
Russia is one of the largest countries in the world and has many regions and subregions (from the search engine's point of view), and ranking and competition in them can differ drastically.
Geo-dependent requests
These are queries whose search results differ depending on the region from which the query is entered.
Examples:
buy a refrigerator,
order pizza,
taxi,
children's orthopedic bags for girls.
The results for [buy a refrigerator] entered from Moscow differ from the results for the same query entered from Vladivostok. In the second case, regional subdomains created by large players specifically for Vladivostok are clearly visible.
Geo-independent requests
It should be noted right away that the vast majority of informational and navigational queries are geo-independent. Here we will consider only transactional (commercial) queries with a toponym (a city or other geo-reference directly in the query).
Examples:
buy a refrigerator in Moscow,
order a pizza in Vladivostok,
taxi Kursk.
The results for [buy a refrigerator in Moscow] entered from Moscow are very similar to the results for the same query entered from Vladivostok. The order within the top 10 is shuffled, but the same sites appear. You can verify queries for geo-dependence yourself by comparing the results across different regions.
Oddities and "glitches" in the definition of geo-dependence in Yandex
Obviously, not everything works as designed. In particular, there are many "glitches" in determining the geo-dependence of queries:
A commercial query without a toponym, which by common sense should be geo-dependent, can turn out to be geo-independent.
A query with an exact indication of a city, or another geo-reference right in the query, may unexpectedly turn out to be geo-dependent.
Queries containing, for example, "online" or other words indicating that the region where the query is entered and the region where the service is provided do not matter also behave oddly.
Example: "online loans". The results are filled with aggregators of microfinance organizations and comparison sites. If your semantic kernel has many queries with such markers, I strongly recommend checking them for geo-dependence in order to choose the right strategy for regional promotion.
How to minimize errors when checking queries for geo-dependence
Avoid Yandex personalization as much as possible: use a browser with a clean cache and cookies, in incognito mode.
Check the results for the query in at least three regions: the more regions you compare, the greater the accuracy and the lower the chance of misclassifying the query type.
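The check above can be sketched as a comparison of the top results collected for the same query in several regions: if the regional top-10s barely overlap, the query looks geo-dependent. The result sets and the 0.5 overlap threshold below are invented for illustration; in practice the results would be collected manually or via Yandex.XML with the appropriate region setting.

```python
# Classify a query as geo-dependent by comparing regional top results.
def jaccard(a, b):
    """Overlap of two result sets, from 0.0 (disjoint) to 1.0 (identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def looks_geo_dependent(serps_by_region, threshold=0.5):
    """Flag the query if any pair of regional tops overlaps below threshold."""
    regions = list(serps_by_region)
    for i in range(len(regions)):
        for j in range(i + 1, len(regions)):
            if jaccard(serps_by_region[regions[i]],
                       serps_by_region[regions[j]]) < threshold:
                return True
    return False

serps = {  # hypothetical top results per region
    "Moscow":      ["a.ru", "b.ru", "c.ru", "d.ru"],
    "Vladivostok": ["vl.b.ru", "x.ru", "c.ru", "y.ru"],
    "Kazan":       ["a.ru", "b.ru", "c.ru", "z.ru"],
}
print(looks_geo_dependent(serps))   # True
```

Using three or more regions, as recommended above, makes a single anomalous SERP less likely to flip the verdict.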
Yandex Ranking Factors in the Regions
Text factors are the main signal for the search in regions.
Well-optimized Title tags: you need maximum coverage of the cluster's most frequent queries; also, do not forget synonyms and re-wordings of queries.
A relevant, non-spammy H1 heading. One main cluster key in the H1 is enough. There is no need to spam it or add commercial words like "buy", "price" or "inexpensive". Online stores should pay special attention to the H1 on product cards: keep only the meaningful part of the product name, without unnecessary specifications and article numbers.
Optimized text. Since competition in the regions (even in million-plus cities) is much lower than in Moscow, the requirements for text optimization are much lower as well. A text containing the main query and the words of the cluster's frequent queries is enough; do not spam! Online stores should also first consider whether text on a category page is advisable at all: in some categories text is contraindicated and can bring down your positions. To find out whether to write or not, check the top results and look at competitors, or use the available text analyzers (Just Magic or Rush Analytics).
Tables and lists, for resources dedicated to services. In some topics these layout elements are standard, and ranking without them is almost impossible. Examples: the usual price tables in the "concrete" topic and the list of works with prices in the "apartment renovation" topic.
The presence of queries in the links to product cards in online-store listings.
Having worked on these factors, you can generally already count on good ranking in regional search results.
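The simplest of the text checks above, Title coverage of the cluster's main query, can be sketched as follows. Real checks would also account for morphology and synonyms; the title and queries here are invented.

```python
# Check whether every word of a cluster's main query appears in the Title.
import string

def words(text):
    """Lowercased words with surrounding punctuation stripped."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def title_covers_query(title, query):
    return words(query) <= words(title)

title = "Buy a refrigerator in Kazan: prices, delivery"
print(title_covers_query(title, "buy refrigerator"))   # True
print(title_covers_query(title, "buy washing machine"))  # False
```

Running such a check over the whole semantic kernel quickly surfaces pages whose Title misses the cluster's main query entirely.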
Correct site structure and distribution of requests
In the regions it is especially important to allocate queries to pages correctly from the start and to build an SEO-oriented site structure. Unlike Moscow, this can be done once and for all.
Main aspects:
Look at the search results, separate commercial queries from informational ones, and optimize documents accordingly. For example, the queries "mascara" and "buy mascara" should be promoted on different pages, since the first is informational and the second is commercial.
By correctly combining queries into one cluster and concentrating maximum textual relevance for each group of queries on a separate page, you can reach the top as soon as the new document is indexed. Tag pages and demand-based slices practically guarantee top positions in the regions, especially for online stores.
It should be understood that, in addition to the subdomains of large, well-optimized players and aggregators, regional tops also contain many small regional players with weak optimization. By making separate pages for particular demand slices, you can easily spot these regional players in the results for the queries you are interested in.
Do you need links in the regions?
Representatives of regional projects very often ask me: "What about links? Where to buy them, and how many?" Before answering, let me remind you that here we are talking exclusively about Yandex.
Even in million-plus cities, competition for fairly frequent queries is quite low. Documents in the top may have no links at all. Why buy links if others do not? The answer is obvious: there is no need.
Most regional tops can be conquered with proper internal optimization and correct allocation of queries. So why touch link factors if textual relevance is enough? There is no reason; anchor links are not needed at all.
Bear in mind that the Minusinsk algorithm, having caught a regional subdomain pumped up with SEO links, will immediately punish the main domain as well. Ready to take the risk? Then go ahead and buy them.
What about the behavioral factors in the regions?
This group of factors begins to make a significant contribution to ranking only when the search engine has enough data about user behavior. Put simply, queries whose ranking is heavily influenced by behavioral factors must have a solid frequency: hundreds or thousands of exact-match ("quoted") impressions.
Most requests in the regions have such a small frequency that the influence of behavioral factors on them is simply invisible.
Mass "spam" capturing of regions through subdomains or false addresses will not bring anything good. If you could still assign a lot of regions to your project, where you cannot yet serve customers, or your conditions are not competitive (there is no courier delivery, only mail) - your site will eventually drop in the issuance. Unlike document and query-document factors, for behavioral ones (that are calculated for the entire site (hosted)), the search has enough data to understand that your site does not provide the right services to customers, and they have a negative experience. Think about your customers, not just about traffic.
Based on point 1, artificially inflating behavioral factors in the regions is pointless. Moreover, any attempt to manipulate behavioral factors with unnatural methods entails very strict sanctions from the search engines.
Commercial factors in the regions
For several years, the influence of commercial factors has been growing in Moscow search results, while in the regions the situation is the opposite.
The influence of most commercial factors in regional search results has been extremely low. Even now, for many frequent and competitive queries you can see sites of extremely poor quality, but with good, and sometimes spammy, text optimization.
Commercial factors are definitely worth implementing in the regions, though not for the search engine but for users and conversion.
Another argument in favor of commercial factors is acting "in advance": if the influence of this group of factors in the regions increases, you will be ready for it, and your competitors will not.
Strategies for promotion in Yandex regional search results
What is important and what is worth paying special attention to:
Maximum attention to geo-binding! No binding to the desired region means no ranking. However beautiful your structure and internal optimization, you will never rank for geo-dependent queries in the right region if the site is not bound to it.
Do not bind regions in the form "City + Area" or "Region". In the first case you will not rank well for geo-dependent queries either in the city or in the region. In the second case you will rank only in the region, but not in the city, and most traffic is concentrated in cities. To rank well both in the city and in the region, always bind the regional center. For example, to rank both in Ekaterinburg and in the Sverdlovsk region, you need to assign the site the "Ekaterinburg" region.
You need to correctly choose an exit strategy in the regions - make subdomains or folders. We will discuss this in more detail below.
It is necessary to perform internal optimization for GEO queries correctly, so that pages rank well without being over-optimized.
The strategy of using Yandex.Catalogue WITHOUT SUBDOMAINS
The strategy is meant for projects promoted in no more than 7 regions, and it suits commercial and service sites better than online stores.
Pros:
Quickly link regions.
Ease of implementation - you do not need to create subdomains.
Ease of promoting geo-dependent queries.
Cons and limitations:
Very strong restrictions on promoting GI queries, since listing toponyms in meta tags and other areas of the document is spam. Cloning the structure becomes inevitable.
For online stores, the main structure can be promoted for GI queries in only one region. To promote GI queries in other regions, you have to completely clone the structure into folders, a technique that causes problems with both the indexing of the cloned structure and its ranking.
To assign regions through the Yandex.Catalogue, it is necessary to provide Yandex.Catalogue moderators with documents on renting premises in each of the assigned regions, which also entails certain restrictions.
Recently, the number of regional subdomains in the search results has grown drastically. Large players and aggregators are actively and successfully expanding into the regions using the subdomain strategy.
Subtlety of strategy implementation:
In the Yandex.Catalogue we bind the 7 priority regions and get the "Russia" region as an eighth, bonus one. The "Russia" region is not a strict binding; it is a "technical" region, and you should not spend a useful "slot" on it.
It is best to register a site in the Yandex.Catalogue as an individual. When considering an application to add regions, moderators require that the accompanying documents on leasing premises in the regions be issued to the same legal entity that submitted the application to add the site to the Yandex.Catalogue. In many cases this can be quite inconvenient.
If you order rental documents on "popular" forums, ask to make color copies right away! Moderators can easily reject black and white copies of documents!
Yandex.Catalogue - THE KING! It is necessary in each of the assigned regions to add a separate organization to Yandex.Catalogue.
Do not try to fool the Yandex moderators. Be sure to warn your call-center employees that they will get a call from Yandex.Research and that they need to confirm that you really have an office in this region, at the exact address specified in the Catalogue. If you do not do this and the moderators figure everything out, you can get banned from the Yandex.Catalogue for good and will have to make subdomains after all.
For each region, the site must have a separate contacts page with relevant meta tags, an H1 heading and a map embedded via the Yandex Maps API.
For each region, you need to buy a local phone number and list it on the corresponding contacts page. A single 8-800 number will be an additional advantage.
Be sure to fill out all the fields in the Yandex.Catalogue, including the local number, for each organization!
Promoting GI requests when using a strategy WITHOUT SUBDOMAINS
The GI queries of the main region are always promoted through the main catalog: occurrences of the region name (the main region only) in the Title and in the text.
The GI queries of the remaining regions are promoted through a separate cluster of pages (effective for commercial sites): the name of the region in the Title, in the folder URL, in the H1 and in the texts.
You can use a good template text with toponym substitution in the text and other areas of the document. If you use a template text, all occurrences of the queries, and the text itself, must be grammatically consistent, without "crooked" occurrences or an incorrect word order.
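A minimal sketch of such toponym substitution (all page texts and city data here are made up for illustration). To avoid the "crooked" occurrences mentioned above, each toponym is stored together with the grammatical forms the template needs (Russian declensions differ by case), rather than blindly pasting the nominative everywhere:

```python
from string import Template

# Template for a region page; placeholders name the grammatical form needed.
PAGE_TEMPLATE = Template(
    "Flower delivery in $city_prepositional | Example-Site\n"
    "Order flower delivery in $city_prepositional: "
    "our couriers work across $city_nominative every day."
)

# Per-city dictionary of case forms (English text has no case change,
# but the Russian original would).
CITIES = {
    "ekaterinburg": {
        "city_nominative": "Ekaterinburg",
        "city_prepositional": "Ekaterinburg",
    },
}

pages = {slug: PAGE_TEMPLATE.substitute(forms) for slug, forms in CITIES.items()}
```

Adding a region is then a matter of adding one dictionary entry with correctly declined forms, instead of hand-editing every page.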
Strategy using SUBDOMAINS
Suitable for projects that are being promoted in more than 7 regions. Ideal for online stores, hypermarkets, aggregators.
Pros:
It is easy enough to bind regions for subdomains.
Unlimited opportunities for promoting both GD and GI requests together.
Easily scales to an unlimited number of regions.
Lately there are more and more regional subdomains in the top, which makes this strategy even more attractive.
Cons:
Labor-intensive and expensive development and support of infrastructure. On large projects, a person in the company is needed to maintain and improve the infrastructure.
It is difficult to track bugs and "breakdowns" in optimization on a large number of subdomains. Scripts break, people make mistakes. Errors are inevitable.
Often the "breakdown" on one subdomain extends to all others, which can lead to catastrophic consequences for organic traffic.
Subtlety of strategy implementation:
The main domain is always bound to the main (starting) region. In very rare cases it makes sense to rebind the main domain to a more competitive region; such a change can lead to a serious loss of traffic and involves many nuances.
The remaining regions are bound according to the principle "1 subdomain = 1 region". Regions are bound simply through Yandex.Webmaster.
For each subdomain, an organization is bound in the Yandex.Catalogue. IMPORTANT: the organization is bound to the subdomain itself!
Each subdomain creates its own page of contacts with regional telephones, addresses and Yandex Maps. From the contacts page of the main domain, links to the subdomain contact pages are placed.
For each regional subdomain, you need a local phone on the contacts page and in the header / footer of the site. An 8-800 number will be an advantage.
Region name = subdomain name. Do not spam by creating fourth-level subdomains, let alone stuffing keywords into them, for example buy-fridges.ekaterinburg.site.ru. This gives no ranking bonus; it is spam that, most likely, will not rank at all.
Advancement of GI requests with a strategy using SUBDOMAINS
For GI queries, the main catalog of the subdomain is promoted, together with the GD queries of the given region.
Occurrences for GI queries are required in all areas of the document: Title, Description, H1, and the text or other content.
In fact, the promotion of a subdomain is the promotion of a regular site in one region. Simple? - Yes, it is. 
Our cases of website promotion in the regions
In general, we bring large Moscow players to the regions, but there are cases with very small sites that wanted to expand their geography.
Promotion of an online clothing store WITHOUT USING SUBDOMAINS in five regions
Traffic growth of 200-300%
Promotion of a site of services WITHOUT USING SUBDOMAINS in seven regions
Growth of traffic - 400%!
Promotion of a hypermarket in 20 regions USING SUBDOMAINS
Traffic growth - 440%
Promotion of a hypermarket in 30+ regions USING SUBDOMAINS
Growth of traffic - 500%
Conclusion
For many businesses, expanding into the regions is a great growth point. Moreover, unlike promotion from scratch, it is not such a difficult task if everything is done correctly. Over the years, competition in the regions will only grow: local players are starting to invest in optimization, aggregators are actively creating subdomains, and new aggregators keep springing up like mushrooms after rain. So I recommend entering the regions right now and taking as much traffic as possible while it is still easy and not very expensive.
0 notes
russianseo · 7 years
Text
Yandex released an anti-reoptimized texts filter
Tumblr media
The Yandex.Search team released a statement saying that it has reworked and significantly improved the algorithm that detects over-optimized website pages. The algorithm is part of the overall ranking formula, and its new version may noticeably lower the positions of over-optimized pages in the search results.
“We advise webmasters to review all their website pages and get rid of the senseless and merciless texts in order to avoid painful consequences. For obvious reasons, we decided to name our new text-spam detection algorithm Baden-Baden,” states the official Yandex webmaster blog.
Since the launch of Minusinsk, Yandex had made no serious attempts to “surprise” webmasters with new ways of demoting pages in the search results. Now, besides exile to Minusinsk, plenty of web pages will get the chance to relax in Baden-Baden.
Let us remind you that Evgeniy Tarasov warned about the possible release of such a filter in his article a year ago:
“And here, it appears, the medal is about to turn into a pyramid, and its third side will be a new text filter. A kind of textual Minusinsk, or perhaps it will be named Baku, in honor of the city where the “Iskra” newspaper was secretly printed: it does not matter. What matters is that it will hit 2-dollar copywriting, the websites made for Miralinks, and everyone preoccupied with filling web pages with useless content for the sake of expanding semantics. Such a filter will cut down all the content projects that offer no real value, and its release is imminent.”
Well, Evgeniy could not guess the name of the filter. So let us welcome Baden-Baden, ladies and gentlemen!
0 notes
russianseo · 7 years
Text
Machine learning via Yandex search or how Matrixnet is organized
Tumblr media
A user comes to the search engine and submits a query, and the search engine’s task is to put the most relevant documents for that query at the top. The candidate documents are very numerous: there are billions of them in the index, and even after all the filtering, millions remain. All of those millions have to be put in the right order. Machine learning helps build the ranking formula, namely Matrixnet, Yandex’s own gradient boosting algorithm.
Matrixnet is gradient boosting on decision trees that supports all the main modes: classification, multi-class classification, regression, ranking and so on, as well as more complex combinations of these. Our department develops new modes for the needs of other departments, and internal Yandex users can add their own as well.
Matrixnet also handles missing values: if the value of some factor is not set, it is not a problem. Besides that, Matrixnet training can run on a cluster: it is a distributed algorithm. This is important because the training sets used in search are too large to fit into the RAM of a single server, so distributed training is essential.
Applying Matrixnet within Yandex
Matrixnet is used all over Yandex. First of all, in search: Matrixnet was originally written for search. Secondly, in advertising, to show users the ads most interesting to them by predicting ad clicks. Thirdly, the Yandex weather forecast is built on a Matrixnet formula. The algorithm is also used in external Yandex projects such as YDF, in the Yandex.Dzen recommendation system, for bot detection, homonymy resolution, user segmentation and so on.
Matrixnet peculiarities 
Several gradient boosting implementations are now publicly available, so let me explain how Matrixnet differs. An important peculiarity is that it barely requires any parameter tuning. Why? When Matrixnet was written, it was tuned against many different training sets (pools) so that it delivers good quality on all of them, which is why new data sets get good quality out of the box as well. Matrixnet is easy to use not only because it barely requires parameter selection, but also because Yandex has infrastructure that lets you launch training literally in one click (more on that below). In regression mode, Matrixnet beats the other gradient boosting algorithms on decision trees in quality.
Matrixnet training is also heavily optimized. This matters for all Yandex tasks, but above all for search: although the training sets are large, we cannot let a formula train for a month, or the overall quality would suffer in the end. So various optimizations are applied: algorithmic ones, low-level ones, and ones that reduce the load on the network. Applying a trained Matrixnet formula is also highly optimized: in one second, a single thread can apply the formula to 100,000 documents.
Gradient boosting on decision trees
A decision tree is a data structure: a binary tree in which every non-leaf node holds a split by some factor and a threshold, and the leaves hold numbers. Such a tree is applied to a document by descending from the root and, at every node, comparing the document’s factor value against the node’s threshold.
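A minimal sketch of that descent (illustrative only; the factor indices, thresholds and leaf values below are made up, not taken from any real formula):

```python
# Walking a binary decision tree for a document's feature vector.
# Internal nodes split on "feature <= threshold"; leaves hold scores.

class Node:
    def __init__(self, feature=None, threshold=None,
                 left=None, right=None, value=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right, self.value = left, right, value

def predict(node, doc):
    """doc is a list of factor values, e.g. [text_relevance, link_score, ...]."""
    while node.value is None:          # descend until a leaf is reached
        node = node.left if doc[node.feature] <= node.threshold else node.right
    return node.value

# A toy depth-2 tree.
tree = Node(feature=0, threshold=0.5,
            left=Node(value=0.1),
            right=Node(feature=1, threshold=2.0,
                       left=Node(value=0.6), right=Node(value=0.9)))
```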
Gradient boosting is a sum of simple models (decision trees in this case), each of which improves on the result of the combination built so far.
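The boosting idea itself can be sketched as follows, with one-feature decision “stumps” standing in for real trees (a toy illustration of the principle, not Matrixnet’s actual training code):

```python
# Each stump is fitted to the residual error of the ensemble so far;
# the final prediction is the sum of all (learning-rate-scaled) stumps.

def fit_stump(xs, residuals):
    """Pick the threshold on xs that best reduces squared residual error."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x, t=t, lm=lm, rm=rm: lm if x <= t else rm

def boost(xs, ys, n_trees=20, lr=0.5):
    ensemble, preds = [], [0.0] * len(xs)
    for _ in range(n_trees):
        residuals = [y - p for y, p in zip(ys, preds)]   # what is still wrong
        stump = fit_stump(xs, residuals)                 # fix a bit of it
        ensemble.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in ensemble)

model = boost([0, 1, 2, 3], [0.0, 0.0, 1.0, 1.0])
```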
Matrixnet uses not arbitrary decision trees but so-called “oblivious decision trees”, in which every layer splits on a single feature and a single threshold. Building trees this way has several notable properties:
very simple models that are resistant to overfitting
the space is partitioned by hyperplanes, meaning that to compute a document’s leaf you evaluate all the splits, and the order in which you evaluate them does not matter
regularization: you have to guard against leaves that contain almost no objects, so various regularizations are introduced to penalize such situations
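Because every level of an oblivious tree uses one (feature, threshold) pair, a document’s leaf index is just a string of comparison bits, which is exactly why the evaluation order does not matter. A hedged sketch (splits and leaf values are invented):

```python
# An "oblivious" tree of depth d: one (feature, threshold) pair per level.
# A document maps to a leaf by concatenating d comparison bits; the order
# of the comparisons only decides which bit position each one occupies.

def oblivious_predict(splits, leaves, doc):
    """splits: [(feature_index, threshold)] per level; leaves: 2**depth values."""
    index = 0
    for level, (f, t) in enumerate(splits):
        bit = 1 if doc[f] > t else 0
        index |= bit << level
    return leaves[index]

splits = [(0, 0.5), (1, 2.0)]      # depth-2 tree: 4 leaves
leaves = [0.1, 0.4, 0.6, 0.9]      # indexed by the bit pattern
```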
Cluster learning
There are two ways to parallelize gradient boosting on decision trees across several servers:
by features
by documents
If we parallelize training by features (different features stored on different servers), the amount of information that has to be sent over the network is proportional to the number of documents. Since the number of documents is huge and keeps growing, we cannot afford this, so we parallelize training by documents.
The bottleneck in training gradient boosting on decision trees is selecting the structure of the next tree, that is, the set of features it will be built from. This selection can be organized in two ways:
master-slave mode: there is one leading node and a set of slaves, each of which computes statistics for some of the features and sends them to the master, which aggregates them and selects the best feature
all-reduce mode: there is no dedicated master; every node computes all the statistics and aggregates them itself
Each approach has serious downsides. In master-slave mode the master becomes the network bottleneck; in all-reduce mode too much traffic accumulates, because every node has to receive a lot of information. XGBoost, for example, works in all-reduce mode, so it does not parallelize as well. Matrixnet solves both problems with the following scheme: when building the next tree, each feature is assigned a random node, which is declared that feature’s “virtual master”, and all the other nodes communicate with it. The virtual master aggregates the necessary information, computes the statistics for its feature and delivers the result to the master.
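The statistics exchange behind split selection can be sketched like this (a toy stand-in: a plain in-memory sum over shards replaces the real network exchange through per-feature virtual masters; all data is invented):

```python
# Each "slave" holds a shard of the documents and computes, per candidate
# split, (sum of gradients, count) for the left side on its shard; an
# aggregator sums the per-shard statistics, after which splits can be scored.

def shard_stats(shard, candidate_splits):
    """shard: list of (features, gradient); per-split (grad_sum, n) for x <= t."""
    stats = []
    for f, t in candidate_splits:
        left = [g for x, g in shard if x[f] <= t]
        stats.append((sum(left), len(left)))
    return stats

def aggregate(all_stats):
    """Element-wise sum of (grad_sum, count) across shards, per split."""
    return [tuple(map(sum, zip(*per_split))) for per_split in zip(*all_stats)]

shards = [
    [([0.2], 1.0), ([0.9], -1.0)],   # documents on slave 1
    [([0.4], 1.0), ([0.7], -1.0)],   # documents on slave 2
]
splits = [(0, 0.5), (0, 0.8)]        # candidate (feature, threshold) pairs
totals = aggregate([shard_stats(s, splits) for s in shards])
```

The key point is that only the small per-split statistics cross the network, never the documents themselves.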
We also strive to minimize traffic in several ways. For example, when choosing the best split, we select a number of candidates on every slave in order to identify the better features, and send the virtual masters information about only a few features: not everything we have, only the top candidates.
Matrixnet in ranking
The graph of how the size of the ranking formula changed over time: the number of iterations is the number of trees in the model, and kilobytes measure the size of the model.
As you can see, to keep up with this growth you have to constantly speed up both the training and the application of the model. So how is machine learning used in search? First, a training set is collected, consisting of (query, document) pairs. Assessors rate every such pair: how well the document matches the query. Besides that, each line (query, document, judgment) carries features: query features, document features and query-document features. If a feature belongs to the query, it is simply duplicated for all of that query’s documents.
The model is then trained on the collected training set. Training modes used in Yandex search:
Regression (point-wise mode): Great = 1, Good = 0.8, Bad = 0, then minimize MSE.
Pairwise mode: a set of document pairs with different judgments is generated, and the formula optimizes correct ranking within each pair.

Optimizing the nDCG ranking function (it is not smooth, so you cannot take a step along the gradient).
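For reference, here is a sketch of the nDCG computation itself (using the common 2^rel gain and log-position discount; the exact variant Yandex uses is not specified in the text). Swapping two documents changes the value in discrete jumps, which is why it has no usable gradient:

```python
from math import log2

def dcg(relevances):
    """Discounted cumulative gain for documents in ranked order."""
    return sum((2 ** rel - 1) / log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    """DCG normalized by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```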
Matrixnet tasks
Automatic feature generation, smart feature selection
Faster training (CPU, GPU)
Optimizations for sparse data
Training on a cluster with unevenly distributed resources
New regularizations and loss functions
Tools for analyzing the trained formula
Predicting training time and required resources
What does a researcher at Yandex need to do to train a formula? He has to handle a number of complex and important tasks:
1. Find the latest version of the algorithm.
2. Gather the training data in the required format.
3. Find the necessary computing resources (a cluster).
4. Launch distributed training.
Yandex has special infrastructure that handles all these tasks and makes researchers’ lives much easier: it is called Nirvana.
Nirvana principles
Nirvana is a platform for launching arbitrary processes. Its key feature is that every process in it is configured as a graph.
Take training a Matrixnet formula as an example: the user builds a graph out of blocks, links them together and launches it. Every block is an operation that consumes and produces data, and the data flows between blocks form the links. The user launches the training graph, the entire launch history is saved, and he can later open any previous graph, clone it, launch it again and get a guaranteed identical result.
Nirvana pays special attention to reproducibility: any machine learning experiment at Yandex is reproducible. Besides that, Nirvana lets you browse the experiment history of all other Yandex users, clone their experiments, change something and relaunch them. For instance, you can take the training of our production search formula, inspect the graph, clone it, tweak some parameters and get your very own search formula, possibly even better than the existing one.
Nirvana offers plenty of operations, about 10 thousand at the moment, along with various utilities and a convenient operation search. If the operation you need cannot be found, you can create your own. Nirvana supports so-called visual programming, which greatly simplifies building functions and composite operations. And, of course, Nirvana includes the machine learning algorithms most commonly used at Yandex: both Matrixnet and neural networks. Training your own neural network in Nirvana is very simple; there is nothing complicated about it and no extra expertise is required.
Nirvana is a fairly young system; its alpha version launched in 2015. But it already has plenty of users: more than 2,000 (a third of all Yandex). Every week about 500 people use Nirvana, launching around 50 thousand graphs weekly.
0 notes
russianseo · 7 years
Text
The foreign websites’ share of Ya.Metrica traffic surpassed the volume of “domestic” market
Tumblr media
By the end of 2016, the overall share of Yandex.Metrica traffic coming from foreign websites exceeded the traffic share of the “domestic” market (Russia, the CIS and Turkey combined). Yandex technical director Mikhail Parahin presented this figure during a conference call with investors on the 2016 financial results.
According to Yandex, the growing popularity of Yandex.Metrica abroad is quite natural: as a larger share of advertising budgets moves online, the need for high-quality analytics services becomes more apparent. With Yandex.Metrica, one can obtain any necessary reports on site visitor statistics, key performance indicators and visitor behavior, and analyze and compare the behavior of different audience segments. Besides that, Yandex.Metrica provides complete raw source data for solving complex analytical problems.
“Our Yandex.Metrica service is now the second-largest analytics service in the world by traffic volume. We initially developed it for our own needs, but we have turned it into a genuinely global product that now processes information from 1.5 billion cookie files, which allows us to analyze traffic from more than half a billion devices all over the planet,” the Yandex technical director states.
Let us remind you that at the end of 2016 Yandex announced the release of ClickHouse, a new open-source DBMS that can run analytical queries over continuously updated data in real time. Yandex.Metrica uses this system to store and process its metadata in real time as well; the DBMS streamlines data processing and returns results almost instantly.
The AppMetrica service (Yandex.Metrica for applications), relaunched by Yandex back in August 2015, allows app developers to track installs from any source and to analyze user behavior and the effectiveness of various advertising channels. Today AppMetrica processes traffic from about 180 million mobile devices all over the world.
1 note · View note
russianseo · 7 years
Text
What awaits the SEO industry after the release of Minusinsk
Tumblr media
After the release of Minusinsk, a new text filter will appear, and it will hit both 2-dollar copywriting and the websites made for Miralinks, along with everyone preoccupied with filling web pages with useless content for the sake of expanding semantics. If we try to analyze the current situation on the promotion market objectively, namely the results of the filters Yandex surprised us with last year (AGS and Minusinsk), it becomes apparent that something crucial is still missing from the “bigger picture”.
Let us avoid controversy over what “the bigger picture” means by agreeing that everyone has their own notion of it. For optimizers, it is the ability to earn money quickly and easily within an already established system; for Yandex, it is to break that system down and build a new one. Despite the heavy pressure Yandex has been putting on optimizers (which annihilated two thirds of the link-selling business over the last couple of years), the system has not changed in any fundamental way. The existing system has turned out to be very robust: it is woven through with horizontal and vertical relationships at every level. The SEO market is tied to the relevance of the search results and depends on their infrastructure. In many ways, commercial search results are relevant precisely thanks to the efforts of lazy optimizers, and the destruction of the link-selling industry has already led to a temporary yet quite substantial drop in the quality of commercial results.
Yes, the modern market is still far from perfect, but it is changing. The recently released filters are essentially two sides of the same medal: the new AGS punishes websites that sell SEO links, while Minusinsk punishes websites that buy them. The day when unprofessional optimizers lose control over the link factor altogether is close, but what comes next?
Well, then the medal will turn into a pyramid, and its third side will be the new text filter. A kind of Minusinsk for articles, or perhaps it will be named Baku, after the city where the “Iskra” newspaper was secretly printed; it does not really matter. What matters is that it will hit 2-dollar copywriting and all the websites made for Miralinks, as well as everyone preoccupied with filling websites with useless content for the sake of expanding semantics. The filter will cut down all the content projects that offer no real value, and its release is all but imminent.
The idea of pessimizing pages for irrelevance seems obvious as well, so why does Yandex hesitate to implement it? For the same reason that AGS was launched first and Minusinsk only afterwards: for safety, Yandex rolls out new filters one at a time. Right now Yandex is busy dealing with the fallout from the launch of Minusinsk, since the changes in the ecosystem caused by the shrinking link market led to changes in the behavior of the Yandex algorithm itself.
The chaos we are currently witnessing in the search results is easily explained by the updates to the big data feeding Matrixnet: they provoked unpredictable distortions in the normalization of the complex link-quality metrics. Those distortions were followed by distortions in the textual analysis of websites, whose relevance for anchors and related keys was closely tied to incoming and outgoing links. Because of such global changes, Yandex first had to switch on manual adjustment for the most important topics, and then an automatic mechanism as well, one that caches the normalization settings of the ranking metrics, compares them with the current changes a month or two later, catches normalization bugs and corrects them; in other words, it corrects the algorithm itself. We perceive these processes as a kind of randomization: fairly large periodic swings in traffic and jumps in positions that are practically impossible to analyze under the current conditions. Any such analysis amounts to an attempt to catch the “multi-armed bandit”.
The described processes are going to continue and via the “amplitude” of those swings it will be possible to judge the overall “health condition” of the algorithm that is trying to compensate the empowered pressure of the Minusinsk algorithm. When the stability of the search engine will stop being threatened by anything, and the overall peak of resistance of the surrounded ecosystem will pass, the time for the new filter will come – the article related brethren of Minusinsk. But not before. Minusinsk, whose mission is to eliminate the linking marketing industry, is operating quite successfully – Sapa and similar resources were damaged quite severely. When it comes to article links, it is not as straightforward, since SEO links within articles are hardly ever established by Yandex. If, for example, you are going to be using the measuring system of linking weight, it will become obvious that the weight of links with articles is generally much bigger than the weight of the usual sapalinks and the number of SEO links within the topical selection of Sapa, which was checked by the SEO links filter, in most cases reaches 90% against 10% in the articles. So it is only natural that the SEO marketing links industry is quickly moving towards the articles.
It is also apparent that the future fight against useless links will not be waged in terms of SEO link versus non-SEO link, since most links inside articles are not classified as SEO links (they fall under "poor-quality advertising", which is not conceptually subject to a ban, because the existence of advertising links is perfectly natural), but in terms of SEO article versus non-SEO article. This implies that the quality of the article itself will come under analysis, both within the website and within the limits of the topic.
It is a known fact that there are two kinds of document-quality quorums, and they determine whether a given document will be indexed, or kept in the index. If the web is already full of documents of the same quality, it will be difficult for a new one to pass the usefulness quorum against everything that already exists. But if the website hosting such a document has higher trust and the document brings in some internal traffic (meaning it passed the usefulness quorum for that particular website), that is a good reason to keep it in the index.
A certain rotation of documents in and out of the index is quite characteristic of larger websites. The best-known metric of a document's uselessness within a website is semantic similarity, i.e. duplication. Optimizers are well aware that duplicates can lead to poor ranking, but the upcoming filters will not be limited to this metric. They will pessimize a site for the number of its pages taking part in index rotation. Right now, pages that do not pass the usefulness quorum (for instance, because it is not the right season for the product described on the page) do not cause pessimization; under the new filter, even a large number of such pages arising for natural causes will lead to sanctions, since a huge product assortment is effectively off limits for everyone but the largest resellers, to whom it can legitimately be attributed.
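The duplication signal mentioned above is classically measured with word-shingle overlap; a minimal sketch (the sample page texts are invented, and this is a textbook technique, not Yandex's implementation):

```python
# Compare two pages by word-shingle overlap (Jaccard similarity):
# a standard way to detect near-duplicate content.

def shingles(text, k=3):
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

page_a = "blue widget with free delivery across the whole country"
page_b = "red widget with free delivery across the whole country"
page_c = "our company history began in a small garage in 1999"

print(jaccard(page_a, page_b))  # high: near-duplicate product cards
print(jaccard(page_a, page_c))  # low: genuinely different content
```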
And this will hold for old commercial websites as well, so I recommend getting rid of those unpopular pages or blocking them via robots.txt, letting the search engine know that they have only archival value and will not take part in ranking.
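For example, a minimal robots.txt fragment along those lines (the section paths are hypothetical):

```
# Hypothetical example: keep archival / out-of-season pages out of the index.
User-agent: Yandex
Disallow: /archive/
Disallow: /products/out-of-season/
```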
This mostly concerns e-commerce. As for the relevance of pages on websites built for Miralinks, where the usefulness quorum is not controlled for the vast majority of posted articles: right now such pages are simply thrown out of the index, but in the future the new filter will punish for them.
The new filter, much like Minusinsk, will be updated iteratively and will work along two axes: the quantity of useless pages and their quality. Remember what I wrote in my previous article about the thresholds required for AGS and Minusinsk to trigger: the new filter's thresholds will be raised with every iteration, and it will destroy bad articles first.
Prophylaxis methods:
Do not place links in articles on bad websites that nobody reads and nobody ever will;
Do not order, purchase or post such articles;
Create interesting, live satellite sites that interested people will actually read.
russianseo · 7 years
Text
Top things that are important to the site and that Yandex does not allow to resolve quickly
There are dozens of common tasks that optimizers cannot resolve quickly in Yandex.
It all comes down to how fast search engines process and refresh their data. Here Google has the advantage: it updates and crawls faster than most sites change. Yandex lags behind, so any changes should be expected no earlier than a month later. Even once a result is achieved, it is impossible to say with certainty that your changes caused it.
Things that take time are as follows:
- Indexing new content or a new site. If you want a site's pages to start appearing in Yandex search, the search engine first has to learn about them via external links from other sites; then you need to wait for crawling, and then for an update. One thing is certain: pages do not enter the index before the first update. So when preparing seasonal content, take this nuance into account and prepare the content in advance.
- Removing pages or an entire domain from the index. Deleting a page from search is no less problematic than adding one. Although Yandex provides a removal form, in most cases you still have to wait a long time after submitting it. For this reason it is best to keep subdomains, duplicates and uninformative pages out of the index in the first place.
- Re-indexing a site after its structure changed. Changing the structure means both adding new addresses and removing old ones. Since these are two separate tasks, you will have to wait twice as long. The speed of the result depends not only on visits from the quick bot, but also on how often updates run and on when the saved copies are refreshed. On large resources with deep levels of nesting, outdated pages can take months to drop out.
- Changing the primary mirror. It is best to think everything through before replacing the site's main mirror. The new webmaster panel provides a "Site move" tool, but unfortunately it does not speed up the change of the primary mirror.
- Updating the search results after migrating to HTTPS. Most experienced experts and companies advise moving sites to the HTTPS protocol. The change will take at least a month to settle, so it is best to choose the most appropriate time for it.
- Accounting for and indexing external links. Links are even slower than content. This is one of the reasons rented SEO links are not very effective in terms of either results or money. Perhaps Yandex does this deliberately, so that SEOs cannot run quick experiments or influence rankings with external links.
- Optimizing content and titles. One of the main tasks of an SEO expert is optimizing landing pages. After correcting a snippet you will have to wait about three weeks, then make adjustments and wait a month or longer. The same pattern applies to titles. Such uncertainty is a considerable downside of Yandex's delays.
- Optimizing internal linking and resource usability. The search engine must first re-index the content and take the new signals into account, and then it needs to update the data: the stored copies of documents and the link graph.
- Removing Minusinsk. The filter can only be lifted after the purchased links are removed. Besides the fact that Yandex has to see the changes on the other sites, the algorithm itself also has to update, which happens no more than twice a month. Some sites shed Minusinsk in one month, which is an excellent result, but most resources take an unreasonably long time.
- Removing other filters. Getting rid of sanctions for hidden text, behavioral cheating, etc. can last for months.
russianseo · 7 years
Text
New PHP class to work with webmaster API
Recently a PHP class implementing all Yandex.Webmaster API interfaces was released to the public. It will help those who already have experience with the old API migrate to the new one faster, or simply integrate the API functions they need into their own admin services.

You can find it at https://github.com/yandex/webmaster.api . Keep in mind that the curl and json extensions are required for it to work correctly.
Inside is the webmaster_api.class.php file, which you can use in your own code.

An example of how to work with the API lives in the example folder. It contains working code: deploy it and you will be able to assess a practical example of the API in action. At the moment it covers nearly all of the functionality available through the API.

To see an example of how it works, you will need to do two things:

- Create a new application for the "Yandex.Webmaster" project at https://oauth.yandex.ru/client/new . When creating the application, be sure to tick both available checkboxes related to access rights, and enter the address where the sample code is hosted in the callback url field.
- Copy the example/config.example.php file to example/config.php, and do not forget to fill in the $client_id and $client_secret variables with the data you received in the previous step.
How the class itself functions:

- An access token must be passed when the class is initialized. To obtain a token from code, the static method webmasterApi::getAccessToken is used.
- The first tools in the class form an interface for working with the API: the get, post and delete methods, plus helpers used internally. All methods work on the same principle: you specify the API handle you want to call and an array of variables to pass; the class makes the request to the API and returns the result. If something goes wrong, the API returns an object with error_code and error_message fields.
- Then come the methods responsible for fetching specific data through the API. There is no point listing them in detail here; they are commented directly in the code.
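The same flow can be sketched roughly in Python: pass the OAuth token at initialization, call an API "handle" with a dict of variables, and surface error_code / error_message on failure. The host, paths and error shape below are assumptions based on this description, not verified API documentation.

```python
# Rough Python sketch of the wrapper flow described above. Host, handle
# paths and the error shape are assumptions for illustration.
import json
import urllib.parse
import urllib.request

class WebmasterApi:
    def __init__(self, access_token, base="https://api.webmaster.yandex.net/v3"):
        self.token = access_token
        self.base = base

    def _url(self, handle, variables=None):
        # Build the full request URL for an API handle plus query variables.
        query = "?" + urllib.parse.urlencode(variables) if variables else ""
        return self.base + handle + query

    def get(self, handle, variables=None):
        req = urllib.request.Request(
            self._url(handle, variables),
            headers={"Authorization": "OAuth " + self.token},
        )
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        if "error_code" in data:  # API-level error, as described above
            raise RuntimeError(
                "%s: %s" % (data["error_code"], data.get("error_message")))
        return data

api = WebmasterApi("YOUR_TOKEN")  # token obtained via OAuth beforehand
print(api._url("/user"))          # no network: just shows the handle URL
# api.get("/user")                # would perform the actual request
```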
In general, that is all you need to know: anyone with even a little PHP will be able to get up to speed quickly.
Note that this code was originally written for internal API testing. So if you find an error or a malfunction, it is best to try to fix it yourself. Do not be lazy: send your fixes to the repository; it is easy and can be very useful to others.
russianseo · 7 years
Text
PHP class to work with the webmaster’s API released for public access
A PHP class implementing all Yandex.Webmaster API interfaces has been released to the public. It was created by the head of the Yandex.Webmaster service, Dmitry Popov, and is meant to speed up the transition to the new API for everyone who worked with the old cabinet.

The PHP class is located at https://github.com/yandex/webmaster.api. Inside is the webmaster_api.class.php file, which is available for use. Note that at present the class requires the curl and json extensions to operate correctly.

As you know, version 3.0 of the Yandex.Webmaster API was released in early September this year. Although the new version supports all the functions of the old software interface, it is not compatible with previous versions, so all Webmaster API users are strongly encouraged to move to the new version. To ease the transition, two API response formats are supported: XML, used in the old version, and the more efficient JSON.
russianseo · 7 years
Text
Ways to explore the site that was previously promoted by another optimizer
In today's environment, taking over a resource previously promoted by another optimizer is extremely risky. Few practice a proper hand-over when changing SEO specialists, which is why the transition from one optimizer to another is often accompanied by changes to the link mass and large-scale replacement of texts, meta tags, headings and so on. To understand the risks of promoting a resource that was previously in other hands, you should request access to all statistics from the client and find out the details of the progress achieved so far. However, not all customers rush to share access, and most cannot provide objective information.
Here is a list of things you can check to investigate and analyze the website's sanction history without access to statistics:

- The age of a resource in the search engines is determined not by the domain registration date but by the moment the site was first indexed, although the two often coincide. To avoid confusing a new site with an old one, compare the Whois data with the date the resource's home page was first indexed.
This is determined as follows:
- Use the url: operator to find the main page in the index;
- Append &how=tm to the address of the results page.
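The two steps above can be sketched as a URL builder: search for the home page with the url: operator, then append &how=tm to sort results by document date. The yandex.ru/search URL shape used here is an assumption for illustration.

```python
# Build the check URL for a site's first-indexing date: url: operator
# plus &how=tm (sort by date). The search URL shape is an assumption.
from urllib.parse import urlencode

def first_index_check_url(site):
    params = urlencode({"text": "url:" + site})
    return "https://yandex.ru/search/?" + params + "&how=tm"

print(first_index_check_url("www.example.com"))
```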
- Check the resource for Yandex sanctions
AGS
The diagnosis is a zeroed TCI, checked by one of two methods:
- Yandex.Informer shows the value "0";
- The TCI page http://yaca.yandex.ru/yca/cy/ch/<website address> shows the message "Citation Index (TCI) of the resource is not defined".
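The second method lends itself to a small script: build the TCI page URL for a site and look for the "not defined" wording. The URL pattern comes from the post; the text matching is a rough, illustrative assumption (the live page may word it differently).

```python
# Sketch of the AGS check: fetch the TCI page and look for the
# "not defined" message that signals a zeroed TCI.
import urllib.request

def tci_page_url(site):
    return "http://yaca.yandex.ru/yca/cy/ch/" + site

def looks_like_ags(page_html):
    # A zeroed TCI is reported as "is not defined" on the page.
    return "not defined" in page_html.lower()

# html = urllib.request.urlopen(tci_page_url("example.com")).read().decode()
# print(looks_like_ags(html))
```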
Filter for the internal respam
Unfortunately, confirming this filter for certain requires contacting Yandex tech support, but it can be diagnosed in advance by its most common symptom: the relevant page being replaced.
Diagnostic method:


- Choose a mid-frequency (MF) query for which the website sits outside the top.
- Determine which of the site's pages is ranked for that query in general search.
- Check which page is ranked in the search within the site itself.
If general search does not contain the page that site search considered more relevant, in most cases that page is filtered for respam.
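The comparison above can be sketched as a small function: take the site's top page from general search and the top page from site search for the same query, and flag a mismatch. The result lists are placeholders standing in for real SERP scrapes.

```python
# Sketch of the respam diagnostic: a mismatch between the page ranked in
# general search and the page the site's own search considers most
# relevant suggests the latter is filtered. Data is illustrative.

def diagnose_respam(general_top, site_search_top, domain):
    """Return the likely-filtered page, or None if the pages agree."""
    general_page = next((u for u in general_top if domain in u), None)
    site_page = site_search_top[0] if site_search_top else None
    if site_page and site_page != general_page:
        return site_page  # more relevant page missing from general search
    return None

general = ["https://other.com/a", "https://example.com/category/"]
site_only = ["https://example.com/product-page/"]  # top of "query site:example.com"
print(diagnose_respam(general, site_only, "example.com"))
```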
Affiliations
Check whether the client owns other sites; even if the client did not mention it, it will not hurt to:

- Track which sites are hosted under the same IP;
- Review the search results for keywords and the company name;
- Search using the contact details provided on the site;
- Look at pages mentioning the company name, as there may be several sites;
- If the company has groups in social networks, find out which sites are promoted there.
- Review the site against Google's automatic filters (Panda, Penguin and so on). This is possible with the free Website Penalty Indicator tool, which overlays known algorithm updates on the site's Google traffic.
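The idea behind such tools is simple: line traffic drops up against known update dates. A rough sketch follows; the traffic numbers are invented, and while the two launch dates are widely reported (Panda in February 2011, Penguin in April 2012), treat the list as illustrative rather than authoritative.

```python
# Rough sketch of a penalty indicator: flag algorithm updates where
# traffic just after the date is well below traffic just before it.
from datetime import date

UPDATES = {
    date(2011, 2, 24): "Panda",
    date(2012, 4, 24): "Penguin",
}

def suspected_updates(traffic, drop_ratio=0.7):
    """Return names of updates coinciding with a sharp traffic drop."""
    hits = []
    days = sorted(traffic)
    for when, name in UPDATES.items():
        before = [traffic[d] for d in days if d < when][-1:]
        after = [traffic[d] for d in days if d >= when][:1]
        if before and after and after[0] < before[0] * drop_ratio:
            hits.append(name)
    return hits

traffic = {date(2012, 4, 20): 1000, date(2012, 4, 30): 400}
print(suspected_updates(traffic))  # a drop right after an update date
```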
- Analyze site history using the services:

Xtool;
Semrush;
Webmoney Advisor;
Alexa.
- A client's substantial link profile was considered a huge advantage until Minusinsk appeared. Nowadays it is rather a liability, capable of sinking the website in the search results despite every effort to stay on top.
- The history of domain owners and topic changes. The Wayback Machine service will help you understand how the content of the client's domain changed over time.
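The Wayback Machine exposes a public availability endpoint that returns the closest archived snapshot for a URL, which is a quick way to sample how a domain's content changed over the years. The response fields below follow that public API; if it has changed, adjust the parsing accordingly.

```python
# Query the Wayback Machine availability API for the snapshot closest
# to a given timestamp, and pull the snapshot URL out of the response.
import json
import urllib.request
from urllib.parse import urlencode

WAYBACK = "https://archive.org/wayback/available?"

def parse_snapshot(data):
    """Extract the closest snapshot URL from an availability response."""
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None

def closest_snapshot(url, timestamp):
    query = urlencode({"url": url, "timestamp": timestamp})
    with urllib.request.urlopen(WAYBACK + query) as resp:
        return parse_snapshot(json.load(resp))

# for year in ("2008", "2012", "2016"):
#     print(year, closest_snapshot("example.com", year + "0101"))
```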
If the history of a prospective client's site raises red flags, that does not mean you should refuse to work with them. Access to statistics is useful for a fuller analysis and can help you get to the bottom of things. There is no point making empty promises, but if you do the best you can, everyone will be genuinely happy.
russianseo · 7 years
Text
How does Yandex distinguish natural links from SEO links
A special Qualifier allows Yandex to recognize SEO links, so swapping the links in articles for anchorless ones is useless. A bad link does not suddenly become good, because the Qualifier does not react to that.

How the Qualifier works: there is a set of links that are almost certainly SEO links, and a set of ordinary links that appeared on the web naturally. This data forms the training set for Yandex's SEO-link Qualifier.
All of it is then fed to Matrixnet, and the resulting algorithm can accurately determine whether a given link is a normal link or a SEO link. Because Matrixnet is trained on the link corpus, it evaluates links far more effectively than any webmaster or Yandex employee could.
Questions like "But is this really a SEO link if I did not buy it and received it from partners?" are therefore pretty much meaningless.

The algorithm's completeness currently stands at 99%, and its accuracy at 94%.
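In classification terms, "completeness" corresponds to recall, and "accuracy" here is most naturally read as precision. A toy sketch of the idea (not Matrixnet): score links labeled SEO versus natural with a rule standing in for a trained model, then compute both metrics. Features, labels and sample links are all invented for the sketch.

```python
# Toy illustration of the Qualifier idea: measure recall ("completeness")
# and precision on labeled links. The rule stands in for a trained model.

def predict(link):
    # Stand-in for a learned rule: short commercial anchors placed
    # outside the main text tend to be SEO links.
    return link["commercial_anchor"] and not link["in_main_text"]

labeled = [
    ({"commercial_anchor": True,  "in_main_text": False}, True),   # rented link
    ({"commercial_anchor": True,  "in_main_text": True},  False),  # natural mention
    ({"commercial_anchor": False, "in_main_text": True},  False),
    ({"commercial_anchor": True,  "in_main_text": False}, True),
    ({"commercial_anchor": True,  "in_main_text": False}, False),  # false positive
]

tp = sum(predict(link) and label for link, label in labeled)
fp = sum(predict(link) and not label for link, label in labeled)
fn = sum(not predict(link) and label for link, label in labeled)
recall = tp / (tp + fn)     # "completeness" in the post's terms
precision = tp / (tp + fp)  # the post's "accuracy", read as precision
print(f"recall={recall:.0%} precision={precision:.0%}")
```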