accidentally made myself angry thinking about gacha rhythm scoring dynamics. cause why can i only name one mobile idol rhythm game where it gives you a technical score separately to your gacha slop score after playing a show
#delete later. sorry #i struggle with pjsk primarily bc of how important scoring is to getting money to keep getting better scores via gacha cards it's really. #...stupid as fuck #like at least a function to show me my actual score right..?? combined with accuracy and note count??? #why can you get an AP but it's still a C because you have a default card group #genuinely pisses me off #mobile game i can name is d4dj and if we're talking arcade there's ongeki but that's like. it #i considered ditching pjsk for d4dj but i don't know how long it would take for me to get a hang of the technical gimmicks
Ranking the 6 Most Accurate Keyword Research Tools
Posted by Jeff_Baker
In January of 2018, Brafton began a massive organic keyword targeting campaign, publishing over 90,000 words of blog content.
Did it work?
Well, yeah. We doubled the number of total keywords we rank for in less than six months. By using our advanced keyword research and topic writing process published earlier this year, we also increased our organic traffic by 45% and the number of keywords ranking in the top ten results by 130%.
But we got a whole lot more than just traffic.
From planning to execution and performance tracking, we meticulously logged every aspect of the project. I’m talking blog word count, MarketMuse performance scores, on-page SEO scores, days indexed on Google. You name it, we recorded it.
As a byproduct of this nerdery, we were able to draw juicy correlations between our target keyword rankings and variables that can affect and predict those rankings. But specifically for this piece...
How well keyword research tools can predict where you will rank.
A little background
We created a list of keywords we wanted to target in blogs based on optimal combinations of search volume, organic keyword difficulty scores, SERP crowding, and searcher intent.
We then wrote a blog post targeting each individual keyword. We intended for each new piece of blog content to rank for the target keyword on its own.
With our keyword list in hand, my colleague and I manually created content briefs explaining how we would like each blog post written to maximize the likelihood of ranking for the target keyword. Here's an example of a typical brief we would give to a writer:

[Example content brief image from the original post]

While Moz wins as the top-performing keyword research tool, note that any keyword research tool with organic difficulty functionality will give you an advantage over flipping a coin (or using the Google Keyword Planner Tool).
As you will see in the following paragraphs, we have run each tool through a battery of statistical tests to ensure that we painted a fair and accurate representation of its performance. I’ll even provide the raw data for you to inspect for yourself.
Let’s dig in!
The Pearson Correlation Coefficient
Yes, statistics! For those of you currently feeling panicked and lobbing obscenities at your screen, don’t worry — we’re going to walk through this together.
In order to understand the relationship between two variables, our first step is to create a scatter plot chart.
Below is the scatter plot for our 50 keyword rankings compared to their corresponding Moz organic difficulty scores.

[Scatter plot: Moz organic difficulty scores vs. our 50 achieved keyword rankings]
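If you want to build this kind of chart yourself, a minimal matplotlib sketch looks like this (the numbers are placeholders, not the study's data):

```python
import matplotlib.pyplot as plt

# Hypothetical stand-ins for the chart above: one difficulty score per
# keyword (x) and the Google ranking the blog post achieved (y).
difficulty = [28, 35, 41, 52, 60, 33, 47]
rank       = [4, 9, 12, 25, 31, 7, 18]

plt.scatter(difficulty, rank)
plt.xlabel("Organic difficulty score")
plt.ylabel("Achieved Google ranking")
plt.title("Keyword difficulty vs. ranking")
plt.gca().invert_yaxis()  # rank 1 is best, so show it at the top
plt.show()
```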
Phew. Still with me?
So each of these scatter plots will have a corresponding PCC score that will tell us how well each tool predicted where we would rank, based on its keyword difficulty score.
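For reference, the standard formula for the Pearson correlation coefficient over n paired observations, where x_i is a tool's difficulty score for keyword i and y_i is the ranking the corresponding post achieved, is:

$$ r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}} $$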
We will use the following table from statisticshowto.com to interpret the PCC score for each tool:
Correlation Coefficient (R) Score    Key
+.70 or higher                       Very strong positive relationship
+.40 to +.69                         Strong positive relationship
+.30 to +.39                         Moderate positive relationship
+.20 to +.29                         Weak positive relationship
+.01 to +.19                         No or negligible relationship
0                                    No relationship [zero correlation]
-.01 to -.19                         No or negligible relationship
-.20 to -.29                         Weak negative relationship
-.30 to -.39                         Moderate negative relationship
-.40 to -.69                         Strong negative relationship
-.70 or lower                        Very strong negative relationship
In order to visually understand what some of these relationships would look like on a scatter plot, check out these sample charts from Laerd Statistics.
The closer the numbers cluster towards the regression line in either a positive or negative slope, the stronger the relationship.
That was the tough part. You still with me? Great, now let's look at each tool's results.
Test 1: The Pearson Correlation Coefficient
Now that we've all had our statistics refresher course, we will take a look at the results, in order of performance. We will evaluate each tool’s PCC score, the statistical significance of the data (P-val), the strength of the relationship, and the percentage of keywords the tool was able to find and report keyword difficulty values for.
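To make the computation concrete, here is a minimal Python sketch of what Test 1 measures for a single tool, using SciPy's pearsonr (which returns both the PCC and its p-value). The numbers are illustrative, not the study's raw data:

```python
from scipy.stats import pearsonr

# Illustrative inputs: one tool's difficulty score per keyword, and the
# Google ranking position the corresponding blog post achieved.
difficulty = [28, 35, 41, 52, 60, 33, 47]
rank       = [4, 9, 12, 25, 31, 7, 18]

pcc, p_value = pearsonr(difficulty, rank)
print(f"PCC = {pcc:.3f}, p-value = {p_value:.4f}")
# A positive PCC means higher difficulty scores went with worse (numerically
# larger) rankings, i.e. the tool's scores were predictive.
```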
In order of performance:
#2: SpyFu

Visually, SpyFu shows fairly tight clustering among the low-difficulty keywords, and a couple of moderate outliers among the higher-difficulty keywords.
SpyFu Organic Difficulty Predictability
PCC:                 0.405
P-val:               .01 (p < 0.05)
Relationship:        Strong
% Keywords Matched:  80.00%
SpyFu came in right under Moz with a 1.7% weaker PCC (.405). However, the tool ran into the largest issue with keyword matching: only 40 of 50 keywords produced keyword difficulty scores.
#5: Ahrefs

Ahrefs comes in fifth by a large margin at .316, barely clearing the "moderate relationship" threshold.
Ahrefs Organic Difficulty Predictability
PCC:                 0.316
P-val:               .03 (p < 0.05)
Relationship:        Moderate
% Keywords Matched:  100%
On a positive note, the tool seems very reliable at the low end (notice the tight clustering of low-difficulty scores), and it matched all 50 keywords.
#6: Google Keyword Planner Tool
The resulting scores for the first test are as follows:
Tool        PCC Test
Moz         10
SpyFu       9.8
SEMrush     8.8
KW Finder   8.7
Ahrefs      7.7
KPT         1.1
Moz takes the top position for the first test, followed closely by SpyFu (with an 80% match rate caveat).
Test 2: Adjusted Pearson Correlation Coefficient
Let's call this the "Mulligan Round." Sometimes things just go haywire and a tool flat-out misses, so in this round we remove the three most egregious outliers from each tool's score.
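The post doesn't spell out how "most egregious" was determined; one reasonable reading is "largest residuals from a least-squares fit," and here is a sketch under that assumption (data illustrative):

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative data: one tool's difficulty scores and achieved rankings.
difficulty = np.array([28, 35, 41, 52, 60, 33, 47, 55, 30, 44], dtype=float)
rank       = np.array([4, 9, 12, 25, 31, 7, 18, 3, 29, 15], dtype=float)

# Fit a least-squares line, then drop the three points farthest from it.
slope, intercept = np.polyfit(difficulty, rank, 1)
residuals = np.abs(rank - (slope * difficulty + intercept))
keep = np.argsort(residuals)[:-3]  # indices of all but the 3 worst outliers

adjusted_pcc, _ = pearsonr(difficulty[keep], rank[keep])
print(f"Adjusted PCC: {adjusted_pcc:.3f}")
```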
Here are the adjusted results for the handicap round:
Adjusted Scores (3 outliers removed)

Tool                   PCC     Difference (+/-)
SpyFu                  0.527   +0.122
SEMrush                0.515   +0.150
Moz                    0.514   +0.101
Ahrefs                 0.478   +0.162
KW Finder              0.470   +0.110
Keyword Planner Tool   0.189   +0.144
As the original PCC test showed, some of these tools took a big hit from major outliers. Ahrefs and SEMrush benefited the most from having their outliers removed, gaining .162 and .150 respectively, while Moz benefited the least from the adjustments.
For those of you crying out, “But this is real life, you don’t get mulligans with SEO!”, never fear, we will make adjustments for reliability at the end.
Here are the updated scores at the end of round two:
Tool        PCC Test   Adjusted PCC   Total
SpyFu       9.8        10             19.8
Moz         10         9.7            19.7
SEMrush     8.8        9.8            18.6
KW Finder   8.7        8.9            17.6
Ahrefs      7.7        9.1            16.8
KPT         1.1        3.6            4.7
SpyFu takes the lead! Now let’s jump into the final round of statistical tests.
Test 3: Resampling
Since a study of keyword research tools has never been performed at this scale, we wanted to make sure we explored multiple ways of looking at the data.
Big thanks to Russ Jones, who put together an entirely different model that answers the question: "What is the likelihood that the keyword difficulty of two randomly selected keywords will correctly predict the relative position of rankings?"
He randomly selected two keywords from the list along with their associated difficulty scores. Suppose one tool says the difficulties are 30 and 60, respectively: what is the likelihood that the article targeting the keyword scored 30 ranks higher than the article targeting the keyword scored 60? He then repeated the test 1,000 times.
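In code, the model he describes looks roughly like this; it is a sketch with illustrative inputs, not Russ's actual implementation:

```python
import random

def resample_accuracy(difficulty, rank, trials=1000):
    """Share of random keyword pairs where the lower difficulty score
    correctly predicted the better (numerically lower) ranking."""
    hits = tries = 0
    for _ in range(trials):
        i, j = random.sample(range(len(difficulty)), 2)
        # Throw out pairs with missing data or tied values, per the write-up.
        if (None in (difficulty[i], difficulty[j])
                or rank[i] == rank[j]
                or difficulty[i] == difficulty[j]):
            continue
        tries += 1
        if (difficulty[i] < difficulty[j]) == (rank[i] < rank[j]):
            hits += 1
    return hits / tries

# Illustrative inputs: one tool's scores (None = keyword not matched) and
# the Google position each post actually achieved.
difficulty = [30, 60, 45, None, 22, 51]
rank       = [5, 28, 14, 9, 3, 40]
print(f"{resample_accuracy(difficulty, rank):.1%} guessed correctly")
```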
He also threw out examples where the two randomly selected keywords shared the same rankings, or data points were missing. Here was the outcome:
Tool        % Guessed Correctly (Resampling)
Moz         62.2%
Ahrefs      61.2%
SEMrush     60.3%
KW Finder   58.9%
SpyFu       54.3%
KPT         45.9%
As you can see, this test was particularly hard on all of the tools. As we are starting to see, no one tool is a silver bullet, so our job is to figure out how much each tool helps us make more educated decisions than guessing.
Most tools stayed pretty consistent with their levels of performance from the previous tests, except SpyFu, which struggled mightily with this test.
To score this test, we use 50% as the baseline (the equivalent of a coin flip, worth zero points) and scale each tool by how much better it performed than a coin flip, with the top scorer receiving ten points.
For example, Ahrefs beat a coin flip by 11.2 percentage points, about 8.2% less than Moz's 12.2-point lift, which gives Ahrefs a score of 9.2.
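In other words, the rule is score = 10 x (accuracy - 50) / (top accuracy - 50), with Moz's 62.2% as the top accuracy. A quick check reproduces the reported scores for the tools that beat the baseline:

```python
# Scale each tool's lift over a coin flip (50%) against the best tool's
# lift, times ten. Moz's 62.2% is the top accuracy from the table above.
def resampling_score(accuracy_pct, best_accuracy_pct=62.2):
    return 10 * (accuracy_pct - 50) / (best_accuracy_pct - 50)

print(round(resampling_score(61.2), 1))  # Ahrefs  -> 9.2
print(round(resampling_score(60.3), 1))  # SEMrush -> 8.4
print(round(resampling_score(54.3), 1))  # SpyFu   -> 3.5
```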
The updated scores are as follows:
Tool        PCC Test   Adjusted PCC   Resampling   Total
Moz         10         9.7            10           29.7
SEMrush     8.8        9.8            8.4          27
Ahrefs      7.7        9.1            9.2          26
KW Finder   8.7        8.9            7.3          24.9
SpyFu       9.8        10             3.5          23.3
KPT         1.1        3.6            -0.4         0.7
So after the last statistical accuracy test, we have Moz performing alone in the top tier. SEMrush, Ahrefs, and KW Finder all turn in respectable second-tier scores, followed by the unique case of SpyFu, which performed outstandingly in the first two tests (albeit returning results for only 80% of the tested keywords), then fell flat on the final test.
Finally, we need to make some usability adjustments.
Usability Adjustment 1: Keyword Matching
A keyword research tool doesn’t do you much good if it can’t provide results for the keywords you are researching. Plain and simple, we can’t treat two tools as equals if they don’t have the same level of practical functionality.
To explain in practical terms: if a tool doesn't have data on a particular keyword, one of two things happens:
1. You have to use another tool to get the data, which defeats the purpose of using the original tool.
2. You miss an opportunity to rank for a high-value keyword.
Neither scenario is good, so we developed a penalty system: for each 10% of match rate under 100%, we deducted a single point from the final score (pro-rated), with a maximum deduction of 5 points. For example, if a tool matched 92% of the keywords, we deducted .8 points from its final score.
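As a one-liner, the deduction is simply the shortfall divided by ten, capped at five; this reproduces the penalties in the table below:

```python
# Match-rate penalty as described: one point per 10 percentage points of
# match rate below 100%, capped at 5 points.
def match_penalty(match_rate_pct):
    return min(5.0, (100 - match_rate_pct) / 10)

print(match_penalty(92))  # 0.8, the post's example (SEMrush)
print(match_penalty(88))  # 1.2 (Keyword Planner Tool)
print(match_penalty(80))  # 2.0 (SpyFu)
```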
One may argue that this penalty is actually too lenient considering the significance of the two less-than-ideal scenarios outlined above.
The penalties are as follows:
Tool                   Match Rate   Penalty
KW Finder              100%         0
Ahrefs                 100%         0
Moz                    100%         0
SEMrush                92%          -0.8
Keyword Planner Tool   88%          -1.2
SpyFu                  80%          -2
Please note we gave SEMrush a lot of leniency, in that technically, many of the keywords evaluated were..
https://ift.tt/2B9C2hF
0 notes
Text
Ranking the 6 Most Accurate Keyword Research Tools
Posted by Jeff_Baker
In January of 2018 Brafton began a massive organic keyword targeting campaign, amounting to over 90,000 words of blog content being published.
Did it work?
Well, yeah. We doubled the number of total keywords we rank for in less than six months. By using our advanced keyword research and topic writing process published earlier this year we also increased our organic traffic by 45% and the number of keywords ranking in the top ten results by 130%.
But we got a whole lot more than just traffic.
From planning to execution and performance tracking, we meticulously logged every aspect of the project. I’m talking blog word count, MarketMuse performance scores, on-page SEO scores, days indexed on Google. You name it, we recorded it.
As a byproduct of this nerdery, we were able to draw juicy correlations between our target keyword rankings and variables that can affect and predict those rankings. But specifically for this piece...
How well keyword research tools can predict where you will rank.
A little background
We created a list of keywords we wanted to target in blogs based on optimal combinations of search volume, organic keyword difficulty scores, SERP crowding, and searcher intent.
We then wrote a blog post targeting each individual keyword. We intended for each new piece of blog content to rank for the target keyword on its own.
With our keyword list in hand, my colleague and I manually created content briefs explaining how we would like each blog post written to maximize the likelihood of ranking for the target keyword. Here’s an example of a typical brief we would give to a writer:
While Moz wins top-performing keyword research tool, note that any keyword research tool with organic difficulty functionality will give you an advantage over flipping a coin (or using Google Keyword Planner Tool).
As you will see in the following paragraphs, we have run each tool through a battery of statistical tests to ensure that we painted a fair and accurate representation of its performance. I’ll even provide the raw data for you to inspect for yourself.
Let’s dig in!
The Pearson Correlation Coefficient
Yes, statistics! For those of you currently feeling panicked and lobbing obscenities at your screen, don’t worry — we’re going to walk through this together.
In order to understand the relationship between two variables, our first step is to create a scatter plot chart.
Below is the scatter plot for our 50 keyword rankings compared to their corresponding Moz organic difficulty scores.
Phew. Still with me?
So each of these scatter plots will have a corresponding PCC score that will tell us how well each tool predicted where we would rank, based on its keyword difficulty score.
We will use the following table from statisticshowto.com to interpret the PCC score for each tool:
Coefficient Correlation R Score
Key
.70 or higher
Very strong positive relationship
.40 to +.69
Strong positive relationship
.30 to +.39
Moderate positive relationship
.20 to +.29
Weak positive relationship
.01 to +.19
No or negligible relationship
0
No relationship [zero correlation]
-.01 to -.19
No or negligible relationship
-.20 to -.29
Weak negative relationship
-.30 to -.39
Moderate negative relationship
-.40 to -.69
Strong negative relationship
-.70 or higher
Very strong negative relationship
In order to visually understand what some of these relationships would look like on a scatter plot, check out these sample charts from Laerd Statistics.
The closer the numbers cluster towards the regression line in either a positive or negative slope, the stronger the relationship.
That was the tough part - you still with me? Great, now let’s look at each tool’s results.
Test 1: The Pearson Correlation Coefficient
Now that we've all had our statistics refresher course, we will take a look at the results, in order of performance. We will evaluate each tool’s PCC score, the statistical significance of the data (P-val), the strength of the relationship, and the percentage of keywords the tool was able to find and report keyword difficulty values for.
In order of performance:
#1: Moz
Visually, SpyFu shows a fairly tight clustering amongst low difficulty keywords, and a couple moderate outliers amongst the higher difficulty keywords.
SpyFu Organic Difficulty Predictability
PCC
0.405
P-val
.01 (P<0.05)
Relationship
Strong
% Keywords Matched
80.00%
SpyFu came in right under Moz with 1.7% weaker PCC (.405). However, the tool ran into the largest issue with keyword matching, with only 40 of 50 keywords producing keyword difficulty scores.
#3: SEMrush
Ahrefs comes in fifth by a large margin at .316, barely passing the “weak relationship” threshold.
Ahrefs Organic Difficulty Predictability
PCC
0.316
P-val
.03 (P<0.05)
Relationship
Moderate
% Keywords Matched
100%
On a positive note, the tool seems to be very reliable with low difficulty scores (notice the tight clustering for low difficulty scores), and matched all 50 keywords.
#6: Google Keyword Planner Tool
And the resulting scores are as follows:
Tool
PCC Test
Moz
10
SpyFu
9.8
SEMrush
8.8
KW Finder
8.7
Ahrefs
7.7
KPT
1.1
Moz takes the top position for the first test, followed closely by SpyFu (with an 80% match rate caveat).
Test 2: Adjusted Pearson Correlation Coefficient
Let’s call this the “Mulligan Round.” In this round, assuming sometimes things just go haywire and a tool just flat-out misses, we will remove the three most egregious outliers to each tool’s score.
Here are the adjusted results for the handicap round:
Adjusted Scores (3 Outliers removed)
PCC
Difference (+/-)
SpyFu
0.527
0.122
SEMrush
0.515
0.150
Moz
0.514
0.101
Ahrefs
0.478
0.162
KWFinder
0.470
0.110
Keyword Planner Tool
0.189
0.144
As noted in the original PCC test, some of these tools really took a big hit with major outliers. Specifically, Ahrefs and SEMrush benefitted the most from their outliers being removed, gaining .162 and .150 respectively to their scores, while Moz benefited the least from the adjustments.
For those of you crying out, “But this is real life, you don’t get mulligans with SEO!”, never fear, we will make adjustments for reliability at the end.
Here are the updated scores at the end of round two:
Tool
PCC Test
Adjusted PCC
Total
SpyFu
9.8
10
19.8
Moz
10
9.7
19.7
SEMrush
8.8
9.8
18.6
KW Finder
8.7
8.9
17.6
AHREFs
7.7
9.1
16.8
KPT
1.1
3.6
4.7
SpyFu takes the lead! Now let’s jump into the final round of statistical tests.
Test 3: Resampling
Being that there has never been a study performed on keyword research tools at this scale, we wanted to ensure that we explored multiple ways of looking at the data.
Big thanks to Russ Jones, who put together an entirely different model that answers the question: "What is the likelihood that the keyword difficulty of two randomly selected keywords will correctly predict the relative position of rankings?"
He randomly selected 2 keywords from the list and their associated difficulty scores.
Let’s assume one tool says that the difficulties are 30 and 60, respectively. What is the likelihood that the article written for a score of 30 ranks higher than the article written on 60? Then, he performed the same test 1,000 times.
He also threw out examples where the two randomly selected keywords shared the same rankings, or data points were missing. Here was the outcome:
Resampling
% Guessed correctly
Moz
62.2%
Ahrefs
61.2%
SEMrush
60.3%
Keyword Finder
58.9%
SpyFu
54.3%
KPT
45.9%
As you can see, this tool was particularly critical on each of the tools. As we are starting to see, no one tool is a silver bullet, so it is our job to see how much each tool helps make more educated decisions than guessing.
Most tools stayed pretty consistent with their levels of performance from the previous tests, except SpyFu, which struggled mightily with this test.
In order to score this test, we need to use 50% as the baseline (equivalent of a coin flip, or zero points), and scale each tool relative to how much better it performed over a coin flip, with the top scorer receiving ten points.
For example, Ahrefs scored 11.2% better than flipping a coin, which is 8.2% less than Moz which scored 12.2% better than flipping a coin, giving AHREFs a score of 9.2.
The updated scores are as follows:
Tool
PCC Test
Adjusted PCC
Resampling
Total
Moz
10
9.7
10
29.7
SEMrush
8.8
9.8
8.4
27
Ahrefs
7.7
9.1
9.2
26
KW Finder
8.7
8.9
7.3
24.9
SpyFu
9.8
10
3.5
23.3
KPT
1.1
3.6
-.4
.7
So after the last statistical accuracy test, we have Moz consistently performing alone in the top tier. SEMrush, Ahrefs, and KW Finder all turn in respectable scores in the second tier, followed by the unique case of SpyFu, which performed outstanding in the first two tests (albeit, only returning results on 80% of the tested keywords), then falling flat on the final test.
Finally, we need to make some usability adjustments.
Usability Adjustment 1: Keyword Matching
A keyword research tool doesn’t do you much good if it can’t provide results for the keywords you are researching. Plain and simple, we can’t treat two tools as equals if they don’t have the same level of practical functionality.
To explain in practical terms, if a tool doesn’t have data on a particular keyword, one of two things will happen:
You have to use another tool to get the data, which devalues the entire point of using the original tool.
You miss an opportunity to rank for a high-value keyword.
Neither scenario is good, therefore we developed a penalty system. For each 10% match rate under 100%, we deducted a single point from the final score, with a maximum deduction of 5 points. For example, if a tool matched 92% of the keywords, we would deduct .8 points from the final score.
One may argue that this penalty is actually too lenient considering the significance of the two unideal scenarios outlined above.
The penalties are as follows:
Tool
Match Rate
Penalty
KW Finder
100%
0
Ahrefs
100%
0
Moz
100%
0
SEMrush
92%
-.8
Keyword Planner Tool
88%
-1.2
SpyFu
80%
-2
Please note we gave SEMrush a lot of leniency, in that technically, many of the keywords evaluated were..
https://ift.tt/2B9C2hF
0 notes
Text
Ranking the 6 Most Accurate Keyword Research Tools
Posted by Jeff_Baker
In January of 2018 Brafton began a massive organic keyword targeting campaign, amounting to over 90,000 words of blog content being published.
Did it work?
Well, yeah. We doubled the number of total keywords we rank for in less than six months. By using our advanced keyword research and topic writing process published earlier this year we also increased our organic traffic by 45% and the number of keywords ranking in the top ten results by 130%.
But we got a whole lot more than just traffic.
From planning to execution and performance tracking, we meticulously logged every aspect of the project. I’m talking blog word count, MarketMuse performance scores, on-page SEO scores, days indexed on Google. You name it, we recorded it.
As a byproduct of this nerdery, we were able to draw juicy correlations between our target keyword rankings and variables that can affect and predict those rankings. But specifically for this piece...
How well keyword research tools can predict where you will rank.
A little background
We created a list of keywords we wanted to target in blogs based on optimal combinations of search volume, organic keyword difficulty scores, SERP crowding, and searcher intent.
We then wrote a blog post targeting each individual keyword. We intended for each new piece of blog content to rank for the target keyword on its own.
With our keyword list in hand, my colleague and I manually created content briefs explaining how we would like each blog post written to maximize the likelihood of ranking for the target keyword. Here’s an example of a typical brief we would give to a writer:
While Moz wins top-performing keyword research tool, note that any keyword research tool with organic difficulty functionality will give you an advantage over flipping a coin (or using Google Keyword Planner Tool).
As you will see in the following paragraphs, we have run each tool through a battery of statistical tests to ensure that we painted a fair and accurate representation of its performance. I’ll even provide the raw data for you to inspect for yourself.
Let’s dig in!
The Pearson Correlation Coefficient
Yes, statistics! For those of you currently feeling panicked and lobbing obscenities at your screen, don’t worry — we’re going to walk through this together.
In order to understand the relationship between two variables, our first step is to create a scatter plot chart.
Below is the scatter plot for our 50 keyword rankings compared to their corresponding Moz organic difficulty scores.
Phew. Still with me?
So each of these scatter plots will have a corresponding PCC score that will tell us how well each tool predicted where we would rank, based on its keyword difficulty score.
We will use the following table from statisticshowto.com to interpret the PCC score for each tool:
Coefficient Correlation R Score
Key
.70 or higher
Very strong positive relationship
.40 to +.69
Strong positive relationship
.30 to +.39
Moderate positive relationship
.20 to +.29
Weak positive relationship
.01 to +.19
No or negligible relationship
0
No relationship [zero correlation]
-.01 to -.19
No or negligible relationship
-.20 to -.29
Weak negative relationship
-.30 to -.39
Moderate negative relationship
-.40 to -.69
Strong negative relationship
-.70 or higher
Very strong negative relationship
In order to visually understand what some of these relationships would look like on a scatter plot, check out these sample charts from Laerd Statistics.
The closer the numbers cluster towards the regression line in either a positive or negative slope, the stronger the relationship.
That was the tough part - you still with me? Great, now let’s look at each tool’s results.
Test 1: The Pearson Correlation Coefficient
Now that we've all had our statistics refresher course, we will take a look at the results, in order of performance. We will evaluate each tool’s PCC score, the statistical significance of the data (P-val), the strength of the relationship, and the percentage of keywords the tool was able to find and report keyword difficulty values for.
In order of performance:
#1: Moz
Visually, SpyFu shows a fairly tight clustering amongst low difficulty keywords, and a couple moderate outliers amongst the higher difficulty keywords.
SpyFu Organic Difficulty Predictability
PCC
0.405
P-val
.01 (P<0.05)
Relationship
Strong
% Keywords Matched
80.00%
SpyFu came in right under Moz with 1.7% weaker PCC (.405). However, the tool ran into the largest issue with keyword matching, with only 40 of 50 keywords producing keyword difficulty scores.
#3: SEMrush
Ahrefs comes in fifth by a large margin at .316, barely passing the “weak relationship” threshold.
Ahrefs Organic Difficulty Predictability
PCC
0.316
P-val
.03 (P<0.05)
Relationship
Moderate
% Keywords Matched
100%
On a positive note, the tool seems to be very reliable with low difficulty scores (notice the tight clustering for low difficulty scores), and matched all 50 keywords.
#6: Google Keyword Planner Tool
And the resulting scores are as follows:
Tool
PCC Test
Moz
10
SpyFu
9.8
SEMrush
8.8
KW Finder
8.7
Ahrefs
7.7
KPT
1.1
Moz takes the top position for the first test, followed closely by SpyFu (with an 80% match rate caveat).
Test 2: Adjusted Pearson Correlation Coefficient
Let’s call this the “Mulligan Round.” In this round, assuming sometimes things just go haywire and a tool just flat-out misses, we will remove the three most egregious outliers to each tool’s score.
Here are the adjusted results for the handicap round:
Adjusted Scores (3 Outliers removed)
PCC
Difference (+/-)
SpyFu
0.527
0.122
SEMrush
0.515
0.150
Moz
0.514
0.101
Ahrefs
0.478
0.162
KWFinder
0.470
0.110
Keyword Planner Tool
0.189
0.144
As noted in the original PCC test, some of these tools really took a big hit with major outliers. Specifically, Ahrefs and SEMrush benefitted the most from their outliers being removed, gaining .162 and .150 respectively to their scores, while Moz benefited the least from the adjustments.
For those of you crying out, “But this is real life, you don’t get mulligans with SEO!”, never fear, we will make adjustments for reliability at the end.
Here are the updated scores at the end of round two:
Tool
PCC Test
Adjusted PCC
Total
SpyFu
9.8
10
19.8
Moz
10
9.7
19.7
SEMrush
8.8
9.8
18.6
KW Finder
8.7
8.9
17.6
AHREFs
7.7
9.1
16.8
KPT
1.1
3.6
4.7
SpyFu takes the lead! Now let’s jump into the final round of statistical tests.
Test 3: Resampling
Being that there has never been a study performed on keyword research tools at this scale, we wanted to ensure that we explored multiple ways of looking at the data.
Big thanks to Russ Jones, who put together an entirely different model that answers the question: "What is the likelihood that the keyword difficulty of two randomly selected keywords will correctly predict the relative position of rankings?"
He randomly selected 2 keywords from the list and their associated difficulty scores.
Let’s assume one tool says that the difficulties are 30 and 60, respectively. What is the likelihood that the article written for a score of 30 ranks higher than the article written on 60? Then, he performed the same test 1,000 times.
He also threw out examples where the two randomly selected keywords shared the same rankings, or data points were missing. Here was the outcome:
Resampling
% Guessed correctly
Moz
62.2%
Ahrefs
61.2%
SEMrush
60.3%
Keyword Finder
58.9%
SpyFu
54.3%
KPT
45.9%
As you can see, this tool was particularly critical on each of the tools. As we are starting to see, no one tool is a silver bullet, so it is our job to see how much each tool helps make more educated decisions than guessing.
Most tools stayed pretty consistent with their levels of performance from the previous tests, except SpyFu, which struggled mightily with this test.
In order to score this test, we need to use 50% as the baseline (equivalent of a coin flip, or zero points), and scale each tool relative to how much better it performed over a coin flip, with the top scorer receiving ten points.
For example, Ahrefs scored 11.2% better than flipping a coin, which is 8.2% less than Moz which scored 12.2% better than flipping a coin, giving AHREFs a score of 9.2.
The updated scores are as follows:
Tool
PCC Test
Adjusted PCC
Resampling
Total
Moz
10
9.7
10
29.7
SEMrush
8.8
9.8
8.4
27
Ahrefs
7.7
9.1
9.2
26
KW Finder
8.7
8.9
7.3
24.9
SpyFu
9.8
10
3.5
23.3
KPT
1.1
3.6
-.4
.7
So after the last statistical accuracy test, we have Moz consistently performing alone in the top tier. SEMrush, Ahrefs, and KW Finder all turn in respectable scores in the second tier, followed by the unique case of SpyFu, which performed outstanding in the first two tests (albeit, only returning results on 80% of the tested keywords), then falling flat on the final test.
Finally, we need to make some usability adjustments.
Usability Adjustment 1: Keyword Matching
A keyword research tool doesn’t do you much good if it can’t provide results for the keywords you are researching. Plain and simple, we can’t treat two tools as equals if they don’t have the same level of practical functionality.
To explain in practical terms, if a tool doesn’t have data on a particular keyword, one of two things will happen:
You have to use another tool to get the data, which devalues the entire point of using the original tool.
You miss an opportunity to rank for a high-value keyword.
Neither scenario is good, therefore we developed a penalty system. For each 10% match rate under 100%, we deducted a single point from the final score, with a maximum deduction of 5 points. For example, if a tool matched 92% of the keywords, we would deduct .8 points from the final score.
One may argue that this penalty is actually too lenient considering the significance of the two unideal scenarios outlined above.
The penalties are as follows:
Tool
Match Rate
Penalty
KW Finder
100%
0
Ahrefs
100%
0
Moz
100%
0
SEMrush
92%
-.8
Keyword Planner Tool
88%
-1.2
SpyFu
80%
-2
Please note we gave SEMrush a lot of leniency, in that technically, many of the keywords evaluated were..
https://ift.tt/2B9C2hF
0 notes
Text
Ranking the 6 Most Accurate Keyword Research Tools
Posted by Jeff_Baker
In January of 2018 Brafton began a massive organic keyword targeting campaign, amounting to over 90,000 words of blog content being published.
Did it work?
Well, yeah. We doubled the number of total keywords we rank for in less than six months. By using our advanced keyword research and topic writing process published earlier this year we also increased our organic traffic by 45% and the number of keywords ranking in the top ten results by 130%.
But we got a whole lot more than just traffic.
From planning to execution and performance tracking, we meticulously logged every aspect of the project. I’m talking blog word count, MarketMuse performance scores, on-page SEO scores, days indexed on Google. You name it, we recorded it.
As a byproduct of this nerdery, we were able to draw juicy correlations between our target keyword rankings and variables that can affect and predict those rankings. But specifically for this piece...
How well keyword research tools can predict where you will rank.
A little background
We created a list of keywords we wanted to target in blogs based on optimal combinations of search volume, organic keyword difficulty scores, SERP crowding, and searcher intent.
We then wrote a blog post targeting each individual keyword. We intended for each new piece of blog content to rank for the target keyword on its own.
With our keyword list in hand, my colleague and I manually created content briefs explaining how we would like each blog post written to maximize the likelihood of ranking for the target keyword. Here’s an example of a typical brief we would give to a writer:
While Moz wins top-performing keyword research tool, note that any keyword research tool with organic difficulty functionality will give you an advantage over flipping a coin (or using Google Keyword Planner Tool).
As you will see in the following paragraphs, we have run each tool through a battery of statistical tests to ensure that we painted a fair and accurate representation of its performance. I’ll even provide the raw data for you to inspect for yourself.
Let’s dig in!
The Pearson Correlation Coefficient
Yes, statistics! For those of you currently feeling panicked and lobbing obscenities at your screen, don’t worry — we’re going to walk through this together.
In order to understand the relationship between two variables, our first step is to create a scatter plot chart.
Below is the scatter plot for our 50 keyword rankings compared to their corresponding Moz organic difficulty scores.
Phew. Still with me?
So each of these scatter plots will have a corresponding PCC score that will tell us how well each tool predicted where we would rank, based on its keyword difficulty score.
We will use the following table from statisticshowto.com to interpret the PCC score for each tool:
Coefficient Correlation R Score
Key
.70 or higher
Very strong positive relationship
.40 to +.69
Strong positive relationship
.30 to +.39
Moderate positive relationship
.20 to +.29
Weak positive relationship
.01 to +.19
No or negligible relationship
0
No relationship [zero correlation]
-.01 to -.19
No or negligible relationship
-.20 to -.29
Weak negative relationship
-.30 to -.39
Moderate negative relationship
-.40 to -.69
Strong negative relationship
-.70 or higher
Very strong negative relationship
In order to visually understand what some of these relationships would look like on a scatter plot, check out these sample charts from Laerd Statistics.
The closer the numbers cluster towards the regression line in either a positive or negative slope, the stronger the relationship.
That was the tough part - you still with me? Great, now let’s look at each tool’s results.
Test 1: The Pearson Correlation Coefficient
Now that we've all had our statistics refresher course, we will take a look at the results, in order of performance. We will evaluate each tool’s PCC score, the statistical significance of the data (P-val), the strength of the relationship, and the percentage of keywords the tool was able to find and report keyword difficulty values for.
In order of performance:
#1: Moz
Visually, SpyFu shows a fairly tight clustering amongst low difficulty keywords, and a couple moderate outliers amongst the higher difficulty keywords.
SpyFu Organic Difficulty Predictability
PCC
0.405
P-val
.01 (P<0.05)
Relationship
Strong
% Keywords Matched
80.00%
SpyFu came in right under Moz with 1.7% weaker PCC (.405). However, the tool ran into the largest issue with keyword matching, with only 40 of 50 keywords producing keyword difficulty scores.
#3: SEMrush
Ahrefs comes in fifth by a large margin at .316, barely passing the “weak relationship” threshold.
Ahrefs Organic Difficulty Predictability
PCC
0.316
P-val
.03 (P<0.05)
Relationship
Moderate
% Keywords Matched
100%
On a positive note, the tool seems to be very reliable with low difficulty scores (notice the tight clustering for low difficulty scores), and matched all 50 keywords.
#6: Google Keyword Planner Tool
And the resulting scores are as follows:
Tool
PCC Test
Moz
10
SpyFu
9.8
SEMrush
8.8
KW Finder
8.7
Ahrefs
7.7
KPT
1.1
Moz takes the top position for the first test, followed closely by SpyFu (with an 80% match rate caveat).
Test 2: Adjusted Pearson Correlation Coefficient
Let’s call this the “Mulligan Round.” In this round, assuming sometimes things just go haywire and a tool just flat-out misses, we will remove the three most egregious outliers to each tool’s score.
Here are the adjusted results for the handicap round:
Adjusted Scores (3 Outliers removed)
PCC
Difference (+/-)
SpyFu
0.527
0.122
SEMrush
0.515
0.150
Moz
0.514
0.101
Ahrefs
0.478
0.162
KWFinder
0.470
0.110
Keyword Planner Tool
0.189
0.144
As noted in the original PCC test, some of these tools really took a big hit with major outliers. Specifically, Ahrefs and SEMrush benefitted the most from their outliers being removed, gaining .162 and .150 respectively to their scores, while Moz benefited the least from the adjustments.
For those of you crying out, “But this is real life, you don’t get mulligans with SEO!”, never fear, we will make adjustments for reliability at the end.
Here are the updated scores at the end of round two:
Tool
PCC Test
Adjusted PCC
Total
SpyFu
9.8
10
19.8
Moz
10
9.7
19.7
SEMrush
8.8
9.8
18.6
KW Finder
8.7
8.9
17.6
AHREFs
7.7
9.1
16.8
KPT
1.1
3.6
4.7
SpyFu takes the lead! Now let’s jump into the final round of statistical tests.
Test 3: Resampling
Being that there has never been a study performed on keyword research tools at this scale, we wanted to ensure that we explored multiple ways of looking at the data.
Big thanks to Russ Jones, who put together an entirely different model that answers the question: "What is the likelihood that the keyword difficulty of two randomly selected keywords will correctly predict the relative position of rankings?"
He randomly selected 2 keywords from the list and their associated difficulty scores.
Let’s assume one tool says that the difficulties are 30 and 60, respectively. What is the likelihood that the article written for a score of 30 ranks higher than the article written on 60? Then, he performed the same test 1,000 times.
He also threw out examples where the two randomly selected keywords shared the same rankings, or data points were missing. Here was the outcome:
Resampling
% Guessed correctly
Moz
62.2%
Ahrefs
61.2%
SEMrush
60.3%
Keyword Finder
58.9%
SpyFu
54.3%
KPT
45.9%
As you can see, this tool was particularly critical on each of the tools. As we are starting to see, no one tool is a silver bullet, so it is our job to see how much each tool helps make more educated decisions than guessing.
Most tools stayed pretty consistent with their levels of performance from the previous tests, except SpyFu, which struggled mightily with this test.
In order to score this test, we need to use 50% as the baseline (equivalent of a coin flip, or zero points), and scale each tool relative to how much better it performed over a coin flip, with the top scorer receiving ten points.
For example, Ahrefs scored 11.2% better than flipping a coin, which is 8.2% less than Moz which scored 12.2% better than flipping a coin, giving AHREFs a score of 9.2.
The updated scores are as follows:
Tool
PCC Test
Adjusted PCC
Resampling
Total
Moz
10
9.7
10
29.7
SEMrush
8.8
9.8
8.4
27
Ahrefs
7.7
9.1
9.2
26
KW Finder
8.7
8.9
7.3
24.9
SpyFu
9.8
10
3.5
23.3
KPT
1.1
3.6
-.4
.7
So after the last statistical accuracy test, we have Moz consistently performing alone in the top tier. SEMrush, Ahrefs, and KW Finder all turn in respectable scores in the second tier, followed by the unique case of SpyFu, which performed outstanding in the first two tests (albeit, only returning results on 80% of the tested keywords), then falling flat on the final test.
Finally, we need to make some usability adjustments.
Usability Adjustment 1: Keyword Matching
A keyword research tool doesn’t do you much good if it can’t provide results for the keywords you are researching. Plain and simple, we can’t treat two tools as equals if they don’t have the same level of practical functionality.
To explain in practical terms, if a tool doesn’t have data on a particular keyword, one of two things will happen:
You have to use another tool to get the data, which devalues the entire point of using the original tool.
You miss an opportunity to rank for a high-value keyword.
Neither scenario is good, therefore we developed a penalty system. For each 10% match rate under 100%, we deducted a single point from the final score, with a maximum deduction of 5 points. For example, if a tool matched 92% of the keywords, we would deduct .8 points from the final score.
One may argue that this penalty is actually too lenient considering the significance of the two unideal scenarios outlined above.
The penalties are as follows:
Tool
Match Rate
Penalty
KW Finder
100%
0
Ahrefs
100%
0
Moz
100%
0
SEMrush
92%
-.8
Keyword Planner Tool
88%
-1.2
SpyFu
80%
-2
Please note we gave SEMrush a lot of leniency, in that technically, many of the keywords evaluated were..
https://ift.tt/2B9C2hF
0 notes
Text
Ranking the 6 Most Accurate Keyword Research Tools
Posted by Jeff_Baker
In January of 2018 Brafton began a massive organic keyword targeting campaign, amounting to over 90,000 words of blog content being published.
Did it work?
Well, yeah. We doubled the number of total keywords we rank for in less than six months. By using our advanced keyword research and topic writing process published earlier this year we also increased our organic traffic by 45% and the number of keywords ranking in the top ten results by 130%.
But we got a whole lot more than just traffic.
From planning to execution and performance tracking, we meticulously logged every aspect of the project. I’m talking blog word count, MarketMuse performance scores, on-page SEO scores, days indexed on Google. You name it, we recorded it.
As a byproduct of this nerdery, we were able to draw juicy correlations between our target keyword rankings and variables that can affect and predict those rankings. But specifically for this piece...
How well keyword research tools can predict where you will rank.
A little background
We created a list of keywords we wanted to target in blogs based on optimal combinations of search volume, organic keyword difficulty scores, SERP crowding, and searcher intent.
We then wrote a blog post targeting each individual keyword. We intended for each new piece of blog content to rank for the target keyword on its own.
With our keyword list in hand, my colleague and I manually created content briefs explaining how we would like each blog post written to maximize the likelihood of ranking for the target keyword. Here’s an example of a typical brief we would give to a writer:
While Moz wins top-performing keyword research tool, note that any keyword research tool with organic difficulty functionality will give you an advantage over flipping a coin (or using Google Keyword Planner Tool).
As you will see in the following paragraphs, we have run each tool through a battery of statistical tests to ensure that we painted a fair and accurate representation of its performance. I’ll even provide the raw data for you to inspect for yourself.
Let’s dig in!
The Pearson Correlation Coefficient
Yes, statistics! For those of you currently feeling panicked and lobbing obscenities at your screen, don’t worry — we’re going to walk through this together.
In order to understand the relationship between two variables, our first step is to create a scatter plot chart.
Below is the scatter plot for our 50 keyword rankings compared to their corresponding Moz organic difficulty scores.
Phew. Still with me?
So each of these scatter plots will have a corresponding PCC score that will tell us how well each tool predicted where we would rank, based on its keyword difficulty score.
We will use the following table from statisticshowto.com to interpret the PCC score for each tool:
Coefficient Correlation R Score
Key
.70 or higher
Very strong positive relationship
.40 to +.69
Strong positive relationship
.30 to +.39
Moderate positive relationship
.20 to +.29
Weak positive relationship
.01 to +.19
No or negligible relationship
0
No relationship [zero correlation]
-.01 to -.19
No or negligible relationship
-.20 to -.29
Weak negative relationship
-.30 to -.39
Moderate negative relationship
-.40 to -.69
Strong negative relationship
-.70 or higher
Very strong negative relationship
In order to visually understand what some of these relationships would look like on a scatter plot, check out these sample charts from Laerd Statistics.
The closer the numbers cluster towards the regression line in either a positive or negative slope, the stronger the relationship.
That was the tough part - you still with me? Great, now let’s look at each tool’s results.
Test 1: The Pearson Correlation Coefficient
Now that we've all had our statistics refresher course, we will take a look at the results, in order of performance. We will evaluate each tool’s PCC score, the statistical significance of the data (P-val), the strength of the relationship, and the percentage of keywords the tool was able to find and report keyword difficulty values for.
In order of performance:
#1: Moz
Visually, SpyFu shows a fairly tight clustering amongst low difficulty keywords, and a couple moderate outliers amongst the higher difficulty keywords.
SpyFu Organic Difficulty Predictability
PCC
0.405
P-val
.01 (P<0.05)
Relationship
Strong
% Keywords Matched
80.00%
SpyFu came in right under Moz with 1.7% weaker PCC (.405). However, the tool ran into the largest issue with keyword matching, with only 40 of 50 keywords producing keyword difficulty scores.
#3: SEMrush
Ahrefs comes in fifth by a large margin at .316, barely passing the “weak relationship” threshold.
Ahrefs Organic Difficulty Predictability
PCC
0.316
P-val
.03 (P<0.05)
Relationship
Moderate
% Keywords Matched
100%
On a positive note, the tool seems to be very reliable with low difficulty scores (notice the tight clustering for low difficulty scores), and matched all 50 keywords.
#6: Google Keyword Planner Tool
And the resulting scores are as follows:
Tool
PCC Test
Moz
10
SpyFu
9.8
SEMrush
8.8
KW Finder
8.7
Ahrefs
7.7
KPT
1.1
Moz takes the top position for the first test, followed closely by SpyFu (with an 80% match rate caveat).
Test 2: Adjusted Pearson Correlation Coefficient
Let’s call this the “Mulligan Round.” In this round, assuming sometimes things just go haywire and a tool just flat-out misses, we will remove the three most egregious outliers to each tool’s score.
Here are the adjusted results for the handicap round:
Adjusted Scores (3 Outliers removed)
PCC
Difference (+/-)
SpyFu
0.527
0.122
SEMrush
0.515
0.150
Moz
0.514
0.101
Ahrefs
0.478
0.162
KWFinder
0.470
0.110
Keyword Planner Tool
0.189
0.144
As noted in the original PCC test, some of these tools really took a big hit with major outliers. Specifically, Ahrefs and SEMrush benefitted the most from their outliers being removed, gaining .162 and .150 respectively to their scores, while Moz benefited the least from the adjustments.
For those of you crying out, “But this is real life, you don’t get mulligans with SEO!”, never fear, we will make adjustments for reliability at the end.
Here are the updated scores at the end of round two:
Tool
PCC Test
Adjusted PCC
Total
SpyFu
9.8
10
19.8
Moz
10
9.7
19.7
SEMrush
8.8
9.8
18.6
KW Finder
8.7
8.9
17.6
AHREFs
7.7
9.1
16.8
KPT
1.1
3.6
4.7
SpyFu takes the lead! Now let’s jump into the final round of statistical tests.
Test 3: Resampling
Being that there has never been a study performed on keyword research tools at this scale, we wanted to ensure that we explored multiple ways of looking at the data.
Big thanks to Russ Jones, who put together an entirely different model that answers the question: "What is the likelihood that the keyword difficulty of two randomly selected keywords will correctly predict the relative position of rankings?"
He randomly selected 2 keywords from the list and their associated difficulty scores.
Let’s assume one tool says that the difficulties are 30 and 60, respectively. What is the likelihood that the article written for a score of 30 ranks higher than the article written on 60? Then, he performed the same test 1,000 times.
He also threw out examples where the two randomly selected keywords shared the same rankings, or data points were missing. Here was the outcome:
Resampling
% Guessed correctly
Moz
62.2%
Ahrefs
61.2%
SEMrush
60.3%
Keyword Finder
58.9%
SpyFu
54.3%
KPT
45.9%
As you can see, this tool was particularly critical on each of the tools. As we are starting to see, no one tool is a silver bullet, so it is our job to see how much each tool helps make more educated decisions than guessing.
Most tools stayed pretty consistent with their levels of performance from the previous tests, except SpyFu, which struggled mightily with this test.
In order to score this test, we need to use 50% as the baseline (equivalent of a coin flip, or zero points), and scale each tool relative to how much better it performed over a coin flip, with the top scorer receiving ten points.
For example, Ahrefs scored 11.2% better than flipping a coin, which is 8.2% less than Moz which scored 12.2% better than flipping a coin, giving AHREFs a score of 9.2.
The updated scores are as follows:
Tool
PCC Test
Adjusted PCC
Resampling
Total
Moz
10
9.7
10
29.7
SEMrush
8.8
9.8
8.4
27
Ahrefs
7.7
9.1
9.2
26
KW Finder
8.7
8.9
7.3
24.9
SpyFu
9.8
10
3.5
23.3
KPT
1.1
3.6
-.4
.7
So after the last statistical accuracy test, we have Moz consistently performing alone in the top tier. SEMrush, Ahrefs, and KW Finder all turn in respectable scores in the second tier, followed by the unique case of SpyFu, which performed outstanding in the first two tests (albeit, only returning results on 80% of the tested keywords), then falling flat on the final test.
Finally, we need to make some usability adjustments.
Usability Adjustment 1: Keyword Matching
A keyword research tool doesn’t do you much good if it can’t provide results for the keywords you are researching. Plain and simple, we can’t treat two tools as equals if they don’t have the same level of practical functionality.
To explain in practical terms, if a tool doesn’t have data on a particular keyword, one of two things will happen:
You have to use another tool to get the data, which devalues the entire point of using the original tool.
You miss an opportunity to rank for a high-value keyword.
Neither scenario is good, therefore we developed a penalty system. For each 10% match rate under 100%, we deducted a single point from the final score, with a maximum deduction of 5 points. For example, if a tool matched 92% of the keywords, we would deduct .8 points from the final score.
One may argue that this penalty is actually too lenient considering the significance of the two unideal scenarios outlined above.
The penalties are as follows:
Tool
Match Rate
Penalty
KW Finder
100%
0
Ahrefs
100%
0
Moz
100%
0
SEMrush
92%
-.8
Keyword Planner Tool
88%
-1.2
SpyFu
80%
-2
Please note we gave SEMrush a lot of leniency, in that technically, many of the keywords evaluated were..
https://ift.tt/2B9C2hF
0 notes
Text
Ranking the 6 Most Accurate Keyword Research Tools
Posted by Jeff_Baker
In January of 2018 Brafton began a massive organic keyword targeting campaign, amounting to over 90,000 words of blog content being published.
Did it work?
Well, yeah. We doubled the number of total keywords we rank for in less than six months. By using our advanced keyword research and topic writing process published earlier this year we also increased our organic traffic by 45% and the number of keywords ranking in the top ten results by 130%.
But we got a whole lot more than just traffic.
From planning to execution and performance tracking, we meticulously logged every aspect of the project. I’m talking blog word count, MarketMuse performance scores, on-page SEO scores, days indexed on Google. You name it, we recorded it.
As a byproduct of this nerdery, we were able to draw juicy correlations between our target keyword rankings and variables that can affect and predict those rankings. But specifically for this piece...
How well keyword research tools can predict where you will rank.
A little background
We created a list of keywords we wanted to target in blogs based on optimal combinations of search volume, organic keyword difficulty scores, SERP crowding, and searcher intent.
We then wrote a blog post targeting each individual keyword. We intended for each new piece of blog content to rank for the target keyword on its own.
With our keyword list in hand, my colleague and I manually created content briefs explaining how we would like each blog post written to maximize the likelihood of ranking for the target keyword. Here’s an example of a typical brief we would give to a writer:
While Moz wins top-performing keyword research tool, note that any keyword research tool with organic difficulty functionality will give you an advantage over flipping a coin (or using Google Keyword Planner Tool).
As you will see in the following paragraphs, we have run each tool through a battery of statistical tests to ensure that we painted a fair and accurate representation of its performance. I’ll even provide the raw data for you to inspect for yourself.
Let’s dig in!
The Pearson Correlation Coefficient
Yes, statistics! For those of you currently feeling panicked and lobbing obscenities at your screen, don’t worry — we’re going to walk through this together.
In order to understand the relationship between two variables, our first step is to create a scatter plot chart.
Below is the scatter plot for our 50 keyword rankings compared to their corresponding Moz organic difficulty scores.
Phew. Still with me?
So each of these scatter plots will have a corresponding PCC score that will tell us how well each tool predicted where we would rank, based on its keyword difficulty score.
We will use the following table from statisticshowto.com to interpret the PCC score for each tool:
Coefficient Correlation R Score
Key
.70 or higher
Very strong positive relationship
.40 to +.69
Strong positive relationship
.30 to +.39
Moderate positive relationship
.20 to +.29
Weak positive relationship
.01 to +.19
No or negligible relationship
0
No relationship [zero correlation]
-.01 to -.19
No or negligible relationship
-.20 to -.29
Weak negative relationship
-.30 to -.39
Moderate negative relationship
-.40 to -.69
Strong negative relationship
-.70 or higher
Very strong negative relationship
In order to visually understand what some of these relationships would look like on a scatter plot, check out these sample charts from Laerd Statistics.
The closer the numbers cluster towards the regression line in either a positive or negative slope, the stronger the relationship.
That was the tough part - you still with me? Great, now let’s look at each tool’s results.
Test 1: The Pearson Correlation Coefficient
Now that we've all had our statistics refresher course, we will take a look at the results, in order of performance. We will evaluate each tool’s PCC score, the statistical significance of the data (P-val), the strength of the relationship, and the percentage of keywords the tool was able to find and report keyword difficulty values for.
In order of performance:
#1: Moz
Visually, SpyFu shows a fairly tight clustering amongst low difficulty keywords, and a couple moderate outliers amongst the higher difficulty keywords.
SpyFu Organic Difficulty Predictability
PCC
0.405
P-val
.01 (P<0.05)
Relationship
Strong
% Keywords Matched
80.00%
SpyFu came in right under Moz with 1.7% weaker PCC (.405). However, the tool ran into the largest issue with keyword matching, with only 40 of 50 keywords producing keyword difficulty scores.
#3: SEMrush
Ahrefs comes in fifth by a large margin at .316, barely passing the “weak relationship” threshold.
Ahrefs Organic Difficulty Predictability
PCC
0.316
P-val
.03 (P<0.05)
Relationship
Moderate
% Keywords Matched
100%
On a positive note, the tool seems to be very reliable with low difficulty scores (notice the tight clustering for low difficulty scores), and matched all 50 keywords.
#6: Google Keyword Planner Tool
And the resulting scores are as follows:
Tool: PCC Test score
Moz: 10
SpyFu: 9.8
SEMrush: 8.8
KW Finder: 8.7
Ahrefs: 7.7
KPT: 1.1
Moz takes the top position for the first test, followed closely by SpyFu (with an 80% match rate caveat).
Test 2: Adjusted Pearson Correlation Coefficient
Let’s call this the “Mulligan Round.” In this round, allowing that sometimes things just go haywire and a tool flat-out misses, we will remove the three most egregious outliers from each tool’s data before recalculating its score.
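The post doesn't spell out exactly how "most egregious" was defined, but one plausible mechanical reading is "the three points farthest from the regression line." Here's a rough Python sketch of that interpretation, again with placeholder data:

```python
# Sketch of one possible "mulligan": drop the 3 points with the largest
# absolute residuals from the least-squares line, then recompute the PCC.
# Data arrays are placeholders, as in the earlier sketch.
import numpy as np
from scipy.stats import pearsonr

difficulty = np.array([22, 35, 41, 48, 55, 63, 70, 18, 90, 40])
rankings = np.array([3, 8, 12, 20, 25, 41, 38, 50, 5, 15])

slope, intercept = np.polyfit(difficulty, rankings, 1)  # least-squares fit
residuals = np.abs(rankings - (slope * difficulty + intercept))
keep = np.argsort(residuals)[:-3]  # everything except the 3 worst offenders

adjusted_r, _ = pearsonr(difficulty[keep], rankings[keep])
print(f"Adjusted PCC = {adjusted_r:.3f}")
```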
Here are the adjusted results for the handicap round:
Adjusted Scores (3 outliers removed)
Tool: Adjusted PCC (difference vs. original)
SpyFu: 0.527 (+0.122)
SEMrush: 0.515 (+0.150)
Moz: 0.514 (+0.101)
Ahrefs: 0.478 (+0.162)
KW Finder: 0.470 (+0.110)
Keyword Planner Tool: 0.189 (+0.144)
As noted in the original PCC test, some of these tools really took a big hit from major outliers. Specifically, Ahrefs and SEMrush benefited the most from their outliers being removed, gaining .162 and .150 respectively, while Moz benefited the least from the adjustments.
For those of you crying out, “But this is real life, you don’t get mulligans with SEO!”, never fear, we will make adjustments for reliability at the end.
Here are the updated scores at the end of round two:
Tool: PCC Test + Adjusted PCC = Total
SpyFu: 9.8 + 10 = 19.8
Moz: 10 + 9.7 = 19.7
SEMrush: 8.8 + 9.8 = 18.6
KW Finder: 8.7 + 8.9 = 17.6
Ahrefs: 7.7 + 9.1 = 16.8
KPT: 1.1 + 3.6 = 4.7
SpyFu takes the lead! Now let’s jump into the final round of statistical tests.
Test 3: Resampling
Being that there has never been a study performed on keyword research tools at this scale, we wanted to ensure that we explored multiple ways of looking at the data.
Big thanks to Russ Jones, who put together an entirely different model that answers the question: "What is the likelihood that the keyword difficulty of two randomly selected keywords will correctly predict the relative position of rankings?"
He randomly selected 2 keywords from the list and their associated difficulty scores.
Let’s assume one tool says the difficulties are 30 and 60, respectively. What is the likelihood that the article written for the keyword scored 30 ranks higher than the article written for the keyword scored 60? He then performed the same test 1,000 times.
He also threw out examples where the two randomly selected keywords shared the same rankings, or data points were missing. Here was the outcome:
Resampling: % guessed correctly
Moz: 62.2%
Ahrefs: 61.2%
SEMrush: 60.3%
KW Finder: 58.9%
SpyFu: 54.3%
KPT: 45.9%
As you can see, this test was particularly critical of each of the tools. As we are starting to see, no one tool is a silver bullet, so it is our job to see how much each tool helps us make more educated decisions than guessing.
Most tools stayed pretty consistent with their levels of performance from the previous tests, except SpyFu, which struggled mightily with this test.
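To make the procedure concrete, here is a short Python sketch of the resampling test as described above, using placeholder (difficulty, rank) pairs rather than the study's actual data:

```python
# Resampling sketch: for each trial, draw two random keywords and check
# whether the tool's difficulty scores predict which one ranks better.
import random

# Placeholder (difficulty, google_rank) pairs for one tool; rank 1 is best.
keywords = [(22, 3), (35, 8), (41, 12), (48, 20), (55, 25), (63, 41), (70, 38)]

correct = trials = 0
for _ in range(1000):
    (d1, r1), (d2, r2) = random.sample(keywords, 2)
    if d1 == d2 or r1 == r2:
        continue  # ties and missing data points were thrown out
    trials += 1
    if (d1 < d2) == (r1 < r2):  # lower difficulty should mean better rank
        correct += 1

print(f"Guessed correctly: {correct / trials:.1%}")
```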
In order to score this test, we need to use 50% as the baseline (equivalent of a coin flip, or zero points), and scale each tool relative to how much better it performed over a coin flip, with the top scorer receiving ten points.
For example, Ahrefs scored 11.2 percentage points better than a coin flip, which is 8.2% less than Moz's 12.2 points, giving Ahrefs a score of 9.2 (11.2 / 12.2 × 10).
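In code, that scaling looks roughly like the sketch below. It reproduces the published scores to within rounding, except for KPT, whose published figure appears to have been rounded differently:

```python
# Scale resampling results so a coin flip (50%) scores 0 and the best
# tool's lift over 50% scores 10. Percentages are from the table above.
results = {"Moz": 62.2, "Ahrefs": 61.2, "SEMrush": 60.3,
           "KW Finder": 58.9, "SpyFu": 54.3, "KPT": 45.9}

best_lift = max(results.values()) - 50  # Moz: 12.2 points over a coin flip
for tool, pct in results.items():
    print(f"{tool}: {10 * (pct - 50) / best_lift:.1f}")
```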
The updated scores are as follows:
Tool: PCC Test + Adjusted PCC + Resampling = Total
Moz: 10 + 9.7 + 10 = 29.7
SEMrush: 8.8 + 9.8 + 8.4 = 27
Ahrefs: 7.7 + 9.1 + 9.2 = 26
KW Finder: 8.7 + 8.9 + 7.3 = 24.9
SpyFu: 9.8 + 10 + 3.5 = 23.3
KPT: 1.1 + 3.6 + (-4.0) = .7
So after the last statistical accuracy test, we have Moz consistently performing alone in the top tier. SEMrush, Ahrefs, and KW Finder all turn in respectable scores in the second tier, followed by the unique case of SpyFu, which performed outstandingly in the first two tests (albeit only returning results for 80% of the tested keywords), then fell flat on the final test.
Finally, we need to make some usability adjustments.
Usability Adjustment 1: Keyword Matching
A keyword research tool doesn’t do you much good if it can’t provide results for the keywords you are researching. Plain and simple, we can’t treat two tools as equals if they don’t have the same level of practical functionality.
To explain in practical terms, if a tool doesn’t have data on a particular keyword, one of two things will happen:
You have to use another tool to get the data, which devalues the entire point of using the original tool.
You miss an opportunity to rank for a high-value keyword.
Neither scenario is good, so we developed a penalty system: for every 10 percentage points of match rate below 100%, we deducted one point from the final score, with a maximum deduction of 5 points. For example, if a tool matched 92% of the keywords, we deducted .8 points from the final score.
One may argue that this penalty is actually too lenient considering the significance of the two unideal scenarios outlined above.
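The penalty rule is simple enough to express in a couple of lines; here's a sketch that reproduces the deductions in the table below:

```python
# One point deducted per 10 percentage points of match rate below 100%,
# capped at a 5-point maximum deduction.
def penalty(match_rate_pct: float) -> float:
    return min(5.0, (100.0 - match_rate_pct) / 10.0)

for tool, rate in [("KW Finder", 100), ("Ahrefs", 100), ("Moz", 100),
                   ("SEMrush", 92), ("Keyword Planner Tool", 88), ("SpyFu", 80)]:
    print(f"{tool}: penalty {penalty(rate):.1f}")
```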
The penalties are as follows:
Tool: Match rate (penalty)
KW Finder: 100% (0)
Ahrefs: 100% (0)
Moz: 100% (0)
SEMrush: 92% (-.8)
Keyword Planner Tool: 88% (-1.2)
SpyFu: 80% (-2)
Please note we gave SEMrush a lot of leniency, in that technically, many of the keywords evaluated were..
Text
Garmin Approach S60 Golf GPS Watch Reviews
After a lifetime of aiming and holding, I've finally decided to hang up my laser rangefinder. I have been scouring the market this past week for golf GPS watches: I wanted the best tech out there without the fuss, in a lightweight gadget. Garmin has a near-monopoly on golf wearables, and their latest toy, the Garmin Approach S60, is ripe for testing.
So I took a run at this golf GPS watch to see how it holds up against its price.
I put its multi-talented fitness-tracking specs to the test throughout my day. And I am finally ready to compare it to one of my favorites, the Garmin S6.
PROS
Complete fitness tracker
Accurate yardages including hazards and doglegs
Touchscreen color display with full colored course maps
PinPointer Feature
QuickFit band for style changes
Sleek and lightweight for the features
Phone notifications can be enabled
40,000+ pre-loaded courses
CONS
Battery life could be better
The software on some units has been glitchy
Expensive
Garmin Approach S60 Golf GPS Watch Reviews
Garmin has come to a point where executing golf features is second nature, so they have combined their golf prowess with activity-tracking capabilities. The Garmin Approach S60 golf GPS watch has been designed as a complete fitness solution with a heavy focus on golf. This is a premium watch, and the looks match up to the price. The specs are impressive. I wondered whether the S60 delivered on all it promised in a way that would please an old-timer.
I recommend the Garmin S60 for tech-savvy golfers who also fit other workouts into their day, like running, walking, or swimming. Those who want minimal distractions during their game can go for the S6 or the S2. The S60's gilded features are tempting, but they take a few button presses to reach. There might be a small learning curve to understand and use all its features, but if you are quick with the user interface, you will enjoy the myriad functions.
Features & Benefits of Garmin Approach s60
A. Features
- Slope-adjusted distances:
Going by the name ‘PlaysLike Distance’, this feature accounts for elevation when playing an uphill or downhill shot, so I don’t have to adjust the actual distance for elevation changes myself. This is the same high-end feature that makes the Garmin G8 handheld stand apart.
- PinPointer:
This is another great feature that mostly laser rangefinders boast of. PinPointer points you in the direction of the green when you're hitting blind. I could swing in the right direction even from dense woods and deep sand traps.
- Training Aid:
The Garmin Approach S60 golf GPS watch can also double as a valuable coach. It has the Swing Tempo training aid which helped me tame my tempo to the golden 3:1 ratio between backswing and downswing. It even allowed me to set my own tempo ratio and ensure all shots were played according to that speed.
If you own the TruSwing sensor, you can pair it with the S60 to measure your swing statistics. Once the TruSwing device is mounted on your club and you swing, the swing metrics appear on the S60 automatically. I’ve been watching some tech-savvy students make use of the metrics and scale their progress over time.
- AutoShot Feature:
The Garmin S60 can automatically detect shots and start measuring shot distance. I use the shot distance averages to select the right golf clubs. I found it profoundly useful that I didn't have to manually kickstart the measurement.
- Statistics:
The greatest advantage of Garmin Approach S60 is how it gives instant access to the statistics of your gameplay. Once I finished playing a hole, I could access the shot-by-shot information for that hole with just three button presses. I could also view the score and other statistics during any point of the round. Round summary collates all the raw data collected from my game into statistics that help me play better next time.
- Scoring:
You get to choose the scoring method you want to use on the Garmin S60 golf GPS watch. You can select the popular Stableford method, or you can enable Handicap scoring based on the Local Handicap and Handicap Index. The scoring function takes multiple button presses, but I like having at least a basic scoring function.
- Connection Features:
The S60 can connect to your smartphone via Bluetooth and show notifications received. It automatically shares information with the Garmin Connect account. The Garmin Connect app gives you statistical analysis of your game. It maintains all my activity data for future reference. So I can view a previous game and figure out how I played a particular round well. It really helps me straighten out some difficult shots.
It has the Find My Phone and Find My Watch features. These were life-savers for me, as I am prone to leaving my devices around the house; if they are in range, they will detect each other. I have yet to properly explore Connect IQ, but Garmin claims that it extends the device's features with new widgets, apps, and more.
- Activity Tracking:
The Garmin S60 golf GPS watch comes pre-loaded with apps to track outdoor activities like running, biking and swimming, each with its own custom module. You can also use it for training indoor without the GPS. For example, it measured my speed, distance, and cadence while running.
The S60 automatically creates a goal number of steps you must take based on previous recordings. It has the Move Alert which bugs me to move around if I’ve been stationary for too long. It also gives me sleep statistics if I wear it to the sack. It’s freaky but very helpful for someone with Sleep Apnea or such issues.
- Yardages:
Apart from the distance to the center, front and back of the greens, we get distances to hazards, layups, and doglegs.
- Design:
The Garmin Approach S60 brought back memories of the Fenix. Its suave design is lightweight with a round dial, just as big as a normal sports watch. You can choose between leather and silicone watch bands. The leather band is premium, but I prefer the silicone band because it repels sweat and doesn't choke your wrist in the summer like leather does. I can easily swap the bands to my preference thanks to the QuickFit system; the bands snap off without any tools.
The S60 is water-resistant down to 50 meters, holds 1 GB of internal memory, and weighs 1.8 ounces.
- Display:
The Garmin Approach S60 has a 1.2” touchscreen dial. I found the S6’s screen impressive; this one simply blew my mind with its ultra-clear color display at 240 x 240-pixel resolution against the S6’s 180 x 180. The display is easily readable, come hail or rain.
- Views:
In the Hole View, the Garmin Approach S60 automatically updated the holes for me as I finished one hole. I can also drag the hole indicator to change the hole in case the automated function is stuck. It has the Green View which allows me to move the Pin to a location of my choice. All the distances are updated accordingly.
- Battery:
The battery of the Garmin Approach S60 lasts through 10 hours in golfing mode or 10 days in watch mode. Since it's both a golf watch and a fitness tracker, I expected a little more battery life. What if I don't get time to charge it between activities?
B. Course Detail
The Garmin Approach S60 golf app boasts coverage of 40,000 golf courses worldwide, the largest database we have seen lately. There's no subscription fee for updating the database either. I just downloaded the Garmin Express app to update my local course, which had changed slightly in hole arrangement.
The Garmin Golf app also helps golfers compete with each other from any of these 40,000 courses. They even have a weekly leaderboard that I could join. It definitely gave me a big kick to play better and practice harder.
C. Setup/ Syncing
The Setup was a breeze. I charged it off the AC adapter and was good to go.
If I were a more cautious user, I would download the Garmin Express app for the PC and the Garmin Connect app for my cellphone. I ran a course update to make sure the courses were all latest before heading out. These apps also help with software updates and a short device registration process.
I know some users who have had repeated trouble syncing with the Garmin app, so I wouldn't rate the S60 as perfect just yet. The software needs tweaking to work well on every unit. Hopefully, the quality management team at Garmin will sort out this issue. The S60 is an expensive watch; nobody wants to be mired in connection issues, especially with Garmin Connect, which records and shows the statistics.
D. Ease of Use
As I feared, the Garmin Approach S60 golf GPS watch has one too many functions to peruse, and this could be distracting. Most of the options live in the control menu. I can add, remove, or reorder the widgets without any issues, which gives me the freedom to arrange the control menu to my preference.
I could zoom in and out of the Hazard view and point anywhere on the course to find the distance to that point. The screen is large enough for my fingers to manage comfortably.
E. Accuracy
The Garmin S60 has improved on previous golf GPS watches in terms of accuracy. Dusting off my trusty old rangefinder, I found the distances from both within 2 yards of each other. The ability to move the flag around also adds to the accuracy of the device.
F. Social Proof
To gather user opinions, I went through reviews from Garmin S60 owners. The vote is split on this Garmin golf GPS watch. Customers love that the options for customization turn it into a very personal device. There are a number of nifty features that I haven't even discovered yet. The all-around fitness tracking is useful for someone with multiple athletic interests, as many people have these days. Garmin gets brownie points from users for the lack of a subscription fee and the special PlaysLike Distance feature. Although it debuted as an expensive watch, it has come down to a more agreeable price for its feature set, so I expect its popularity to rise.
While a number of people reveled in the glory of all its features, half as many were miffed by the watch's occasional refusal to lock onto GPS satellites. That and the occasional syncing issue aside, I was suitably enamored of the S60.
Conclusion
I was on the lookout for a golf GPS watch that returns high value for money, and the Garmin S60 is perfectly positioned at its current price to do so. It's a cutting-edge creation with a lot of customizable features for the technologically forward golfer. It suits the current generation's inclination towards all things fitness, and it manages to do so without adding to the size of the watch. There are some software issues, as with every smartwatch of this generation. Fortunately, the ability to push software updates will help consumers warm up to the S60.