#reading lots on old web and the development of algorithms and the death of simple sites and i want to tear my hair out
memser · 2 years
Text
sometimes (a lot recently) i get so looped into being online i want to buy a blackberry and an old computer and never visit current sites again. no more twitter no more tiktok no counting views/likes/shares/followers/growth. just posting my brain spew onto a blog and being where my friends are. ideal
18 notes
laurelkrugerr · 4 years
Text
Readability Algorithms Should Be Tools, Not Targets
About The Author
Frederick O’Brien is a freelance journalist who conforms to most British stereotypes. His interests include American literature, graphic design, sustainable …
Readability programs may seem like a godsend, but the worst thing writers can do is write to please them above all others. Finding your voice is hard enough without also trying to sound like everyone else.
The web is awash with words. They’re everywhere. On websites, in emails, advertisements, tweets, pop-ups, you name it. More people are publishing more copy than at any point in history. That means a lot of information, and a lot of competition.
In recent years a slew of ‘readability’ programs have appeared to help us tidy up the things we write. (Grammarly, Readable, and Yoast are just a handful that come to mind.) Used everywhere from newsrooms to browser plugins, these systems offer automated feedback on how writing can be clearer, neater, and less contrived. Sounds good right? Well, up to a point.
As with most things, there’s an xkcd comic for this.
The concept of ‘readability’ is nothing new. For decades researchers have analyzed factors like sentence length, syllable count, and word complexity in order to ‘measure’ language. Indeed, many of today’s programs incorporate decades-old formulas into their scoring systems.
The Flesch-Kincaid system, for example, is a widely used measure. Developed in 1975 by J. Peter Kincaid and his team, building on Rudolf Flesch’s earlier reading-ease formula, it assigns writing a US grade level. The Gunning fog index serves a similar purpose, and there are plenty more where they came from. We sure do love converting things into metrics.
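To make the arithmetic concrete, here is a minimal sketch of the Flesch-Kincaid grade-level formula in Python. The syllable counter is a rough vowel-group heuristic rather than the dictionary-backed counting real tools use, so treat the output as approximate.

```python
import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels, drop a trailing silent 'e'.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Flesch-Kincaid grade level:
    # 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

sample = "Short words and short sentences score low. Convoluted, polysyllabic constructions score considerably higher."
print(round(flesch_kincaid_grade(sample), 1))
```

Run it on a paragraph of your own copy and you can see exactly which ingredients — sentence length and syllables per word — drive the grade up or down.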
It’s no mystery why formulas like this are (quite rightly) popular. They help keep language simple. They catch silly mistakes, correct poor grammar, and do a serviceable job of ‘proofreading’ in a pinch. Using them isn’t a problem; unquestioning devotion to their scores, however, is.
No A-Coding For Bad Taste
I want to tread carefully here because I have a lot of time for readability algorithms and the qualities they tend to support — clarity, accessibility, and open communication. I use them myself. They should be used, just not unquestioningly. A good algorithm is a useful tool in the writer’s proverbial toolbox, but it’s not a magic wand. Relying on one too heavily can lead to clunkier writing, short-sightedness, and, worst of all, a total uniformity of online voices.
One of the beauties of the internet is how it melts national borders, creating a fluid space for different cultures and voices to interact in. Readability historically targets academic and professional writing. The Flesch-Kincaid test was originally developed for US Navy technical manuals, for example. Most developers can appreciate the value of clear documentation, but it’s worth remembering that in the world of writing not everything should sound like US Navy technical manuals. There are nuances to different topics, languages, and cultures that monosyllabic American English can’t always capture.
Deference to these algorithms can take writers to absurd lengths. Plain English is one thing, but unquestioning obedience is another. I’ve seen a good few sentences butchered into strings of words that tick readability boxes like ‘write in short sentences’ and ‘use monosyllabic words wherever possible’, but border on nonsensical to the human eye. It’s a near-impossible thing to quantify, but it has been a recurring phenomenon in my own work, and having spoken with other copywriters and journalists I know it’s not just my rampant paranoia at work.
Let’s look at the limitations of these tools. When faced with some of the greatest writers of all time — authors, journalists, copywriters, speech writers — what’s the verdict? How do the masters manage?
A Tale of Two Cities by Charles Dickens. The opening chapter receives a grade of E from Readable.
George Orwell’s essay ‘Politics and the English Language’ bemoans how unclear language hides truth rather than expresses it. It gets a grade of D. Talk about having egg on your face!
The beginning of The Old Man and the Sea by Ernest Hemingway does tolerably well in the Hemingway Editor, though you’d have to edit a lot of it down to appease it completely.
A personal favorite that came up here was Ernie Pyle, one of the great war correspondents. His daily columns from the front lines during World War II were published in hundreds of newspapers nationwide. One column, ‘The Death of Captain Waskow’, is widely regarded as a high watermark of war reporting. It receives a grade of B from Readable, which notes the writing is a tad ‘impersonal.’ Have a read and decide for yourself.
Impersonal war correspondent Ernie Pyle. Credit: Indiana University.
Not all copywriting is literary of course, but enjoyable writing doesn’t always have to please readability algorithms. Shoehorning full stops into the middle of perfectly good sentences doesn’t make you Ernest Hemingway. I’m an expert in not being as good as Ernest Hemingway, so you can trust me on that.
Putting Readability Into Context
None of this is supposed to be a ‘gotcha’ for readability algorithms. They provide a quick, easy way to identify long or complex sentences. Sometimes those sentences need editing down and sometimes they’re just fine the way they are. That’s at the author’s discretion, but algorithms speed up the process.
Alternatively, if you’re trying to cut down on fluffy adverbs like ‘very’ you can do a lot worse than turning to the cold, hard feedback of a computer. Readability programs catch plenty of things we might miss, and there are plenty of examples of great writing that would receive suitably great scores when put through the systems listed above. They are useful tools; they’re just not infallible.
Algorithms can only understand topics within the confines of their system. They know what the rules are and how to follow them. Intuition, personal experience, and a healthy desire to break the rules remain human specialties. You can’t program those, not yet anyway. Things aren’t the done thing until they are, after all.
It’s a fine line between thinking your writing has to be clear, and thinking your readers are stupid. You stop seeing the wood for the trees. Every time I hear that the ‘ideal’ article length is X words regardless of the topic or audience, or that certain words should always be used because they improve CTR by 0.06%, I want to gouge my eyes out. Readability algorithms can make sloppy writing competent, but they can’t make good writing great.
Remember, when all is said and done, copy is written for people. From an SEO perspective, Google itself has made it clear in the past that readability should match your target audience. If you’re targeting a mass audience that needs information in layman’s terms, great, do that. If you produce specialized content for experts in a certain field then being more specialized is perfectly appropriate.
As Readable has itself explored, readability can be a kind of public good. Easy to read newspapers spread information better than obtuse ones do. Textbooks written for specific age groups teach better than highly technical ones do. In other words, understand the context you are writing in. Just remember:
“When a measure becomes a target, it ceases to be a good measure.”
— Goodhart’s Law
Find Your Voice
I have no beef with readability algorithms. My problem is with the laziness they can enable, the thoughtlessness. Rushing out a draft and running it through a readability tool is not going to improve your writing. As with any skill worth developing you have to be willing to put the hours in. That means going a step or two beyond blindly appeasing algorithms.
Not everyone has the luxury of a great editor, but when you work with one, make full use of the opportunity. Pay attention to their suggestions and ask yourself why they made them. Ask questions, identify recurring problems in your writing, and work to address them.
Analyse how the algorithms themselves work. If you’re going to use readability systems they should be supplemental to a genuine search for your own voice. Know how they calculate scores and what formulas they’re drawing from. Learn the rules yourself. By doing so you earn the knowledge required to break them.
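One low-effort way to do that is to run the same passage through several of the classic formulas and see where they disagree. The sketch below assumes the open-source Python textstat package (installed with pip install textstat); the function names are textstat’s, but double-check them against the version you install.

```python
import textstat  # assumed: the open-source 'textstat' readability package

SAMPLE = (
    "It was the best of times, it was the worst of times, it was the age of wisdom, "
    "it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity."
)

# Each formula weighs sentence length and word complexity differently,
# which is why the same passage can land a full grade level apart.
scores = {
    "Flesch Reading Ease": textstat.flesch_reading_ease(SAMPLE),
    "Flesch-Kincaid Grade": textstat.flesch_kincaid_grade(SAMPLE),
    "Gunning Fog Index": textstat.gunning_fog(SAMPLE),
    "SMOG Index": textstat.smog_index(SAMPLE),
}

for name, score in scores.items():
    print(f"{name}: {score}")
```

Seeing the spread between formulas is a useful reminder that a ‘score’ is just one model’s opinion of your prose, not a verdict.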
In his aforementioned essay George Orwell offers up his own approach to rules:
Never use a metaphor, simile, or other figure of speech which you are used to seeing in print.
Never use a long word where a short one will do.
If it is possible to cut a word out, always cut it out.
Never use the passive where you can use the active.
Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.
Break any of these rules sooner than say anything outright barbarous.
These are founded on solid principles applicable to the web. Where did those principles come from? Not computers, that’s for sure.
Real editors and honest self-reflection do a lot more for your writing ability long term than obeying algorithms does. It all feeds back into your communication, which is an essential skill whether you’re a copywriter, a developer, or a manager. Empathy for other people’s work improves your own.
There is another essential thing good writers do: they read. No algorithm can paper over the cracks of an unengaged mind. Whatever your interests, I guarantee there are people out there writing about them beautifully. Find them and read their work, and find the bad writing too. That can be just as educational.
If you’re so inclined, you may even decide to get all meta about it and read books about writing itself.
Also keep in mind that readability is not just a question of words. Design is also essential. Layout, visuals, and typography can have just as much impact on readability as the text itself. Think about how copy relates to the content around it or the device it’s being read on. Study advertising and newspapers and branding. On the other side of that sprawling jungle is your voice, and that’s the most valuable thing of all.
To reiterate one last time, readability algorithms are handy tools and I wholeheartedly support using them. However, if you’re serious about making your copy ‘compelling’, ‘informative’, or even (shudder) ‘convert’, then you’re going to have to do a lot more besides. The best writers are those algorithms are trying to imitate, not the other way around.
Whoever you are and whatever your discipline, your writing deserves attention. Whether it’s website copy, technical guides, or marketing material, developing your voice is the best way to communicate the things most important to you. By all means, use the tools at your disposal, but just don’t phone it in.
0 notes
republicstandard · 6 years
Text
Big Data, A.I., And The Future Of "Hate Speech"
Big Data has been an incredibly useful tool for businesses and consumers alike: it helps businesses understand their consumers better and gives them much more detailed information about the consumer market. However, big data also has the potential to control our lives, and that potential should not be ignored.
“Hate speech” is still a somewhat hot topic, but this article is not about “hate speech” as private individuals define it; it is strictly about how nation states themselves define it and what sorts of actions they take against it. Let us take a look at Europe’s hate speech laws and their history. Countries such as Latvia and South Africa also take action against “hate speech”. The fangs of “hate speech” laws entangle politicians, activists, and even comedians: Geert Wilders has been in court because of his opinions on Moroccans, several members of Britain First have been imprisoned, and comedian Count Dankula has been convicted for teaching a dog to mock Nazis.
Several nation states have already started to act on their (new and old) hate speech laws. As time moves on, the initial shock and passive rebellion will diminish, leading to further implementation, action, and normalization of these hate speech laws.
How does “Big Data” come into this? Keyword analytics and data mining. Keyword analytics is largely the reason those annoying Amazon suggestions come up whenever you do a Google search for a book or any other item that can be bought from an online marketplace: the way it works is that advertisers buy keywords so that their website appears at the top whenever someone else searches for those terms.

Data mining, meanwhile, is turning into the nuclear arms race of the Internet, for several reasons, one of them being profit. More information about your customers means you can further optimize your business to fit customer needs, and a certain amount of data might be the difference between a successful business and an unsuccessful one. Getting hold of this data can be difficult. Luckily for the buyers, there are various social media websites and applications holding loads of data that they are willing to sell for profit, one of them being Facebook, of course. This means that whatever you post on Facebook and various other websites and apps can be sold or given to whichever buyers want that information. Standard business procedure, right?
Not all business agreements are between companies, and not all of them are for profit. Facebook’s data mining scandal gave us another reminder that Big Business looks at your info from a utilitarian standpoint rather than a humanistic one; not a surprising fact, but an important one to consider lest we forget again. Seeing how our information and data are treated more as a resource than as property that should never, under ethical standards, be transgressed upon, we should consider the possibilities and actualities that can change our lives. One of those realities may be our “hateful” speech being limited. James O’Keefe’s journalistic piece on Twitter’s attitude towards “Pro-Trump and Conservative opinions” showcases this reality occurring right now.
[Embedded YouTube video]
Private businesses, groups and companies trade information for profit. So what about this is scary?
One interesting part of that video is that one of the engineers claims the US government pressures Twitter to ban certain celebrities such as Julian Assange. Discord is a growing social messaging app and social media platform that is also “fighting hate speech”. While it is still relatively small compared to other platforms, it is growing at a very fast pace, and it is becoming one of the top platforms for politics, with countless political Discord servers filled with hundreds (sometimes even thousands) of people. To recap: private businesses, groups and companies trade information for profit. So what about this is scary?
In short, a lot. One question is “who is buying the information?” A basic answer would be “other companies looking for more information on potential customers to see what they like and want”. That’s not the only correct answer, though: what if businesses like Discord partnered with groups like the SPLC and various other anti-free-speech organizations? Oh wait, they already do. But what about their ToS? Isn’t there anything that blocks them from doing such a thing?
[Screenshot: Discord app Terms of Service]
An exhortation to always read the small print if ever I saw one. Let us return now to James O’Keefe’s journalistic work and the part where one of the engineers talks about how they are pressured by the US government itself. Today’s governments, for whatever reason, do not like dissenting opinions at all. Of course, saying that openly would be a death sentence, so they hide their intentions under the guise of fighting “hate speech”.
Another facet of Big Data and AI in “combatting hate speech” is algorithms and machine learning. For people who are unfamiliar with “machine learning”: in basic terms, it is the computer literally teaching itself what it’s supposed to do. Four months ago the ADL released a video about their new tool to fight “online hate”, claiming that the learning model is “78%–85% accurate”. That is not an unlikely result.
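For readers curious about the mechanics, here is a minimal sketch of the kind of text classifier such tools are typically built on: a TF-IDF bag-of-words model with logistic regression via scikit-learn. This is a generic illustration, not the ADL’s actual model, and the tiny inline dataset is invented purely for demonstration.

```python
# A generic text-classification sketch (TF-IDF + logistic regression), not any vendor's real model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data, invented for illustration only; real systems use large labeled corpora.
texts = [
    "example of an insult aimed at an entire group",
    "polite disagreement with a policy proposal",
    "another slur-laden rant about a minority",
    "constructive feedback on an article",
]
labels = [1, 0, 1, 0]  # 1 = flagged as hateful, 0 = not flagged

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# predict_proba returns [P(not flagged), P(flagged)] for each input text.
print(model.predict_proba(["this opinion is wrong but worth debating"])[0][1])
```

Accuracy figures like “78%–85%” come from evaluating a model of this general shape on a held-out labeled set; what counts as a correct label is decided by whoever built the training data.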
[Embedded YouTube video]
As Europeans keep scratching their heads wondering what they should do, the poison disguised as an antidote is already being used in the USA. Antifa uses “in your face” street-violence techniques whilst the SPLC and ADL use more legalistic methods. This double whammy has been successful at pacifying “wrongthink” for a very long time. The modern European mindset is a bit different from the modern American one: Americans treasure their freedoms and rights, whilst Europeans really don’t put much emphasis on them in their daily lives. This difference might enable European nation states to fight “online hate” using the US’ methods, and the damage European nation states can deal will be much higher than in the US, since European nations have more legal ground on which to fight “hate speech”.
One of the primary methods I often talk about is the aforementioned keyword analytics. As we all know, either by experience or by observation, each group has its own niche language and inside jokes. It’s a key aspect of friendship in human nature: the ability to find things amusing to which outsiders are not privy. This has visibly developed in the online realm, where alt-righters and the far right have a very specific imageboard-style language and culture that is not that difficult for an average “Correct The Record” worker to get a feel for. If you, as the leader of a group or even as a nation state, wanted to detect the persons and identities responsible for wrongthink, all you would need is a list of keywords and a few algorithms that an engineer could easily create, and you would be able to detect tons of people committing the horrid crime of wrongthink and “hate speech”. Nation states also have the ability to request data from companies such as Google, and those companies are more than happy to share information, especially if it’s really needed. AI and Big Data still have a lot of ground to cover, but they will cover it.
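A keyword-based filter of that sort is trivial to build. The sketch below is purely hypothetical: the watch list and posts are invented, and real systems layer far more context on top of raw term matching.

```python
import re

# Hypothetical watch list of niche terms; real lists are curated and far longer.
WATCHLIST = {"examplecode1", "examplecode2", "exampleslang"}

posts = [
    {"user": "alice", "text": "Nothing unusual here, just posting about gardening."},
    {"user": "bob", "text": "Heh, examplecode1 strikes again, you know what I mean."},
]

def flag_posts(posts, watchlist):
    """Return (user, matched_terms) for every post containing a watched term."""
    flagged = []
    for post in posts:
        tokens = set(re.findall(r"[a-z0-9']+", post["text"].lower()))
        hits = tokens & watchlist
        if hits:
            flagged.append((post["user"], sorted(hits)))
    return flagged

print(flag_posts(posts, WATCHLIST))
```

The simplicity is the point: once the data is in one place, flagging niche vocabulary at scale is an afternoon of engineering, not a research project.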
The future for online political discourse might be grim if Western governments play their cards right.
0 notes
veohlinks · 7 years
Text
What is the Google Sandbox Theory?
What exactly is the Google Sandbox Theory?
Ok, so over the past month or so I have been gathering different SEO questions from all of you. Today, I’m going to answer the most frequently asked question of the past month. You guessed it: what is the Google Sandbox Theory, and how do I get out of it? When you finish reading this lesson, you’ll be an expert on the good ole’ Google Sandbox Theory and you’ll understand how to combat its effects. So pay close attention. This is some really essential stuff.

Before I start explaining what the Google Sandbox Theory is, let me make a couple of things clear. The Google Sandbox theory is just that, a theory, and lacks official confirmation from Google or the benefit of years of observation. The theory has been floating around since summer 2004, and only really gained steam after February 4, 2005, following a major Google index update (part of what used to be called the Google dance). Without being able to verify the existence of a Sandbox, much less how it works, it becomes very difficult to devise techniques to fight its effects. Practically everything you will read on the Internet about the Google Sandbox theory is guesswork, pieced together from individual experiences rather than from a wide-scale, objective, controlled experiment across many websites (something that would certainly help in figuring out the nature of the Sandbox, but is inherently impractical given the resources required). Therefore, as I’ll discuss towards the end, it’s crucial that you concentrate on ‘good’ SEO practices and not put excessive emphasis on quick ‘get-out-of-jail’ schemes which are, after all, only going to last until the next big Google update.

What is the Google Sandbox Theory? There are several theories that try to explain the Google Sandbox effect. Essentially, the issue is simple. Webmasters around the world began to observe that their new sites, optimized and chock full of inbound links, were not ranking well for their chosen keywords. In fact, the most commonly reported scenario was that after being listed in the SERPs (search engine results pages) for a couple of weeks, pages were either dropped from the index or ranked extremely low for their most important keywords. This pattern was tied to websites created (by created I mean that their domain was purchased and the website registered) around March 2004. All websites created around or after March 2004 were said to be suffering from the Sandbox effect. Some outliers escaped it completely, but webmasters on a broad scale had to deal with their websites ranking poorly even for terms for which they had optimized their sites to death.

Conspiracy theories grew considerably after the February 2005 update, codenamed ‘Allegra’ (how these updates are named I have no clue), when webmasters began seeing dramatically shifting results and fortunes. Well-ranked websites were losing their high SERP positions, while previously low-ranking sites had picked up speed to rank near the top for their keywords. This was a significant update to Google’s search algorithm, but what was intriguing was the apparent ‘exodus’ of sites from the Google Sandbox. This event provided the strongest evidence yet of the existence of a Google Sandbox, and enabled SEO professionals to better understand what the Sandbox effect was all about.
Possible explanations for the Google Sandbox effect

A common explanation offered for the Google Sandbox effect is the ‘time delay’ factor. Essentially, this theory suggests that Google releases websites from the Sandbox after a set period of time. Given that many webmasters started feeling the effects of the Sandbox around March–April 2004, and a great many of those sites were ‘released’ in the ‘Allegra’ update, this ‘site aging’ theory gained a lot of ground. However, I do not find much truth in the ‘time delay’ factor because, by itself, it is just an artificially imposed penalty on websites and does nothing to improve relevance (the Holy Grail for search engines). Since Google is the de facto leader of the search engine industry and is constantly making strides to improve the relevance of its results, tactics like this do not fit with what we know about Google. Conflicting evidence from many websites has shown that some sites created before March 2004 were still not released from the Sandbox, whereas some sites created as late as July 2004 managed to escape the Sandbox effect during the ‘Allegra’ update. Along with shattering the ‘time delay’ theory, this also raises some interesting questions.

This evidence has led some webmasters to suggest a ‘link threshold’ theory: once a site has collected a certain quantity of quality incoming links, it is released from the Sandbox. While this may be closer to the truth, it cannot be all there is to it. There has been evidence of sites that escaped the Sandbox effect without massive link-building campaigns. In my opinion, link popularity is certainly a factor in determining when a website is released from the Sandbox, but there is one more caveat attached to it. This idea is called ‘link aging’. Basically, this theory states that websites are released from the Sandbox based on the age of their inbound links. While we only have limited information to evaluate, this seems to be the most likely explanation for the Google Sandbox effect. The link-aging concept puzzles people, who usually assume it is the site that needs to age. Conceptually, a link to a website can only be as old as the website itself, yet if you do not have adequate inbound links after one year, common experience has it that you will not be able to escape the Google Sandbox. A quick hop around popular SEO forums (you do visit SEO forums, don’t you?) will lead you to numerous threads discussing different outcomes: some websites launched in July 2004 escaped by December 2004, while others were still stuck in the Sandbox after the ‘Allegra’ update.

How to find out if your site is sandboxed

Finding out if your site is ‘sandboxed’ is quite simple. If your website does not appear in any SERPs for your target list of keywords, or if your results are extremely dismal (ranked somewhere on the 40th page) even though you have plenty of incoming links and almost perfect on-page optimization, then your website has been sandboxed. Issues like the Google Sandbox theory tend to sidetrack webmasters from core ‘good’ SEO practices and unintentionally push them towards black-hat or quick-fix tactics that exploit the search engine’s weaknesses. The problem with that approach is its short-sightedness.
To explain what I’m talking about, let’s take a little detour and discuss search engine theory.

Understanding search engines

If you’re looking to do some SEO, it helps to understand what search engines are trying to do. Search engines want to deliver the most relevant information to their users. There are two problems with this: the imprecise search terms that people use, and the information glut that is the Internet. To compensate, search engines have developed increasingly intricate algorithms to deduce the relevance of content for different search terms. How does this help us? Well, as long as you keep producing highly targeted, quality content that is relevant to the topic of your site (and earn natural inbound links from related websites), you will stand a good chance of ranking high in the SERPs. It sounds unbelievably simple, and in this case, it is. As search engine algorithms evolve, they will keep getting better at their job, filtering out garbage and presenting the most relevant content to their users. While each search engine has its own way of determining placement (Google values incoming links quite a lot, while Yahoo has recently put extra weight on title tags and domains), in the end all search engines aim for the same goal, and by aiming to satisfy that goal you will always be able to make sure your site can achieve a good ranking.

Escaping the sandbox

Now, from our discussion of the Sandbox theory above, you understand that at best the Google Sandbox is a filter in the search engine’s algorithm that has a dampening effect on new websites. While a lot of SEO professionals will tell you that this effect decreases after a certain period of time, they wrongly attribute it to site aging, that is, to when the site is first spidered by Googlebot. In fact, the Sandbox does ‘hold back’ new sites, but more importantly, the effect diminishes over time on the basis of link aging, not site aging. This means that the time you spend in the Google Sandbox is directly tied to when you start acquiring quality links for your website. Hence, if you do nothing, your website might never be released from the Sandbox. However, if you keep your head down, stick to a low-intensity, long-term link-building plan, and keep adding inbound links to your website, you will be released from the Sandbox after an indeterminate period of time (within a year, most likely six months). To put it simply, the filter will stop having such a huge effect on your website. As the ‘Allegra’ update showed, websites that were continuously optimized while they were in the Sandbox began to rank quite high for their targeted keywords once the Sandbox effect ended. This and other observations of the Sandbox phenomenon, combined with an understanding of search engine philosophy, have led me to the following strategies for decreasing your site’s sandboxed time.

SEO techniques to reduce your website’s ‘sandboxed’ time

Despite what some SEO specialists might tell you, you don’t need to do anything different to escape the Google Sandbox.
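If you want to track the link-aging idea for your own site, the bookkeeping is simple: record when each inbound link was first seen and check how many have crossed a given age. A minimal sketch; the URLs, dates, and six-month threshold are purely illustrative, since (as noted above) nobody outside Google knows the real release criteria.

```python
from datetime import date

# Hypothetical log of inbound links and the date each was first seen.
inbound_links = {
    "https://example-blog.com/review": date(2004, 8, 12),
    "https://another-site.org/resources": date(2005, 1, 3),
    "https://partner-directory.net/listing": date(2004, 6, 30),
}

def links_older_than(links, today, min_age_days=180):
    """Return the links that have 'aged' at least min_age_days as of `today`."""
    return {url: seen for url, seen in links.items() if (today - seen).days >= min_age_days}

aged = links_older_than(inbound_links, today=date(2005, 2, 4))
print(f"{len(aged)} of {len(inbound_links)} inbound links are at least six months old")
```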
In fact, if you follow the ‘white hat’ rules of SEO and work on the principles I’ve discussed many times in this course, you’ll not only reduce your website’s sandboxed time but also ensure that your website ranks in the top 10 for your target keywords. Here’s a list of SEO techniques you should make sure you use when starting a new site:

Start promoting your website the minute you create it, not when it is ‘ready’. Don’t make the mistake of waiting for your website to be perfect. The motto is to get your product out on the market as quickly as possible, and then worry about improving it. Otherwise, how will you ever start to earn money?

Establish a low-intensity, long-term link-building strategy and follow it consistently. For example, you can set yourself a target of obtaining 20 links per week, or even a target of contacting 10 link partners a day (naturally, with SEO Elite, link building is a snap). This will ensure that as you develop your site, you also begin acquiring incoming links, and those links will age appropriately, so that by the time your website exits the Sandbox you will have both a high number of inbound links and a thriving website.

Avoid black-hat tactics such as keyword stuffing or ‘cloaking’. Google’s search algorithm evolves almost daily, and penalties for breaking the rules might keep you stuck in the Sandbox longer than normal.

Save your time by remembering the 20/80 rule: 80 percent of your optimization can be accomplished with just 20 percent of the effort. After that, any tweaking left to be done is specific to current search engine tendencies and liable to become ineffective as soon as a search engine updates its algorithm. Therefore, don’t waste your time optimizing for each and every search engine; just get the fundamentals right and move on to the next page.

Remember, you should always optimize with the end user in mind, not the search engine. As I mentioned previously, search engines are continuously improving their algorithms in pursuit of the key requirement: relevance. By ensuring that your website content is targeted at a particular keyword, and is judged as good content on both on-page factors (keyword density) and off-page factors (lots of quality inbound links), you will ensure that your site keeps ranking highly for your search terms no matter what changes are made to a search engine’s algorithm, whether it’s a dampening factor a la the Sandbox or any other peculiarity the search engine market throws up in the future.

Have you taken a look at SEO Elite yet? If not, what’s stopping you? Now, go out there and start smoking the search engines!
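Since keyword density comes up as the on-page yardstick here, this is a minimal sketch of how that metric is usually computed: occurrences of a target phrase divided by total word count. The sample text and target phrase are made up for illustration; real SEO tools add stemming, stop-word handling, and weighting that this deliberately omits.

```python
import re

def keyword_density(text, phrase):
    """Return the percentage of words in `text` accounted for by occurrences of `phrase`."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    phrase_words = phrase.lower().split()
    if not words or not phrase_words:
        return 0.0
    # Count occurrences of the phrase as a contiguous word sequence.
    hits = sum(
        1
        for i in range(len(words) - len(phrase_words) + 1)
        if words[i : i + len(phrase_words)] == phrase_words
    )
    return 100.0 * hits * len(phrase_words) / len(words)

sample = "Google sandbox theory explained: the Google sandbox delays new sites from ranking."
print(round(keyword_density(sample, "google sandbox"), 1))
```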
0 notes