3 readability tests to improve documentation

How reader-friendly are your docs?

Posted 04 Nov 2015 by Laura Novich (Red Hat)
Image: typewriter keys. Original photo by mpclemens, modified by Rikki Endsley. CC BY-SA 2.0

The first task of any accomplished technical writer is to write for the audience. This task may sound simple, but when I thought about people living all over the world, I wondered: Can they read our documentation? Readability is something that has been studied for years, and what follows is a brief summary of what research shows.

Studies show that people respond to information they can easily understand. The question is: Are we writing content that the average person can easily read and understand? If people are not connecting with our content, one reason could be that we are writing "over their heads," which happens more often than you might think. In an effort to sound superior, intelligent, or expert in our fields, many of us overwrite content or reach for big words when simple ones would do.

A simple way to check your document to see whether it is easy to read is to use a readability test. Many different tests have been created for this purpose, and three of the most popular are:

  1. Flesch Reading Ease
  2. Flesch-Kincaid
  3. Gunning Fog Index

Popular readability tests

Flesch Reading Ease Test

Rudolf Flesch, author of Why Johnny Can't Read: And What You Can Do About It, created the Flesch Reading Ease Test as a way to further advance his belief that American teachers needed to return to teaching phonics rather than sight reading (whole-word literacy). His work and advocacy for reading and phonics were the inspiration for Dr. Seuss to write The Cat in the Hat. This test tells us how easy a text is to read. The formula is as follows:

Reading Ease = 206.835 - 1.015 x (total words / total sentences) - 84.6 x (total syllables / total words)

Formula via Wikipedia. CC BY-SA 3.0

The resulting score is interpreted as follows:

  • 90-100: Very easy (5th grade)
  • 80-90: Easy (6th grade)
  • 70-80: Fairly easy (7th grade)
  • 60-70: Plain English (8th-9th grade)
  • 50-60: Fairly difficult (10th-12th grade)
  • 30-50: Difficult (college)
  • 0-30: Very difficult (college graduate)

Table via Wikipedia. CC BY-SA 3.0

What does this mean?

  • The lower the score, the harder the text is to read
  • 65 is the "Plain English" rating

How does this score measure up to well-known publications? [1]

  • Reader's Digest: 65
  • Time Magazine: 52
  • Harvard Law Review: low 30s
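To make the formula concrete, here is a minimal Python sketch of my own. The vowel-run syllable counter is a crude approximation (real tools use dictionaries or more careful heuristics), so its scores will differ slightly from those of dedicated tools:

```python
import re

def count_syllables(word):
    # Crude approximation: one syllable per run of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Higher score = easier to read.
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

A short, monosyllabic sentence scores far higher (easier) than a polysyllabic one, which is exactly the behavior the table above describes.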

Flesch-Kincaid Grade Level Readability Test

The Flesch-Kincaid reading test is the result of a collaboration between Rudolf Flesch (mentioned above) and J. Peter Kincaid, an educator and scientist who split his career between academia and research for the U.S. Navy. Kincaid developed his version of the readability test while under contract with the Navy, in an effort to estimate the difficulty of technical manuals. The Flesch-Kincaid Grade Level Readability Test translates the score into a United States grade level, which makes it easier to judge whether the material is readable by a given audience. The formula is as follows:

Grade Level = 0.39 x (total words / total sentences) + 11.8 x (total syllables / total words) - 15.59

Formula via Wikipedia. CC BY-SA 3.0

The result corresponds to a U.S. grade level, so once the score is calculated, we know who can understand our writing. For example, President Obama's 2012 State of the Union address has a grade level of 8.5, whereas the Affordable Care Act has a readability level of 13.4 (university or higher). The readability levels of a few popular books may surprise you as well [3].
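The grade-level formula can be sketched the same way. This is a self-contained illustration of mine, again using a rough vowel-run syllable count rather than a proper syllable dictionary:

```python
import re

def _syllables(word):
    # Rough estimate: one syllable per run of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(_syllables(w) for w in words)
    # Result maps directly to a U.S. school grade level.
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Note that very simple text can score below zero; in practice, anything under first grade is simply "very easy."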

Gunning Fog Index

The Gunning Fog Index was created by Robert Gunning in 1952. The formula is as follows:

Fog Index = 0.4 x [(words / sentences) + 100 x (complex words / words)]

where complex words are those with three or more syllables.

Formula via Wikipedia. CC BY-SA 3.0

This index is not perfect as some words (such as university) are complex but easy to understand, whereas short words (such as boon) may not be as easy to understand. Given that, the results can be interpreted as follows [2]:

  • A fog score above 12 means the text is best suited to U.S. high school graduates and above
  • A score between 8 and 12 (the closer to 8, the better) is ideal
  • A score below 8 indicates near-universal understanding
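Here is a minimal sketch of the fog formula as well, assuming the simplest definition of a complex word (three or more syllables, estimated by vowel runs); real implementations also exclude proper nouns and common suffixes:

```python
import re

def _syllables(word):
    # Rough estimate: one syllable per run of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    # Complex words: three or more (estimated) syllables.
    complex_words = [w for w in words if _syllables(w) >= 3]
    return 0.4 * ((len(words) / len(sentences))
                  + 100 * (len(complex_words) / len(words)))
```

A sentence of short words lands well under 8, while a sentence packed with polysyllabic words shoots past 12, matching the interpretation above.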

Why should I care about the readability of my writing?

If our writing is too hard to read, no one will want to read it. The sad truth is that approximately 50% of Americans read at an eighth-grade level. The higher the grade level of our writing, the fewer people can read it. If we struggle to read something, our experience with the content is negative, and that negative experience makes us less likely to recommend the content to someone else. Have you ever recommended a book you did not enjoy reading? The same goes for documentation.

How do I calculate the readability of my writing?

There are several ways to calculate readability. The easiest is within a word processor or editing tool. For example, Publican is a publishing tool based on DocBook XML. Publican version 4.0.0 added Flesch-Kincaid statistics, which lets users run the following command:

$ publican report --quiet

This will generate a readability report.

If you are using Vim as your text editor, the vim-readability plug-in can be downloaded and installed from GitHub (thanks to Peter Ondrejka). A similar plug-in, gulpease, is also available for gedit. To check readability without a plug-in, copy and paste your text at Readability-Score.com.

Final thoughts

Keep it simple, sweetheart! The easier our documentation is to understand, the more people will use it. In case you're curious, this article has a readability of:

  • Flesch: 68.5
  • Flesch-Kincaid: 6.9
  • Gunning Fog: 9.3

Once we know the readability of our writing, we can simplify it, if necessary. I will outline ideas for doing that in my next article. Stay tuned!

Sources

  1. Flesch–Kincaid readability tests (Wikipedia)
  2. Gunning fog index
  3. This Surprising Reading Level Analysis Will Change the Way You Write, by Shane Snow
  4. Readability-Score.com
Doc Dish

This article is part of the Doc Dish column coordinated by Rikki Endsley. To contribute to this column, submit your story idea or contact us.

8 Comments

bcotton

This is awesome. I've used Publican for years, but I didn't know about the report option added in 4.

dragonbite

I'm dubious when mixing math and language arts, but this has me curious.

I usually pass it by family members to get an idea of its acceptance by non-technical, technical and more-technical audience.

:)

DAG

Also, the Perl scripting language has a module, Lingua::EN::Fathom, that can produce values for these three tests. Very smooth! Must start using this on my blog posts.

Greg P

Another aspect of readability and understandability is to consider non-native English speakers.
There is clearly an increasing ability for people all around the world to understand English, but even so, you should be careful with figurative uses of language, which in many cases could easily pass these tests on complexity, yet be hard to translate to some other language. There are also some idiomatic phrases which have different meanings, depending on the context.

lnovich

Well said, Greg, which is why localization is more important than translation: idioms need to be localized as well as translated to make sure the message conveys the same meaning and intention.
Thanks for commenting!
Laura

clhermansen

Laura, this is a GREAT article for all sorts of reasons. First, encouraging all of us to think about the readability of our work is an excellent idea; thank you for that! Second, having an automated diagnostic is pretty cool. Third, a tool that specifically addresses overall readability of the text is, in its own right, pretty interesting. In fact it made me go looking for other such tools and I found this:

https://github.com/bookieio/breadability

which not only talks about itself (as a tool) but gives some other alternatives.

lnovich

Thank you Chris!

dick moran

Never use an acronym in a document unless the meaning of the acronym has been previously defined in the document ** AND ** has been included in a glossary at the end of the document. The only exception I can think of is USA when the context makes it clear that it refers to a geographic area.


Laura Novich is a veteran technical writer who began her career in 1997 as a technical writer for 3Com. Since then, she has worked for a few start-ups and now works as a technical writer for Red Hat, specializing in virtualization products. Laura is an active maintainer and mentor in the Fedora open source community and is passionate about the principles of open source software, collaboration, innovation, and inclusion. She has a master's degree in TESOL and nine years of teaching experience.