
Insights from Google’s Voice Search Quality Rater Guidelines

Google has released quality rater guidelines specifically for voice search, the first time the company has published guidelines around the evolving world of voice search and assistant responses for search queries. While most will refer to them as the voice search quality rater guidelines, the official name is “Evaluation of Search Speech – Guidelines.”

For those wondering how this fits into the regular quality rater guidelines, this appears to be a stand-alone document, as it has its own version number. However, I wouldn’t be surprised to see it added to the regular quality rater guidelines in the future. That could mean we will be waiting a while for the next update to the regular guidelines, since these voice guidelines were released independently. Or it could mean Google plans to update and evolve these guidelines more frequently, which necessitates keeping them separate from the regular guidelines. We last saw an update to the regular Quality Rater Guidelines in July 2017.

These guidelines share a lot of similarities with the regular quality rater guidelines, and Google references them in these new guidelines, such as for the Needs Met rating. But there is an entirely new rating system meant specifically for voice search, covering how well an answer performs when the Google Assistant reads it aloud to the searcher – or the rater.

It also raises the possibility that these quality rater guidelines for voice are connected to the recent volatility in the search results, since raters can now test voice-specific queries as Google seeks to improve the quality of those results and snippets. Featured snippets tend to be highly volatile already, and it isn’t unusual to see a featured snippet change daily for competitive search queries, both as Google makes algo adjustments and as site owners respond by tweaking their own pages to earn those snippets or to regain them after losing them.

Needs Met for Voice

Site owners can read a great deal into the documentation. While some answers are factoids extracted from the Knowledge Graph, many voice answers are featured snippets that Google reads out to the searcher.

It’s also worth noting the brand reinforcement in some of these answers. When the answer is pulled from a featured snippet, Google reinforces the website and brand by reading it out as part of the answer, as many of the answers start with “According to XYZ, <answer>”. Google mentions in these voice guidelines that they specifically name the website the information is retrieved from, so that searchers can visit that site later for more information.

Some users might want additional information, and that is made available on the referenced website. The user will receive a link to the specific page.

This is good news, as many site owners were concerned that Google might drop the “According to XYZ” citation that is read aloud with these types of search results. It seems this is a conscious decision by Google to keep it.
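To make the format concrete, here is a minimal sketch in Python of the attributed framing those spoken answers follow. The function name and the example values are purely illustrative, not Google’s actual code:

```python
# A minimal sketch of the attributed answer format described above.
# The function name and example values are illustrative, not Google's code.
def spoken_answer(source_name: str, snippet_text: str) -> str:
    """Frame a featured snippet the way the assistant reads it aloud."""
    return f"According to {source_name}, {snippet_text}"

print(spoken_answer("Wikipedia", "the Eiffel Tower is 324 metres tall."))
# According to Wikipedia, the Eiffel Tower is 324 metres tall.
```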

The Needs Met rating for voice is very close to how it is used for the regular text-based search results in the rater guidelines. Just as a webpage is evaluated based on how well the landing page meets the needs of the search, this rating is applied based on how well the featured snippet or factoid that Google reads out answers the voice query.

The rating is also similar for app-based results, such as playing a specific song or video, which are also included in the regular quality rater guidelines. But in these examples, the queries are voice-based instead of textual.

Many site owners are very concerned that their site could potentially get a Slightly Meets rating. But in the case of voice search, Google is selecting one and only one result from the ten or so results on the page. And a site owner has less control over which site Google chooses for the answer and how well that answer fits the specific query.

But it does come down to featured snippets. If you can steal the featured snippet away from a competitor, then yours will be the answer that is read out by Google. You can then try to tailor that answer so that it would be rated higher once you land the featured snippet spot. And of course, Google wants to select better answers to be read out.

Because searchers are only evaluating a single result for voice queries, typically the featured snippet or a factoid pulled from the Knowledge Graph, and because these guidelines were first made available to raters on December 13, 2017, it is possible that some of the fluctuations we’ve seen over the last month or so are directly related to featured snippets and the volatility around results being adjusted based on that feedback. While quality raters do not directly impact the search results, they are used to test algos to determine what Google is getting wrong and what Google is getting right, either before unleashing a new algo live in the search results or to test the effectiveness of the current algo.

Speech Quality Rating

Google is also evaluating the voice search results based on how the Google Assistant responds verbally. This is not about whether the information is accurate, but how well it is presented verbally.

Length

This covers the length of the verbal response and whether it was detailed enough or too detailed. Some answers can be very short, such as how tall a specific building is, while others require a more in-depth and longer answer, such as a definition.

The length can be rated anywhere between “Too Short” and “Too Long”.
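As a rough illustration, here is a minimal sketch of a length sanity check a site owner could run on candidate snippet copy. The word-count thresholds below are hypothetical; the guidelines rate length on a scale without publishing numeric cut-offs, and the right length varies by query type.

```python
# A hypothetical length check for candidate voice-answer copy. The
# thresholds are illustrative only; the guidelines describe a "Too Short"
# to "Too Long" scale without numeric cut-offs, and a factoid query
# warrants a far shorter answer than a definition does.
def rate_length(answer: str, min_words: int = 15, max_words: int = 50) -> str:
    word_count = len(answer.split())
    if word_count < min_words:
        return "Too Short"
    if word_count > max_words:
        return "Too Long"
    return "OK"

definition = ("A featured snippet is a highlighted search result that "
              "answers the query directly at the top of the page.")
print(rate_length(definition))  # OK (19 words)
```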

Formulation

The formulation rating includes things such as how grammatically correct the answer is, whether it is correct for the native language being used, and whether the source, such as the website name, is clear and understandable.

There have been some complaints by site owners that Google is not pronouncing their brand or their website name correctly. However, site owners can also submit feedback from the text search result page for the same query, by using the feedback link underneath the featured snippet.

With grammar, this means that site owners who have their content used for featured snippets will need to ensure their content is grammatically correct, or at least close to it.
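One way to approximate that kind of check before publishing is an open-source grammar checker. Below is a minimal sketch using the language_tool_python package (a wrapper around LanguageTool, installable with pip); it is one tool a site owner could use, not anything Google has endorsed or uses itself.

```python
# A minimal sketch of a pre-publish grammar check on snippet copy, using
# the open-source language_tool_python package (pip install language_tool_python).
# This only approximates the grammatical-correctness aspect of the
# formulation rating; it is not Google's own check.
import language_tool_python

tool = language_tool_python.LanguageTool("en-US")
candidate = "An featured snippet are a short answer shown above the results."
for match in tool.check(candidate):
    print(f"{match.ruleId}: {match.message}")
```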

Formulation can be rated as “Bad”, “Moderate” or “Good”, as well as steps in between those three.

Elocution

This is interesting, as Google also wants raters to consider the elocution of the voice response. This is likely to try to prevent issues where the assistant voice sounds “too robotic”, a complaint often raised about many kinds of robotic voice response platforms. Raters are also rating based on the verbal speed of the response, any awkward rhythm in the response, and any mispronunciations.

Elocution can also be rated as “Bad”, “Moderate” or “Good”, as well as steps in between those.

Rating

An interesting aspect of this particular section of the voice quality rater guidelines is that a response can have a “Good” rating for one of the three categories, such as elocution, while the others don’t rate as well.
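As a minimal sketch of how those three scales stay independent, consider recording them as separate fields. The class and field names here are mine for illustration; only the scale labels come from the guidelines.

```python
# The three speech-quality scales are rated independently of one another.
# Class and field names are illustrative; scale labels are from the guidelines.
from dataclasses import dataclass

@dataclass
class SpeechQualityRating:
    length: str       # "Too Short" through "Too Long"
    formulation: str  # "Bad", "Moderate", "Good", plus in-between steps
    elocution: str    # "Bad", "Moderate", "Good", plus in-between steps

# A response can score well on one scale while rating poorly on the others:
rating = SpeechQualityRating(length="Too Long", formulation="Moderate", elocution="Good")
print(rating)
```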

Final Thoughts

While these guidelines aren’t nearly as in-depth as the regular Google quality rater guidelines, they do offer some insights for those trying to earn featured snippets or to steal them from a competitor. Because voice search primarily returns a single answer that the assistant reads out loud, it is very important for Google to get these answers as accurate as possible. If the quality of their results declines, they run the risk of searchers going to another search engine instead.

I expect site owners and searchers will see featured snippets be even more volatile this year than in the past – and they have already been pretty volatile. But that also means much more opportunity for site owners to earn those snippets.

For more on Google’s quality rater guidelines, please see all our updates here.

You can view and download the Google Voice Quality Rater Guidelines here.


Jennifer Slegg

Founder & Editor at The SEM Post
Jennifer Slegg is a longtime speaker and expert in search engine marketing, working in the industry for almost 20 years. When she isn't sitting at her desk writing and working, she can be found grabbing a latte at her local Starbucks or planning her next trip to Disneyland. She regularly speaks at Pubcon, SMX, State of Search, Brighton SEO and more, and has been presenting at conferences for over a decade.