At SMX West, Gary Illyes talked about how duplicate content is not a penalty, it is merely a filter. In today’s Google Webmaster Office Hours Hangout, John was asked a question about duplicate content and whether it could trigger an algorithmic penalty or lower your rankings.
Essentially, he said that Google will just pick one of the duplicates to display, and generally will not demote a site; however, there is one important exception.
Here is the excerpt:
This is kind of a tricky question in the sense that if just pages of your website are duplicated across the web, maybe you have your main website and the marketing website and it’s exactly the same content, you just use one maybe for offline advertising, something like that, then in most situations we’ll recognize that and just pick one of these URLs to show it in search.
That’s not something where we demote a website for having this kind of duplication, be it internally on the website or across websites like that. Essentially what we do there is we’ll try to recognize that these pages are equivalent, and fold them together in the search results.
So it’s not that they’ll rank lower, it’s just that we’ll show one of these because we know these are essentially equivalent and just show one in search. And that’s not something that would trigger a penalty or that would lower the rankings, that’s not a negative signal from Google.
John continues talking about the technical aspects of rel=canonical for solving duplicate content issues. Then he jumps into the exception to the “duplicate content is not a penalty” rule.
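For reference, rel=canonical is the standard markup for telling search engines which of several duplicate URLs is the preferred one. A minimal sketch, using a hypothetical example.com URL as a placeholder:

```html
<!-- Placed in the <head> of the duplicate page (e.g. the marketing-site
     copy), this points search engines at the preferred URL so the
     duplicates get folded together in search results.
     The example.com URL is a hypothetical placeholder. -->
<link rel="canonical" href="https://www.example.com/original-page/" />
```

This matches the behavior John describes: Google recognizes the pages as equivalent and shows only the canonical one, with no demotion involved.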
The types of situations where we might take for example manual action on duplicate content is more if one website is just a compilation of content from a bunch of other websites where we kind of look at this website and see they’re scraping the New York Times, they’re scraping some other newspaper sites, they’re rewriting some articles, they’re spinning some other articles here, and we can see that this is really just a mix of all different kinds of content sources and that there’s really no additional value in actually even bothering to crawl this website.
And that’s the type of situation where the web spam team may take a look at that and say well we really don’t need to waste our resources on this, we can essentially just drop this out of the index. And when the webmaster is ready and has something unique and compelling on the website and kind of has removed all of this duplicated content then we can talk about a reconsideration request and go through that process there.
So just because there’s some duplication across the web from your content, I wouldn’t really worry about it. If on the other hand your website is just an aggregation of content from all over the web, then that’s something I would take action on and just clean up, and really provide something new, unique and compelling of your own, something that’s high-quality and not just rewritten or copied content from other sources.
In this case, it would still be a manual penalty; it isn’t something that the duplicate content would trigger algorithmically. And it is also a common sense manual action that anyone with basic SEO knowledge could identify as being the problem. But it is definitely a case where duplicate content could cause an issue.
Here is the full video: