On April 18, 2016, Googlebot will debut a brand new user-agent string for its smartphone crawler. This means that after April 18th, you will see a different user-agent name for this bot in your log reporting.
Google’s new user-agent reflects that its renderer is evolving to be more similar to Chrome, rather than Safari. The updated user-agent also means that the “renderer can better understand pages that use newer web technologies.”
Here is the new smartphone user-agent, which will begin crawling on April 18, 2016.
Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
And here is Google’s current smartphone user-agent.
Mozilla/5.0 (iPhone; CPU iPhone OS 8_3 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12F70 Safari/600.1.4 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
For those who are allowing bots based on IP, Google has confirmed that it will still be crawling from the same set of IP addresses. That said, allowing Googlebot to crawl only from specific IPs can cause issues if you do not watch closely for possible new Googlebot IPs. There have been many instances where a new Googlebot IP caused crawl and indexing issues because a site was blocking it, having filtered by IP rather than by user-agent in an attempt to keep spoofed bots out.
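Rather than maintaining an IP allowlist by hand, Google’s documented advice for confirming a visitor that claims to be Googlebot is a reverse DNS lookup followed by a forward confirmation. Here is a minimal sketch of that check in Python; the function names are illustrative, not part of any official API:

```python
import socket

def is_google_hostname(hostname):
    """A reverse-DNS hostname for a real Googlebot ends in googlebot.com or google.com."""
    return hostname.endswith(".googlebot.com") or hostname.endswith(".google.com")

def verify_googlebot(ip):
    """Verify a claimed Googlebot IP: reverse DNS lookup, check the domain,
    then forward-resolve the hostname and confirm it maps back to the same IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse lookup
    except socket.herror:
        return False
    if not is_google_hostname(hostname):
        return False
    try:
        # forward confirmation: hostname must resolve back to the original IP
        return ip in socket.gethostbyname_ex(hostname)[2]
    except socket.gaierror:
        return False
```

Because the check keys on the resolved hostname rather than a fixed IP list, it keeps working when Google adds new crawler IPs.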
Google anticipates that the only sites that will have problems with the change are those that look for a specific Googlebot user-agent string:
Our evaluation suggests that this user-agent change should have no effect on 99% of sites. The most common reason a site might be affected is if it specifically looks for a particular Googlebot user-agent string. User-agent sniffing for Googlebot is not recommended and is considered to be a form of cloaking. Googlebot should be treated like any other browser.
Some firewall programs on websites allow Googlebot based on the user-agent string only. While this in itself isn’t against any of Google’s guidelines, serving that user-agent different content than what a regular visitor sees is a problem and can result in a manual action.
Google’s Fetch and Render tool has already been updated to use the new user-agent, so if you are concerned that you could be inadvertently blocking it, you can check that your site is being served and rendered as intended.
Likewise, if you are a tool or software creator that bases anything on user-agent strings to allow or block traffic to a site, you will want to ensure you update for your users prior to April 18, 2016.
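For tools that do need to recognize Googlebot from the user-agent (keeping in mind that serving it different content is cloaking), matching the Googlebot product token rather than comparing against one full string survives changes like this one. A sketch in Python, using the two user-agent strings quoted above:

```python
import re

# Match the "Googlebot/x.y" product token anywhere in the UA string, so both
# the old Safari-style and the new Chrome-style user-agents are recognized.
GOOGLEBOT_TOKEN = re.compile(r"\bGooglebot/\d+\.\d+\b")

def is_googlebot_ua(user_agent):
    return bool(GOOGLEBOT_TOKEN.search(user_agent))

old_ua = ("Mozilla/5.0 (iPhone; CPU iPhone OS 8_3 like Mac OS X) "
          "AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12F70 "
          "Safari/600.1.4 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")
new_ua = ("Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
          "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 "
          "Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")

print(is_googlebot_ua(old_ua), is_googlebot_ua(new_ua))  # True True
```

A token-based check like this would have needed no update for the April 18th switch, whereas an exact-string comparison against the old user-agent breaks outright.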