Jason Sanderson, Senior Technical SEO Strategist


This time last year, Google announced that the mobile-specific crawler "Googlebot-Mobile" was being replaced by Googlebot for smartphones. This meant that the smartphone crawler would announce itself with the same "Googlebot" user agent as its desktop counterpart.

This week, it was announced that the mobile crawler will also respect any directives for "Googlebot" in a website's robots.txt file, ignoring any "Googlebot-Mobile" directives.

Any page which blocks the Googlebot crawler will now display the message "A description for this result is not available because of this site's robots.txt" in search results, instead of its intended meta description.

This is only really a problem for websites which serve different content to mobile and desktop users (for example, via a mobile-specific sub-domain or dynamic serving), as that mobile content often has its own robots.txt file blocking "Googlebot" from all pages and only allowing "Googlebot-Mobile".

An example file:

User-agent: Googlebot
Disallow: /
User-agent: Googlebot-Mobile
Allow: /
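As a quick sanity check, Python's standard-library urllib.robotparser can parse a file like the one above and confirm that, under the new behaviour, Googlebot is locked out entirely. This is a minimal sketch, parsing the rules in memory rather than fetching a live robots.txt:

from urllib.robotparser import RobotFileParser

# The example robots.txt above, parsed in-memory
rules = """User-agent: Googlebot
Disallow: /

User-agent: Googlebot-Mobile
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# Googlebot is blocked from every path, so its snippets lose their descriptions
print(parser.can_fetch("Googlebot", "/"))  # False

Since the mobile crawler now reads the "Googlebot" group, every page on this hypothetical mobile site would be blocked.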


To rectify this, any mobile-specific robots.txt file should be amended so that Googlebot can access all mobile pages, and any references to the now unused "Googlebot-Mobile" should be removed.
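Assuming the mobile site should be fully crawlable and no other directives are needed, a minimal corrected file could look like:

User-agent: Googlebot
Allow: /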

It is also worth making sure that Googlebot can access any JavaScript, CSS and image directories at the same time, as blocking these can greatly affect how Google handles your content.
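If asset directories have been blocked elsewhere in the file, explicit Allow rules can reopen them. The directory names below are hypothetical; substitute the paths your own site actually uses:

User-agent: Googlebot
Allow: /css/
Allow: /js/
Allow: /images/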

Want to know how this could affect you? Get in touch.