Did you recently receive an email from Google Search Console saying that Googlebot cannot access the CSS and JS files on your site? Many webmasters (including myself) are reporting that they received the message, causing a flurry of forum posts and messages. While some people seemed panicked at first, the notice doesn’t mean that Google can’t see your site at all; it only means that certain parts of it can’t be accessed. So while there isn’t a reason to sound the alarm, we do always want our websites to be as “Google friendly” as possible to help gain organic exposure.

Screenshot of email from Google finding issues with accessing CSS and JS Files.

It looks like Google is now sending these messages out en masse through Search Console to notify webmasters that Googlebot is having trouble accessing certain parts of their sites. The email itself does come off as a bit alarming, since we of course want Google to be able to access our sites easily. So, what does this all mean?

It looks like most of the people reporting this message received it on WordPress sites. Most WordPress installations block the /wp-admin/ folder in the robots.txt file. The original intent of blocking this folder was to keep the core WordPress files hidden and make the installation more secure. However, if anything on your site links into the /wp-admin/ folder, those URLs can still be easily found, which defeats the purpose.

With website security as a priority, it would seem to make sense to keep these folders blocked. However, what most people don’t know is that WordPress implemented a fix for this a while back that allows these folders to be accessed by Google while keeping security top of mind. As Joost de Valk points out, we should allow WordPress to handle this on its own:

 “If you don’t do anything, WordPress has (by my doing) a robots meta x-http header on the admin pages that prevents search engines from showing these pages in the search results, a much cleaner solution.”
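You can check for that header yourself. Here is a minimal Python sketch (example.com is a placeholder; substitute your own domain, and note that whether the header is present depends on your WordPress version):

import urllib.request

# Hypothetical URL -- replace example.com with your own domain.
request = urllib.request.Request("https://example.com/wp-login.php", method="HEAD")
with urllib.request.urlopen(request) as response:
    # If WordPress is sending the header described above, you should
    # see a value like "noindex" printed here.
    print(response.headers.get("X-Robots-Tag"))

If the header comes back, the admin pages are already being kept out of the search results without any need to block them in robots.txt.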

What Caused The Email?

After looking at my site’s robots.txt file, I found the following rules that appear to be causing issues for Googlebot. Either through a plugin or a link on my site, I must have been referencing the /wp-includes/ folder:

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/

Ok great, now what?

The quick fix would be to update the robots.txt file to allow the resources that Google is trying to access:

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Allow: /wp-content/plugins/
Allow: /wp-content/themes/
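If you want to sanity-check prefix rules like these before uploading the file, Python’s built-in urllib.robotparser module can evaluate a robots.txt for a given user agent. A quick sketch (note that this parser only does simple prefix matching, so it cannot test Google’s * wildcard rules):

import urllib.robotparser

rules = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Allow: /wp-content/plugins/
Allow: /wp-content/themes/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# CSS and JS under the allowed folders become crawlable again...
print(parser.can_fetch("Googlebot", "/wp-content/themes/mytheme/style.css"))  # True
# ...while the admin folder stays blocked.
print(parser.can_fetch("Googlebot", "/wp-admin/"))  # False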

The second method is a bit more refined, since we can open up CSS and JS access just for Googlebot while keeping the restrictions in place for other user agents. One caveat: robots.txt groups are not additive, so Googlebot will only obey the group that names it specifically and will ignore the wildcard group. That means the Googlebot group needs its own Disallow lines above the Allows:

User-agent: Googlebot
Disallow: /wp-admin/
Disallow: /wp-includes/
Allow: *.css
Allow: *.js

# Other bots
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/

Once you are done editing your robots.txt file, you can test the update on the Fetch as Google page in Search Console and review the results to make sure the issue is resolved.
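For a quick check outside of Search Console as well, you can request one of the affected files directly while identifying as Googlebot. A small sketch (the URL and theme name are placeholders); this only confirms the server is serving the file, while the robots.txt rules themselves are what Fetch as Google verifies:

import urllib.request

# Hypothetical URL -- point this at a real CSS or JS file on your site.
request = urllib.request.Request(
    "https://example.com/wp-content/themes/mytheme/style.css",
    headers={"User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"},
)
# Raises urllib.error.HTTPError if the server refuses the request.
with urllib.request.urlopen(request) as response:
    print(response.status)  # 200 means the file itself is reachable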

Screenshot of the Fetch as Google tool in Search Console.

At the end of the day, this is a change Google rolled out about a year ago but is only now sending notices about. That being said, we always want to do everything we can to make our sites as Google friendly as possible, and this is a quick adjustment that only takes a few minutes to implement.

Did you receive one of the emails? Have another solution to share? Let me know in the comments!

4 thoughts on “Fixing Googlebot CSS & JS Access Issues”

  1. Cameron Davis

    Good info Jared!

    Also, if Google is having trouble with CSS and JS info, that is also one of the things that is frequently listed as contributing to slow site-load speeds. So there is a dual-benefit in getting this cleaned up.

  2. Jasmine

    Actually had to handle a few of these myself as Search Console rolled out alerts for some sites.
    Quick fix, but you’re right – always safer to err on the side of Google.

    Great tips, Jared!

  3. Sebastian

    Interesting solution – but it’s important to point out that robots.txt groups are not additive, so Googlebot will only obey its specific rule and not also respect the wildcard one (you can test that in the Search Console robots.txt tester – without them, the second method would allow Gbot to crawl everything on the site, including e.g. /wp-admin/secret.html). Just add the two disallow lines above the allows in the Googlebot rule, and it works as intended!

