JavaScript and search engines have always had a tricky relationship, and SEO is often used as an argument against single page applications. I’m aiming to put some of those misconceptions to rest in this article and show you that it’s perfectly possible to create a pure JavaScript web application and still have good SEO.
To demonstrate, take a look at the search results for Monocle, a single page web app. You can see that even though the application relies on JavaScript, articles are still getting fetched and indexed correctly.
While there are indications that Google’s spiders can index some URLs they find in JavaScript, in my experience they haven’t been able to render and index complex JS web apps without a little help. The key to spidering JS apps lies in Google’s Ajax crawling specification.
The Ajax crawling specification was originally intended for JS apps that use the hash fragment in the URL, which was a popular technique for creating permalinks when the spec was initially developed. However, we can still use the same spec, with a few tweaks, for modern JS apps using HTML5’s pushState to modify the browser’s URL and history.
Firstly, add the following meta tag to every page that needs to be spidered:
<meta name="fragment" content="!">
This will instruct Google’s spider to use the Ajax crawling specification with your site. When it sees this tag, it’ll proceed to fetch the page again, this time with the _escaped_fragment_ query parameter appended. We can detect that parameter on the server and serve up spider-safe content.
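Concretely, once a page carries that meta tag, the crawler simply re-requests its pushState URL with an empty _escaped_fragment_ parameter appended. Taking a hypothetical post permalink as an illustration:

    # What users (and the JS app) see:
    http://yourapp.com/posts/some-post
    # What Google's crawler will request instead:
    http://yourapp.com/posts/some-post?_escaped_fragment_=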
You can see an example of this on Monocle, for the index page, and also the post page. As you can see, if the query param is present (even with no value set), I serve up raw HTML instead of the JS app.
The code to do so is pretty straightforward. I’m using Sinatra, but the example below should give you a good indication of how to implement this in your framework of choice. I have two routes which are conditional on the _escaped_fragment_ parameter being present.
    # Define a :spider route condition that only matches when the
    # _escaped_fragment_ parameter is present in the request.
    helpers do
      set :spider do |enabled|
        condition do
          params.has_key?('_escaped_fragment_')
        end
      end
    end
    # Spider-only routes: these only match crawler requests carrying
    # _escaped_fragment_, and render plain HTML versions of the pages.
    get '/', :spider => true do
      @posts = Post.published.popular.limit(30)
      erb :spider_list
    end

    get '/posts/:slug', :spider => true do
      @post = Post.first!(slug: params[:slug])
      erb :spider_page
    end
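You can sanity-check the spider routes yourself by requesting a page with an empty _escaped_fragment_ parameter appended (assuming Sinatra’s default port of 4567 and a made-up slug):

    curl 'http://localhost:4567/?_escaped_fragment_='
    curl 'http://localhost:4567/posts/some-post?_escaped_fragment_='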
Make sure that you provide at least a title, a meta description, a heading and some text content on each spider page. Also make sure the meta description matches what you want to be displayed on the search results page.
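As a rough sketch, the spider_page template rendered by the post route above might look something like this (the @post accessors here are assumptions about the Post model, not Monocle’s actual fields):

    <!-- views/spider_page.erb: minimal, crawlable HTML for a single post -->
    <html>
      <head>
        <title><%= @post.title %></title>
        <meta name="description" content="<%= @post.summary %>">
      </head>
      <body>
        <h1><%= @post.title %></h1>
        <p><%= @post.body %></p>
      </body>
    </html>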
And that’s all you should need to do. You now have all the user-experience benefits of a JS web application without any of the SEO drawbacks.
One other technique that was pointed out to me is rendering HTML content straight into a <noscript> tag embedded in the page. I prefer the Ajax crawling spec approach though, as it means you’re not forced to do any unnecessary SQL queries or rendering for clients that aren’t bots.
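For completeness, that alternative looks roughly like this: the crawlable markup is rendered into the page on every request, whether or not the visitor is a bot (again using hypothetical @post accessors):

    <noscript>
      <h1><%= @post.title %></h1>
      <p><%= @post.body %></p>
    </noscript>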