Posted by RobOusbey
Many people have an interest in building websites that take advantage of AJAX principles, while still being accessible to search engines. This is an important issue that I've written about before in a (now obsolete) post from 2010. The tactic I shared then has been superseded by new technologies, so it's time to write the update.
This topic is still relevant, because of a particular dilemma that SEOs still face:
- websites that use AJAX to load content into the page can be much quicker and provide a better user experience
- BUT: these websites can be difficult (or impossible) for Google to crawl, and using AJAX can damage the site's SEO.
The solution I had previously recommended ends up with #! ('hashbang') symbols littering URLs, and has generally been implemented quite poorly by many sites. When I presented on this topic at Distilled's SearchLove conferences in Boston last year, I specifically called out Twitter's implementation because it 'f/#!/ng sucks'. Since I made that slide, it's actually gotten worse.
Why talk about this now?
Because a better solution to that dilemma is now practical: the HTML5 History API, which the resources at the end of this post cover in detail, and which browser support (including the upcoming Internet Explorer 10) has finally caught up with.
What is the technology?
The history.pushState() function, part of the HTML5 History API, lets JavaScript change the URL shown in the address bar without reloading the page. Combine it with AJAX content loading and you get some significant benefits:
- you can have the speed benefits of using AJAX to load page content (since for many websites, only a fraction of the code delivered is actually content; most is just design & templating)
- since the page URL can accurately reflect the 'real' location of the page, you have no problem with people copy/pasting the URL from the address bar and linking to / sharing it (linking to a page that uses #fragment for the page location won't pass link-juice to the right page/content)
- with the #!s out of the way, you don't need to worry about special 'escaped URLs' for the search engines to visit
- you can rest easy, knowing that you are contributing good quality URLs (as discussed in the post mentioned earlier) to the web.
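As a rough sketch of how this works in the browser: intercept link clicks, load just the content fragment via AJAX, and update the address bar with the page's 'real' URL. Here, `loadContent()` is a hypothetical stand-in for whatever AJAX routine your site already uses, and the `/content` URL prefix is an assumption about your back-end, not a requirement:

```javascript
// Hypothetical helper: map a page path to the server's content-only
// endpoint (the '/content' prefix is an assumption; use whatever your
// back-end actually exposes).
function contentEndpoint(path) {
  return '/content' + (path === '/' ? '/index' : path);
}

// Browser-only wiring, guarded so the snippet is harmless elsewhere.
if (typeof document !== 'undefined' && history.pushState) {
  document.addEventListener('click', function (event) {
    var link = event.target.closest('a');
    if (!link || link.host !== location.host) return; // same-site links only
    event.preventDefault();

    // Fetch and inject just the content fragment (loadContent() is
    // whatever AJAX routine your site already uses).
    loadContent(contentEndpoint(link.pathname));

    // Update the address bar to the page's 'real' URL -- no #! required.
    history.pushState({ path: link.pathname }, '', link.pathname);
  });

  // The back/forward buttons fire 'popstate' with the state object we
  // stored, so the right content can be re-loaded for that URL.
  window.addEventListener('popstate', function (event) {
    if (event.state && event.state.path) {
      loadContent(contentEndpoint(event.state.path));
    }
  });
}
```

Because the address bar always shows a normal URL, anyone copying, linking to, or sharing it gets a URL the server can render directly.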
I launched a pushState demo / example page to show how all this performs in practice.
The Techie Bit
- Before doing anything else, make sure your site works without JavaScript; Google needs to be able to follow your links and read your content
- You'll also have to create server-side processes that serve just the 'content' for particular pages, rather than the fully rendered HTML page. How you do this will depend a great deal on your server and back-end setup; ask in the comments below if you have questions about this part.
- Finally, get all the SEO benefits by using the pushState() function to update the URL to match the content's 'real' location
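The second step above can be pictured as two render paths for the same page: a content-only fragment for AJAX requests, and the fully rendered page for crawlers and direct visits. This is a toy sketch; the `pages` store and the markup are placeholders for your real templates and database, and the function names are mine, not a standard API:

```javascript
// Toy content store standing in for your real database/templates.
var pages = {
  '/about': '<h1>About</h1><p>Who we are and what we do.</p>'
};

// Content-only response: what the AJAX calls request, so the browser
// downloads just the fragment rather than the whole page.
function renderFragment(path) {
  return pages.hasOwnProperty(path) ? pages[path] : null;
}

// Fully rendered page: the same content wrapped in the site template.
// This is what Googlebot and first-time visitors receive, so the site
// still works with JavaScript turned off.
function renderFullPage(path) {
  var body = renderFragment(path);
  if (body === null) return null; // a real server would return a 404 here
  return '<!DOCTYPE html><html><head><title>Demo</title></head>' +
         '<body><div id="content">' + body + '</div></body></html>';
}
```

Because both paths are built from the same content, the URL in the address bar always corresponds to a page the server can render in full.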
Resources and Further Reading:
- Dive Into HTML5: Chapter 11: Manipulating History for Fun & Profit (I'd highly recommend this whole book)
- Mozilla Developer Network: Manipulating the Browser History
- And finally, for your own peace-of-mind: Internet Explorer 10 will support the History API