JavaScript & SEO

“Because of the lag between crawling and indexing on JavaScript websites, by the time the crawler finally gets around to crawling the URLs of deeper pages that have been discovered by the indexer, the crawl scheduler wants to go back to crawling the already known pages because their PR has already been calculated by the PageRanker, so they have a higher URL importance than the newly discovered URLs.”[1]

“The end result is a very low rate of indexing on the site. Googlebot does its best, but its own URL scheduling systems don’t allow it to spend crawl effort on deeper URLs that it doesn’t see as having any value.”[1]

Clearly, recommending client-side rendering of critical content on a web page cannot be done without extreme caution, and Google’s own words support this.

Maybe Google will simply flip the switch, and concerns over JavaScript rendering will become a thing of the past. But if this Twitter thread on crawling and the computational costs involved are any indication, we should not hold our breath.

Dynamic rendering, which switches between client-side rendering and pre-rendering based on user-agent and refreshes the cache as needed when content is added or changed, certainly seems like a decent implementation moving forward.
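
To make that concrete, here is a minimal sketch of user-agent-based dynamic rendering, assuming an Express server fronting a single-page app. The bot pattern, the cache TTL, and the prerender() helper are all illustrative assumptions, not a specific vendor’s implementation; in practice the pre-rendering step would usually be handled by a headless browser or a service built for this purpose.

```typescript
// Dynamic rendering sketch: crawlers get a cached pre-rendered snapshot,
// humans get the normal client-side-rendered app.
import express, { Request, Response, NextFunction } from "express";

const BOT_PATTERN = /googlebot|bingbot|yandex|baiduspider|duckduckbot/i;
const CACHE_TTL_MS = 15 * 60 * 1000; // refresh cadence is an assumption

interface CacheEntry {
  html: string;
  renderedAt: number;
}

const cache = new Map<string, CacheEntry>();

// Hypothetical pre-renderer; in reality this might drive a headless browser
// or call out to a pre-rendering service.
async function prerender(url: string): Promise<string> {
  return `<html><body><!-- pre-rendered snapshot of ${url} --></body></html>`;
}

const app = express();

app.use(async (req: Request, res: Response, next: NextFunction) => {
  const userAgent = req.headers["user-agent"] ?? "";
  if (!BOT_PATTERN.test(userAgent)) {
    return next(); // not a known crawler: fall through to the app shell
  }

  const key = req.originalUrl;
  const cached = cache.get(key);
  const stale = !cached || Date.now() - cached.renderedAt > CACHE_TTL_MS;

  const html = stale ? await prerender(key) : cached!.html;
  if (stale) {
    cache.set(key, { html, renderedAt: Date.now() }); // refresh the cache
  }
  res.status(200).send(html); // crawlers receive fully rendered HTML
});

// Everyone else gets the client-side app shell.
app.get("*", (_req: Request, res: Response) => {
  res.send(
    '<html><body><div id="app"></div><script src="/bundle.js"></script></body></html>'
  );
});

app.listen(3000);
```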

In the meantime, I appreciate the following tools for diagnosing rendering issues:

  • Chrome Web Developers plugin with JavaScript disabled
  • Screaming Frog text-only crawls
  • Google’s Mobile-Friendly Test and manual searches for crucial elements (see the sketch after this list)
  • View Rendered Source Chrome Extension
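
As a lightweight complement to those manual checks, the sketch below fetches the raw HTML of a page, which is roughly what a text-only crawl or a JavaScript-disabled browser sees, and reports whether a crucial phrase is present before any script runs. The URL, phrase, and User-Agent string are placeholders.

```typescript
// Check whether a crucial element survives without client-side rendering.
// Uses the global fetch available in Node 18+.
async function checkUnrenderedHtml(url: string, crucialPhrase: string): Promise<void> {
  const response = await fetch(url, {
    headers: { "User-Agent": "rendering-check/1.0" }, // identify the script
  });
  const rawHtml = await response.text();

  if (rawHtml.includes(crucialPhrase)) {
    console.log(`Found "${crucialPhrase}" in the unrendered HTML.`);
  } else {
    console.log(
      `"${crucialPhrase}" is missing from the raw HTML; it likely depends on client-side rendering.`
    );
  }
}

checkUnrenderedHtml("https://example.com/", "Example Domain").catch(console.error);
```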

[1] https://www.stateofdigital.com/urls-crawling-pagerank-fundamentals/