Google Webmaster Hangout Notes: 27th October 2015

Our notes, with timestamps, on the Google Webmaster Hangout with John Mueller on 27th October 2015. This time, John discussed several points about JavaScript and rendering, commented on old messages being shown to newly-verified users in Search Console, and stated that Google does use anchor text on internal links to understand content.

 

Google Search Console may show old messages to newly-verified users

07:22: When a new user is verified on a domain, Search Console may show that user old messages from a time period that Google thinks makes sense. Which messages it would show is unclear. John stated:

“It depends on the message. For some messages we had them essentially set up so new owners also see them for a specific timeframe, and other types of messages are just for the current owners.

“So you could imagine that if something is broken with crawling and indexing and nobody had the site verified and someone starts to verify it, then you might want to see that backlog of messages there. But that wouldn’t make sense for all types of messages.

“I don’t know if we have clear separation there that we ever communicated.”

 

Google’s rendering of JavaScript-based HTML pages is ‘incomplete’

22:12: In response to the question: “Will the retirement of _escaped_fragment_ be accompanied by better Angular crawling? Six months ago our Angular site displayed correctly in the “View as Googlebot” tool – but was not crawled properly. Has crawling improved? If not, what should we do?”

John said that, unrelated to the AJAX crawling announcement, Google are always improving their capability for rendering JavaScript pages, to help Googlebot understand them better. Rendering is currently ‘incomplete’ and pages need to be tested very carefully with the Fetch and Render tool in Search Console.

He also explained that rendering will continue to improve, so this shouldn’t be a long-term problem.

Russell Middleton said his site had dropped out of the rankings despite the render appearing to work. They reverted to a pre-render solution, which was far more successful (although this means they are very reliant on Google supporting pre-rendering). Russell asked how soon Google would drop its support for pre-rendering, to which John said that Google would continue to support pre-rendering for ‘a while’.

 

Lots of structured data markup won’t have a (direct) negative SEO effect

40:50: There is no penalty for having “too much” structured data markup, but adding lots of it adds weight to the page and makes it harder to maintain or to keep the HTML clean. There may also be little or no SEO value in much of that extra markup, so the added weight and maintenance can be unnecessary.

To use John’s extreme example, it is possible to add markup to every word on a page, but there would be no SEO value in doing that. John recommended identifying the markup that will bring the most value to the site and focusing your efforts on that.

 

Serving additional localized content via JavaScript is not considered cloaking

42:20: In response to the question: “If we were to serve up additional localized content by means of JavaScript once a page has loaded depending on user’s location, is that going to be cloaking because normally the page won’t have this additional information?”

John stated that this wouldn’t be considered cloaking, but the additional information probably won’t be indexed, as Google will just pick one version of the page and index that. John recommended using separate URLs for translated content so that each version can be indexed separately.
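
As an illustration of the separate-URL approach, each language version lives at its own address; the URLs below are hypothetical, and the hreflang annotations are a standard complement Google documents separately rather than something John mentioned here:

<!-- On https://example.com/en/widgets/ (hypothetical English URL) -->
<link rel="alternate" hreflang="en" href="https://example.com/en/widgets/" />
<link rel="alternate" hreflang="de" href="https://example.com/de/widgets/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/widgets/" />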

 

Google uses anchor text on internal links to understand the content

45:07: Google does use the anchor text on internal links to understand content, but links are also treated as text on the page, so deliberately stuffing keywords into them can leave you with lots of pages that look keyword-stuffed. John recommended linking naturally to get the SEO benefit.
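
For illustration (a hypothetical URL and copy, not an example John gave), the difference is roughly:

<!-- Descriptive, natural internal anchor text -->
<a href="/guides/structured-data/">our guide to structured data markup</a>

<!-- Over-optimised anchor text that reads as keyword stuffing -->
<a href="/guides/structured-data/">structured data SEO markup schema best guide</a>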

 

404s better than a noindex on expired/removed pages

48:11: For expired/removed content, John says that Google prefer a 404 as it results in less crawling than a noindex.

 

Valid HTML not a ranking factor

50:55: If a page works in the browser, and Google can understand the content, then it’s OK for SEO. Ideally everything on the page would be valid so that Google can understand the content, but they understand that ‘in the real world most pages aren’t valid HTML and there’s always something broken on the page’, so Googlebot has to be able to deal with that.

Therefore it’s not the case that a page that has some invalid HTML will be dropped completely; Google will still try to understand what’s on the page and how they can rank it.

 

Duplicate content: noindex and canonical tags

64:30: If you have duplicate pages with the same content that aren’t canonicalized, and one of them has a noindex, Google might pick the noindexed version as the one to keep, meaning the content ends up noindexed even though an indexable duplicate exists.
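
A minimal illustration of the safer setup (the URLs are hypothetical): point duplicates at the preferred version with a canonical tag rather than relying on noindex.

<!-- On the duplicate, e.g. https://example.com/widgets?sort=price -->
<link rel="canonical" href="https://example.com/widgets" />

<!-- Riskier: noindexing one duplicate, which Google may pick as the version to keep -->
<meta name="robots" content="noindex" />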

 

JSON-LD support limited at the moment

01:05:25: Google only offers limited support for JSON-LD markup at the moment, but John says it is growing. He stated:

“One of the aspects here is that JSON-LD is pretty new and we do support it for some things but not other things. I don’t know what other search engines are doing with it yet, but it is something that’s being picked up more and more.

“In general when you add structured data markup to your site, I’d try to make sure that you’re looking at it holistically, in the sense that you’re not just focusing on something that’s purely for Google but think about the long-run on how you want to maintain this [and] where else do you want to use this markup, and make sure you use one of the formats that actually works for you in the way that you want. That’s kind of the reason why we support those different types of markup as well.”
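
For reference, a minimal JSON-LD snippet using schema.org Organization markup (the name and URL are placeholders):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": "https://www.example.com/"
}
</script>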

 

JavaScript rendering improvements are coming

01:07:48: John says that Google are working on an update of their rendering engine, so this will improve over time.

For the moment, John recommends experimenting to understand which JavaScript elements are getting stuck: “There might be elements that you’re calling that aren’t supported by Googlebot at the moment, and if you can instrument your code in a way that you can catch these errors then you’ll probably be able to figure [it] out.”
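
As one way to instrument a page along those lines (a sketch, not something John specified; the logging endpoint is hypothetical), a global error handler can report features that fail during rendering:

<script>
  // Report any uncaught JavaScript errors so unsupported features can be spotted
  window.addEventListener('error', function (event) {
    var details = encodeURIComponent(
      event.message + ' at ' + event.filename + ':' + event.lineno
    );
    // "/log-render-error" is a hypothetical endpoint on your own server
    new Image().src = '/log-render-error?details=' + details;
  });
</script>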

He did admit that it can be very difficult to find which elements are getting stuck, so if you really can’t track down the problem, he suggests pre-rendering as a temporary solution until Googlebot is able to pick the content up properly.

 

Fetch and Render views might not be consistent if JS/CSS is disallowed

01:10:26: The Fetch and Render tool shows you two different renders, one for Googlebot, which uses the Googlebot user agent, and one for users, which uses a browser user agent.

If JavaScript or CSS files are disallowed for Googlebot in robots.txt, it may not be able to render all of the content in the same way, so the two views can differ.
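
As a hedged sketch (the directory paths are hypothetical), the fix is usually making sure robots.txt does not block script and stylesheet files, for example by allowing those paths explicitly even where a broader directory is disallowed:

User-agent: Googlebot
# Keep script and stylesheet files crawlable even though /assets/ is otherwise blocked
Allow: /assets/js/
Allow: /assets/css/
Disallow: /assets/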

Tristan Pirouz

Marketing Strategist

Tristan is an SEO enthusiast, strategist, and the former Head of Marketing at Lumar.
