
Google Webmaster Hangout Notes: December 10th 2019

SEO and digital marketing industry news

Notes from the Google Webmaster Hangout on the 10th of December 2019.


Return 404 or 410 Status Codes to Prevent Googlebot Processing Files from Hacked Domains

If you have a legacy domain that was hacked, the best way to stop Googlebot processing the old hacked URLs is to add rules to your .htaccess file that return a 404 or 410 status code when those URLs are requested. This stops Googlebot from processing the files and making calls to the database.
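
As a minimal, hedged sketch (the URL patterns are purely illustrative and must be replaced with the paths the hack actually created), mod_rewrite rules in an .htaccess file can return a 410 for the hacked URLs; the `[G]` flag sends 410 Gone, while `[R=404]` would send a 404 instead:

```apache
# Illustrative .htaccess sketch (requires mod_rewrite; adjust patterns to your site).
RewriteEngine On

# Hypothetical hacked directory - [G] returns 410 Gone, [L] stops further rules.
RewriteRule ^hacked-dir/ - [G,L]

# Hypothetical spammy query parameter injected by the hack.
RewriteCond %{QUERY_STRING} spam-param=
RewriteRule ^ - [G,L]
```

Because the rules short-circuit before any application code runs, Googlebot's requests for these URLs never reach the database.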



Google May Crawl More Frequently if it Detects Site Structure Has Changed

If you remove a large number of URLs, causing Google to crawl a lot of 404 pages, it may take this as a signal that your site structure has changed. This may lead to Google crawling the site more frequently in order to understand the changes.



There is No Guarantee of Faster Results By Using 410 Status Codes

To remove a full section of a site from the index, it is best to return a 410 status code on those pages. 404 and 410 send different signals to Googlebot, with 410 being the clearer signal that a page has been deliberately removed. However, because Google encounters a large number of incorrect signals across the web, Martin explained that these status codes are treated as hints, so using a 410 is not a guarantee of faster results.
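
As a hedged sketch of the distinction (not from the hangout; the paths and function name are hypothetical), a request handler could return 410 for URLs under a deliberately removed section and fall back to 404 for anything it simply does not recognise:

```javascript
// Hypothetical sketch: choose an HTTP status code for a request path.
// Sections deliberately removed from the site return 410 (Gone), a
// clearer removal signal to Googlebot than a generic 404 (Not Found).
const REMOVED_SECTIONS = ["/old-blog/", "/discontinued-products/"]; // illustrative
const LIVE_PAGES = new Set(["/", "/about/"]);                       // illustrative

function statusForPath(path) {
  if (LIVE_PAGES.has(path)) return 200;
  // 410 for anything under a section we intentionally removed.
  if (REMOVED_SECTIONS.some((prefix) => path.startsWith(prefix))) return 410;
  // 404 for everything else we simply don't know about.
  return 404;
}

console.log(statusForPath("/old-blog/post-1")); // 410
console.log(statusForPath("/missing-page"));    // 404
```

Either status code takes the page out of the index eventually; per Martin's comments, the 410 is merely the stronger hint, not a fast lane.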



Use Chrome DevTools and Google Testing Tools to Review a Page’s Shadow DOM

There are two ways to inspect a page’s shadow DOM in order to compare it to what Googlebot sees. The easiest is Chrome DevTools: in the Elements inspector you will see a #shadow-root node, which you can expand to display the shadow DOM’s contents. You can also use any of Google’s testing tools and review the rendered DOM, which should contain what was originally in the shadow DOM.



There is No Risk of a Noindex Signal Being Transferred to the Target Canonical Page

If a page is marked as noindex and also has a canonical link to an indexable page, there is no risk of the noindex signal being transferred to the target canonical page.



Google Would View a Page Canonicalized to a Noindex URL as a Noindexed Page

If you have a canonical link pointing to a page that is noindexed, the page canonicalized to it would also be considered noindex. This is because Google treats the canonical much like a redirect to a noindex page, and therefore drops the canonicalized page as well.
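
A hedged illustration of the setup described above (the URLs are placeholders): page A declares a canonical pointing at page B, and page B is noindexed, so per Martin’s explanation page A is treated as noindex too and dropped:

```html
<!-- Page A (https://example.com/page-a/) canonicalizes to page B -->
<link rel="canonical" href="https://example.com/page-b/">

<!-- Page B (https://example.com/page-b/) is noindexed -->
<meta name="robots" content="noindex">

<!-- Result per the hangout: page A is also treated as noindex -->
```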




Ruth Everett

Technical SEO

Ruth Everett is a data & insights manager at Code First Girls, and a former technical SEO analyst at Lumar. You'll most often find her helping clients improve their technical SEO, writing about all things SEO, and watching videos of dogs.

