Notes from the Google Webmaster Hangout on the 24th of August 2018.
Empty Sitemap Files Should Be Removed
Sitemap files containing no URLs are not problematic for Google, but John recommends removing them if they are not needed.
August 1st Update Did Not Specifically Target Medical Sites
The August 1st 2018 update was a general ranking update and did not specifically target medical sites.
JavaScript Injected Tags Should Not be Duplicated in Static HTML
Using JavaScript to modify canonical or robots meta tags can change the signal provided to Google when they come to process the rendered version of the page. If tags are injected using JavaScript, then John recommends not having them in the static HTML so that the signal provided to Google is clear.
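To illustrate the point, here is a minimal, hypothetical pre-deployment check (not anything Google provides) that flags a canonical tag in the static HTML when one will also be injected via JavaScript, since the two could send conflicting signals:

```python
import re

# Matches a <link rel="canonical"> tag anywhere in an HTML string.
CANONICAL_RE = re.compile(r'<link[^>]+rel=["\']canonical["\'][^>]*>', re.IGNORECASE)

def has_conflicting_canonical(static_html: str) -> bool:
    """Return True if the static HTML already declares a canonical tag.

    If this returns True and the page also injects a canonical via
    JavaScript, the static and rendered versions may disagree.
    """
    return bool(CANONICAL_RE.search(static_html))

static_page = "<html><head><title>Example</title></head><body>...</body></html>"
print(has_conflicting_canonical(static_page))  # False: safe to inject via JS
```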
Inconsistent HTTPS Migrations Cause Bigger Ranking Fluctuations
Google is more cautious with inconsistent HTTPS migrations that don’t map one to one from HTTP to HTTPS with clear 301 redirects. HTTPS migrations that also remove a lot of URLs or block URLs by robots.txt are likely to see bigger fluctuations in rankings.
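A "one to one" migration means every HTTP URL 301s to the same URL on HTTPS, with nothing else changed. A minimal sketch of that mapping (example.com is a placeholder):

```python
from urllib.parse import urlsplit, urlunsplit

def https_redirect_target(http_url: str) -> str:
    """Return the HTTPS URL an HTTP URL should 301 to.

    Only the scheme changes; host, path, and query string are preserved,
    which is what makes the migration consistent.
    """
    parts = urlsplit(http_url)
    return urlunsplit(("https",) + tuple(parts[1:]))

print(https_redirect_target("http://example.com/page?x=1"))
# -> https://example.com/page?x=1
```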
Google Doesn’t See Embedded Images as Links to Another Site
Google doesn’t treat an embedded image in the same way as a link to another website.
Include Date in Structured Data For Articles
Google recommends using dates directly in the structured data of articles, as this makes it easier for Google to extract the correct date.
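As a minimal sketch, an article's dates can be expressed with the schema.org `datePublished` and `dateModified` properties in ISO 8601 format (the headline and dates below are placeholders):

```python
import json

# Minimal Article structured data with explicit, machine-readable dates.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "datePublished": "2018-08-24T10:00:00+00:00",
    "dateModified": "2018-08-24T12:30:00+00:00",
}

# This JSON would typically be embedded in the page inside a
# <script type="application/ld+json"> block.
print(json.dumps(article, indent=2))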
Links Can be Ignored Algorithmically and Manually by Google
Links can be ignored algorithmically, and can also be ignored manually by the Web Spam Team when taking manual action on a site to block it from passing PageRank.
Google Doesn’t Always Treat Singular & Plural Versions of Queries as Synonyms
Google doesn’t necessarily serve the same results for singular and plural versions of queries as the searcher intent might not be the same.
Partially Translated Pages May Rank if They Are the Best Matched Version
Google tries to show the best matched version of a page for users in the language and region that they’re searching for, even if that version is only partially translated.
Linking to Every Page From a Site’s Homepage Will Stop Google Understanding Site Architecture
Linking to every page on a site from the homepage means that Google will lose the structure of the website as a whole, unless the site only has a small number of pages.
Internally Duplicated Content Isn’t Penalised But Can Waste Crawl Budget
Google doesn’t penalise sites for duplicating content internally but it can waste crawl budget.
The Web Spam Team is Actively Working to Deal With Sites Manipulating Rankings With Expired Domains
The Web Spam Team is actively working to deal with sites looking to manipulate rankings using expired domains. John says that this isn’t a simple loophole that’s being exploited, but there may be cases where people are getting away with these spammy techniques.
Google Indirectly Interprets Charts & Graphs to Understand Context
Google doesn’t interpret charts and graphs to judge whether the numbers or information they contain are useful and correct. However, indirect signals (such as text on the page, titles, descriptions and alt text) are collected to understand the context of the page.
Lazy-loading Images Can be Implemented Using Noscript Tags and Structured Data
For lazy-loading images, it is important for Google to be able to find the image source on the page. This can be done using a noscript tag or structured data, so that even if Google doesn’t see the images when it renders the page, it knows they’re associated with the page.
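One common pattern (a hypothetical sketch; the `data-src` attribute and `lazyload` class names depend on whichever lazy-loading script a site uses) pairs the JavaScript-loaded image with a noscript fallback so the source stays discoverable in the HTML:

```python
def lazy_image(src: str, alt: str) -> str:
    """Emit a lazy-loaded <img> plus a <noscript> fallback.

    The real src is only applied by JavaScript at runtime, but the
    <noscript> copy keeps the image source visible in the raw HTML.
    """
    return (
        f'<img data-src="{src}" alt="{alt}" class="lazyload">'
        f'<noscript><img src="{src}" alt="{alt}"></noscript>'
    )

print(lazy_image("/images/photo.jpg", "A photo"))
```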