This week Search Engine Journal compiled a list of 36 insights from John Mueller that they collected throughout this year’s English Webmaster Hangouts. In case you don’t have time to read through or watch all 36 of them, we went ahead and selected five that we feel you should know about.
Mobile-First – Some Sites Have Not Moved to Mobile-First Index Simply Because Google Has Not Moved Them
Although a large proportion of websites have already been moved to mobile-first indexing, some have not. John Mueller pointed out in one of this year's videos that this is not necessarily because something is wrong with the site or because it is not mobile-friendly, so there is no need to worry about it right now.
Content – Google Will Index Duplicate Pages; However It Shows the Most Relevant One
For duplicate pages that exist across different sites with no canonicalization in place, Google may index each version of the page. However, it will only show the most relevant one in the search results, based on relevance and personalization factors such as location.
For more information about how Google decides which duplicate page to show, you may find this article from SEO by the Sea useful. It identifies the patents that factor into this and considers how they might actually work.
Crawling – Ensure That Scripts Do Not Cause the Page Header to Close Before It Actually Ends
Non-head elements that are injected into the page header by scripts can cause Googlebot to assume that the <head> has ended and to proceed to the <body> content. This can cause Googlebot to miss any other header elements located below the point where those elements are inserted.
Mueller recommends checking for this problem by using Google’s Rich Results Test and choosing to View Code.
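As a hypothetical illustration of the problem (the element names and URLs below are invented for the example), an injected non-head element can effectively close the header early:

```html
<head>
  <title>Example Page</title>
  <!-- A script or tag manager injects a non-head element here -->
  <div class="tracking-widget"></div>
  <!-- Googlebot may treat the <head> as closed at this point, -->
  <!-- so header elements below it risk being ignored: -->
  <link rel="canonical" href="https://example.com/page">
  <meta name="description" content="Example description">
</head>
```

Checking the rendered code lets you verify whether such elements have been injected above your critical header tags.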
Indexing – Linked Pages Blocked by Robots.txt Will Still Be Indexed
Pages that are deliberately blocked by robots.txt can still be indexed by Google if other pages link to them. Mueller recommends using a noindex tag on these pages instead. Note that Googlebot can only see a noindex tag if it is allowed to crawl the page, so the robots.txt block should be removed when switching to this approach.
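To illustrate the difference (the paths here are hypothetical): a robots.txt rule only blocks crawling, not indexing, so a blocked URL can still appear in results if it is linked from elsewhere.

```
# robots.txt — blocks crawling, but the URL can still be indexed via links
User-agent: *
Disallow: /private/
```

To keep a page out of the index, allow crawling and place a noindex directive on the page itself:

```html
<!-- In the page's <head>: crawlable, but excluded from the index -->
<meta name="robots" content="noindex">
```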
Structured Data – Structured Data is Only Utilized by Google If a Website is Considered High Quality & Trustworthy
Because of the nature of rich results, Google needs to make sure that the content featured in them is trustworthy and of high quality. As a consequence, a website needs to meet a certain threshold of quality and trust before Google will allow it to appear in the results as anything other than a vanilla result.
Also, a site must make sure that the structured data on a page is implemented correctly, both technically and logically, so that it will display properly and be relevant.
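As a minimal sketch of what a technically and logically correct implementation looks like (the headline, date, and author values below are placeholders), JSON-LD markup must both validate against schema.org and match the content actually visible on the page:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Five Takeaways From This Year's Webmaster Hangouts",
  "datePublished": "2019-12-20",
  "author": {
    "@type": "Person",
    "name": "Jane Doe"
  }
}
</script>
```

Markup that validates but describes content not present on the page fails the "logical" half of the requirement, which is one reason a rich result may not be shown even when no technical errors are reported.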