It occurred to me that the emphatic comments made by Hyun-Jin Kim, Google's VP of Search, at SMX Next in November 2022 had passed without much discussion in the SEO community until today.
He said (emphasis mine):
“EAT is a template for how to evaluate individual sites. We do it for every query and every result. It permeates everything we do.” He added, “EAT is a core part of the metric.”
EAT, and subsequently EEAT, are constantly discussed by SEOs. Most are quick to say they are not part of Google's ranking system, and Google spokespeople have confirmed as much. They are quality concepts conveyed to human quality raters, whose reports are used to verify that the ranking system is delivering the best results in the SERPs. Raters work from a copy of the Search Quality Rater Guidelines.
I shared the SMX Next quote in about five forums and chat groups, each with a potential audience of hundreds to thousands of people, and I highlighted the second quote, about EAT being a core part of Google's metric.
If it's not part of the ranking system, how can it be applied to “all queries and all results”?
I argued that it must be a quality assurance process applied after SERPs are served. The process might look like this:
- An AI process examines each page in the index for evidence of expertise, authoritativeness, and trustworthiness. Perhaps an experience score has been added by now.
- This evaluation is performed continuously as Google crawls a site and the other sites that cite or link to it.
- Each element is given a numerical score, which can change with each crawl.
- Each element in the SERP (snippets, carousel images, URL results) carries such a score, which should be higher than the scores of the results that follow it and of subsequent “pages” in continuous scroll.
- Since the SERP results are clearly selected by another ranking system, I'm guessing that EEAT is acting as a quality check after the fact.
- Therefore, SERP delivery is not delayed. If a negative trend is observed, Google can analyze it in detail and modify the ranking system or fine-tune the EEAT coefficients.
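The hypothesized process above can be sketched as a toy model. To be clear, everything in this sketch — the attribute names, the weights, the threshold, and the idea of flagging weak SERPs — is my own invention for illustration, not anything Google has confirmed:

```python
# Toy model of the hypothesized post-hoc EEAT quality check.
# All attribute names, weights, and thresholds here are invented.

EEAT_WEIGHTS = {
    "experience": 0.20,
    "expertise": 0.30,
    "authoritativeness": 0.25,
    "trustworthiness": 0.25,
}

def eeat_score(signals: dict) -> float:
    """Combine per-attribute signals (each 0..1) into one weighted score."""
    return sum(EEAT_WEIGHTS[k] * signals.get(k, 0.0) for k in EEAT_WEIGHTS)

def flag_weak_serps(serps: dict, threshold: float = 0.6) -> list:
    """Return queries whose average result score falls below the threshold."""
    return [
        query
        for query, results in serps.items()
        if sum(eeat_score(r) for r in results) / len(results) < threshold
    ]

# Hypothetical SERPs: each query maps to a list of per-result signal dicts.
serps = {
    "best running shoes": [
        {"experience": 0.9, "expertise": 0.8,
         "authoritativeness": 0.7, "trustworthiness": 0.9},
        {"experience": 0.6, "expertise": 0.5,
         "authoritativeness": 0.5, "trustworthiness": 0.6},
    ],
    "python tutorials": [
        {"experience": 0.1, "expertise": 0.2,
         "authoritativeness": 0.2, "trustworthiness": 0.3},
    ],
}

print(flag_weak_serps(serps))  # → ['python tutorials']
```

In this toy version, a flagged query would trigger the detailed analysis and coefficient tuning described above, without delaying the SERP itself.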
I may be completely off base, but parsing a Googler's exact words is not the point of this article.
Of the hundreds of SEOs who may have noticed my invitation to discuss, about five responded seriously. They are old friends, and I have met three of them in person many times. Other reactions amounted to weak humor, sarcasm, or skepticism about Google's statements. Then, crickets.
Where is your SEO curiosity heading?
Despite recent attempts to bring this topic up, I'm shocked that the Googler's unusual remarks didn't lead to further discussion.
What happened to the legendary SEO curiosity that produced guesses at the “200 ranking factors”? Multiple authors surveyed the SEO community to identify and rank the key ranking factors, and we enjoyed adding our own observations to the pool of knowledge.
A lot of energy and curiosity goes into building great tools with Python, especially tools that take full advantage of AI. Part of that effort appears to be reinventing the wheel.
SEO tools are a hot topic of discussion every day. Is one keyword research tool really better than another, as its marketers would have us believe? Can AI writing tools really benefit every SEO niche?
There is no shortage of self-proclaimed experts growing their mailing lists by encouraging us to steal their “secrets.” A lot of misinformation is being passed on as fact.
We have lost the early explorers who analyzed every search engine patent and tried to correlate it with SERP observations. I miss pioneers like Ted Ulle and Bill Slawski, who analyzed algorithm updates and tried to pinpoint ways to avoid being caught on the wrong side of them.
Be more curious
My curiosity about SEO hasn't completely disappeared. Take EEAT: many argue that these factors are not part of Google's ranking system, and a healthy skepticism of what search engine spokespeople say is fine.
Channel your curiosity into research. You may not want to share your results if they don't make sense. Recently, we saw Cyrus Shepard research 50 sites to look for correlations between the features on those websites and the winners and losers of Google's algorithm updates.
Shepard discovered that one characteristic of the “winning” websites is “experience.” But haven't SEOs kept repeating that EEAT is not part of the ranking algorithm?
It may not be a direct mechanism, but any algorithm that looks at experience could send positive signals to the ranking algorithms. Since relatively few pages are product or place reviews, it makes sense to keep an experience algorithm separate from the core ranking algorithm.
I had the pleasure of watching curious SEO Daniel K. Cheung build a matrix of EEAT attributes for auditing pages. His working assumption is that each attribute needs a numerical weight, since some attributes have a greater impact on a page than others.
For example, one attribute could be the presence of a video of the author using the product being reviewed; it may have a bigger impact than a still image of the same scene. It doesn't matter that whatever Google actually does is far more subtle. Such curiosity gives us ideas to test.
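The weighting idea can be sketched in a few lines. The attribute names and weights below are hypothetical — they are not Cheung's actual matrix, just an illustration of how a weighted audit could rank one page's evidence above another's:

```python
# Toy EEAT audit matrix. Attribute names and weights are hypothetical,
# invented to illustrate the weighting idea, not Cheung's real matrix.

AUDIT_WEIGHTS = {
    "author_video_with_product": 3.0,  # assumed strongest experience evidence
    "author_photo_with_product": 1.5,
    "author_bio_with_credentials": 2.0,
    "cited_primary_sources": 1.0,
}

def audit_score(page_attributes: set) -> float:
    """Sum the weights of the EEAT attributes present on a page."""
    return sum(w for attr, w in AUDIT_WEIGHTS.items() if attr in page_attributes)

page_a = {"author_video_with_product", "cited_primary_sources"}
page_b = {"author_photo_with_product", "cited_primary_sources"}

# A video of the author using the product outweighs a still image,
# mirroring the example in the text.
print(audit_score(page_a), audit_score(page_b))  # → 4.0 2.5
```

A matrix like this gives an auditor a repeatable score to compare pages with, even if the weights are only a starting hypothesis to refine through testing.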
Be skeptical
Some may argue that Shepard's sample of 50 sites is not large enough. Fair enough. One of the leading SEO tool vendors could have its crawlers examine a million websites and tell us whether the data agrees with his ideas.
There's no need to wait for a tool company to conduct an investigation. Pick 100 or more sites and run your own tests. Rinse and repeat until you are ready to present your results.
The opinions expressed in this article are those of the guest author and not necessarily those of Search Engine Land. Staff authors are listed here.