In a recent episode of Google's Search Off The Record podcast, team members put Gemini into action to create SEO-related content.
But their experiment raised concerns about factual inaccuracies when relying on AI tools without proper vetting.
During the discussion, Lizzi Harvey, Gary Illyes, and John Mueller took turns using Gemini to write sample social media posts about technical SEO concepts.
While analyzing Gemini's output, Illyes highlighted limitations common to all AI tools.
“My big problem with almost all generative AI is the factuality. You always have to fact-check what the AI spews out. That's scary in a sense, since we're just reading it live now. It's scary to think that I might end up saying something that isn't true.”
Outdated SEO advice exposed
The concerns stemmed from a Gemini-generated tweet suggesting the use of rel="prev"/"next" for pagination – markup that Google has deprecated.
Gemini suggested publishing the following tweet:
“Is pagination causing duplicate content headaches? Use rel=prev, rel=next to guide Google through your content sequence. #TechnicalSEO, #GoogleSearch.”
Harvey was quick to point out that the advice was outdated. Mueller confirmed that rel=prev and rel=next are no longer supported:
“It's gone. It's gone. Well, I mean, you can still use it. There's no need to get rid of it. It's just ignored.”
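For context, the pagination markup in question was a pair of link elements in the page head pointing to the previous and next pages in a sequence. A minimal illustration (the URLs here are placeholders, not from the podcast):

```html
<head>
  <!-- Deprecated pagination hints: Google announced in 2019
       that it no longer uses these for indexing. -->
  <link rel="prev" href="https://example.com/articles?page=1">
  <link rel="next" href="https://example.com/articles?page=3">
</head>
```

Other consumers of the markup may still read these hints, which is why Mueller notes there is no need to remove them, but they no longer influence how Google processes paginated content.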
Earlier in the podcast, Harvey had warned that outdated information in training data can lead to inaccuracies:

“If there are enough myths circulating, or certain ideas about something, or even outdated information that's been written about many times on the blog, maybe it will come up in today's exercise.”
Sure enough, it didn't take long for outdated information to surface.
Human oversight remains important
The Google Search team recognized the potential of AI-generated content, but the discussion emphasized the need for human fact-checking.
Illyes' concerns reflect a broader debate about responsible AI adoption: human oversight is necessary to prevent the spread of misinformation.
As the use of generative AI increases, remember that its output cannot be blindly trusted without verification by subject matter experts.
Why SEJ is paying attention
As Google's own team has shown, AI-powered tools can help with content creation and analysis, but a healthy degree of skepticism is warranted.
Blindly deploying generative AI to create content risks publishing outdated or harmful information that can damage your SEO and reputation.
Listen to the full podcast episode below.
FAQ
How does inaccurate content generated by AI impact your SEO efforts?
Using AI-generated content on your website can be risky for SEO, as AI output can contain outdated or inaccurate information.
Search engines like Google prefer high-quality, accurate content, so publishing unverified AI-generated material can negatively impact your website's search rankings. For example, if AI recommends outdated practices like using rel="prev"/"next" tags for pagination, it can mislead readers and search engines and undermine your site's credibility and authority.
To ensure that AI-generated content follows current best practices, it is important to carefully fact-check it and verify it with experts.
How can SEO and content marketers ensure the accuracy of the output generated by AI?
To ensure the accuracy of AI-generated content, SEO and content marketers should:
- Conduct a thorough review process involving subject matter experts
- Have experts check your content to ensure it follows current guidelines and industry best practices
- Fact-check data and recommendations from AI against trusted sources
- Stay informed of the latest developments to identify outdated information generated by AI
Featured image: Screenshot from YouTube.com/GoogleSearchCentral, April 2024.