Without strong technical optimization, the rest of your search engine optimization (SEO) is meaningless. Unless search bots can crawl and index your site, they can't rank your content in search results.
Understanding technical SEO issues requires more than just running a site audit with Semrush or crawling with Screaming Frog's SEO Spider. While these tools are useful for auditing your site's technical SEO, you need to dig deeper to find the issues that truly affect your site's ability to be crawled and indexed. As you work with these tools and the many others that can help diagnose technical SEO problems, make sure to audit your site for these 7 technical issues that can sabotage your SEO efforts:
Blocked Crawl Paths
1. Robots.txt File
The robots.txt file is the ultimate gateway: it's the first place ethical bots check when crawling your site. A simple text file, robots.txt contains allow and disallow directives that tell search engines what to crawl and what to ignore. A mistake in this file can accidentally disallow your entire site, instructing bots not to crawl anything. To check for a site-wide disallow, open your robots.txt file at https://www.yoursite.com/robots.txt and look for the following:
User-Agent: *
Disallow: /
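By contrast, a disallow rule scoped to a single directory blocks only that section of the site (the /cart/ path here is a hypothetical example):

User-Agent: *
Disallow: /cart/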
To find other errors in your robots.txt file, use a checker such as Merkle's robots.txt validator. After you enter a URL, the validator displays your robots.txt file and tells you whether the URL you entered is allowed or disallowed.
Indexing Blocks
2. Meta Robots Noindex Tag
This meta tag, placed in the <head> of your page's rendered HTML, tells search engines not to index that page.
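The tag itself looks like this:

<meta name="robots" content="noindex">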
The best ways to find pages blocked by meta robots noindex tags are to run a crawl of your site with a crawler like Screaming Frog's SEO Spider, or to open the Pages report in Google Search Console and click the “Excluded by 'noindex' tag” report. Note that Google Search Console shows only a sample of 1,000 affected pages, so for larger sites this method may not provide a complete analysis of the issue.
3. Canonical Tags
Canonical tags, which Google treats as suggestions rather than directives, identify which page should be indexed from a set of duplicate pages. By specifying a canonical URL for each page, you can ask Google not to index duplicate URLs, such as those containing tracking parameters. The canonical tag sits in the <head> of a page's rendered HTML and looks like this:
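<link rel="canonical" href="https://www.example.com/page/">

Here, https://www.example.com/page/ is a placeholder for the version of the page you want indexed.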
However, sometimes the canonical tag is wrong and asks Google not to index a page that you actually want in the index. The best way to identify incorrectly canonicalized URLs is to run a crawl of your site with a crawler such as Screaming Frog's SEO Spider and compare the canonical URLs to the crawled URLs. You can also check the “Alternate page with proper canonical tag” and “Duplicate without user-selected canonical” reports in Google Search Console.
Missing Links
4. Malformed or Missing Anchor Tags
There are many ways to code a link that a browser can follow, but there is only one method that Google says it will definitely follow: an anchor tag with an href attribute and a qualified URL. For example, a crawlable link looks like this:
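<a href="https://www.example.com/flannel-shirts/">Flannel Shirts</a>

(The URL and anchor text above are placeholders.)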
Here are some examples of URLs that Google will not or cannot crawl:
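<span href="https://www.example.com/page/">Link text</span>  <!-- not an anchor tag -->
<a onclick="goTo('https://www.example.com/page/')">Link text</a>  <!-- no href attribute -->
<a href="javascript:goTo('page')">Link text</a>  <!-- href is not a qualified URL -->

(The URLs and the goTo() function in these snippets are placeholders; the patterns reflect Google's guidance on making links crawlable.)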
To see whether links are coded with anchor tags, href attributes, and qualified URLs, you need to view the page's source code or its rendered HTML.
5. Pages Orphaned by Bad Links
Any page you want to rank needs crawlable links pointing to it. Simply having a URL in your XML sitemap can potentially get it indexed, but if no crawlable URLs on your site point to that page, it's much less likely to rank. And, as mentioned above, those links need to be anchor tags with href attributes and qualified URLs. If bad links are the only way to reach your content pages, Google won't be able to follow links to them, so they'll appear orphaned.
For example, e-commerce category pages or blog pagination are often uncrawlable because of bad links, meta robots noindex tags, canonical tags, robots.txt disallow rules, or, occasionally, all of the above. Missing crawlable links, combined with signals not to crawl or index content, can make product or blog pages appear orphaned if they are linked to only from paginated pages. To diagnose this issue, first analyze the links in the pagination, then check the second page of the pagination set for the other blocking signals listed above.
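As a hedged illustration, here is hypothetical markup from page 2 of a paginated category that noindexes itself, canonicalizes to page 1, and paginates with a JavaScript-only link, leaving the products linked only from deeper pages effectively orphaned:

<!-- https://www.example.com/flannel-shirts/?page=2 (hypothetical URL) -->
<meta name="robots" content="noindex">
<link rel="canonical" href="https://www.example.com/flannel-shirts/">
<a onclick="loadPage(3)">Next</a>  <!-- no href, so the next page can't be crawled -->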
Mobile SEO Issues
On July 5, 2024, Google will complete the mobile-first indexing plan it started in 2016 and will crawl sites only with the Googlebot Smartphone user-agent. If your site doesn't load on a mobile device, Google won't crawl or index it. Even if it does load, the mobile version of a site often has fewer features than the desktop version, which can be problematic for technical SEO.
6. Mobile Navigation Issues
Navigation is a great way to pass link authority because every page on your site links to every page included in the header and footer navigation. That means every page linked in your header and footer picks up a bit more relevance and link authority simply by being present in those site-wide navigation elements. However, mobile navigation differs from desktop navigation and often contains fewer links in the name of “ease of use.”
For example, apparel e-commerce sites like L.L.Bean have rollover navigation four levels deep. They do it right: from any page, on both desktop and mobile, you can navigate four levels into the site's hierarchy, such as Clothing > Women's > Shirts & Tops > Flannel Shirts. If you compare the navigation on a mobile device to the desktop navigation, you'll see that it contains the same links to the same content.
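By contrast, here is a hedged sketch (with hypothetical URLs) of a site whose mobile menu keeps only the top level, so deeper pages lose their site-wide internal links once Googlebot crawls as a smartphone:

<!-- Desktop navigation: four crawlable levels -->
<nav>
  <a href="/clothing/">Clothing</a>
  <a href="/clothing/womens/">Women's</a>
  <a href="/clothing/womens/shirts-tops/">Shirts & Tops</a>
  <a href="/clothing/womens/shirts-tops/flannel-shirts/">Flannel Shirts</a>
</nav>

<!-- Mobile navigation: only the top level remains -->
<nav>
  <a href="/clothing/">Clothing</a>
</nav>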
Analyze your own site's navigation as well. If your mobile navigation isn't consistent with (or better than) your desktop navigation and doesn't link to the same or greater depth within your site, Googlebot won't pass internal link authority as deeply into your site as it would when crawling your desktop site, reducing the visibility of your lower-level pages and their chances of ranking.
7. Mobile Content Issues
Just as mobile versions of sites have fewer navigation elements, designers tend to omit content elements from the mobile versions of their sites to streamline the experience for mobile users. However, because Googlebot currently crawls only as a smartphone, content that is available only on desktop may not be seen or indexed by Google. Analyze your site's pages on a mobile device to make sure the same content elements are available on mobile as on desktop.
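As a hedged illustration (hypothetical markup), consider a product template whose mobile version drops a content block entirely; because the section never appears in the mobile HTML, Googlebot Smartphone never sees it and cannot index it:

<!-- Desktop template -->
<section id="buying-guide">
  <h2>Flannel Shirt Buying Guide</h2>
  <p>Detailed sizing, fabric, and care information.</p>
</section>

<!-- Mobile template: the section above is omitted from the HTML entirely,
     so Googlebot Smartphone never sees or indexes it -->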
While many other technical issues can affect SEO, these are the ones I see most often and that have the most detrimental effect on organic search performance. Audit your site for these 7 SEO issues today, and you can be confident that all of the content you want crawled and indexed actually is, and that your site is ready to rank and drive traffic.