If you’re diving into the world of search engine optimization (SEO), you’ll quickly encounter its three pillars: on-page SEO, off-page SEO, and technical SEO. These are the fundamentals of organic SEO that every website owner or marketer should understand.
This blog covers what is often perceived as the most challenging of the three: technical SEO. Continue reading for a step-by-step guide to technical SEO.
Technical SEO is the process of ensuring that a website meets the technical requirements of modern search engines, with the goal of improved organic rankings. The most important elements of technical SEO include crawling, indexing, rendering, and website architecture.
Technical SEO is important because Google (and other search engines) need to be able to find, crawl, render, and index the pages on your website. Even if you have the best, most unique content on the web, you’ll likely suffer subpar rankings unless your technical SEO is in place.
If you’re wondering where to start when it comes to improving your technical SEO, keep reading. What follows is a basic technical SEO guide to help you improve your website’s technical optimization.
To give you a brief preview, here are some of the items we’ll need to cover:
Crawlability
Indexability
XML sitemaps
URL structure
Structured data
Page speed
Mobile-friendliness
Content duplication
If this is your first time doing a technical SEO audit, you may be a bit overwhelmed.
As noted above, technical SEO is the most complex of the three SEO pillars, which often makes it seem unapproachable to newcomers.
In this next section, we’ll discuss a handful of simple steps you can take if you want to do a quick technical SEO audit of your website to see where you should start.
Crawlability is how well a search engine can access and crawl your site’s content without running into a broken link or dead end. Crawlability matters most for your keyword-targeted pages: you can target as many keywords as you want, but if the pages behind them aren’t crawlable, Google will struggle to crawl and index them, and your page (and site) won’t rank well. Searchers won’t be able to find your page even when searching for keywords for which your on-page content is highly relevant.
What you need to do first: choose and familiarize yourself with a website crawler (a personal favorite is Screaming Frog), and set it loose on your website. Crawlers like Screaming Frog scan your website’s code and record every HTTP status code, title tag, image, link, and piece of text they can see. Things you'll want to keep a close eye on – since they will impact how search engines crawl the site, too – are crawl errors (404s), redirects, and links marked nofollow.
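To make that last point concrete, here is a minimal sketch of one thing a crawler records: the links on a page and whether each carries rel="nofollow". It uses only Python's standard library, and the HTML snippet is a made-up example.

```python
from html.parser import HTMLParser

class LinkAuditParser(HTMLParser):
    """Collects every <a href> on a page, noting which links carry rel="nofollow"."""
    def __init__(self):
        super().__init__()
        self.links = []  # list of (href, is_nofollow) tuples

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        if href:
            rel = (attrs.get("rel") or "").lower().split()
            self.links.append((href, "nofollow" in rel))

# Example: audit the links in a snippet of page HTML
parser = LinkAuditParser()
parser.feed('<a href="/shirts">Shirts</a> <a rel="nofollow" href="/login">Log in</a>')
print(parser.links)  # [('/shirts', False), ('/login', True)]
```

A real crawler does far more (status codes, redirects, titles), but this is the core idea: parse the HTML, extract the signals, and flag anything that would stop a search engine from following a link.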
A big factor in both crawlability and indexability (see next section) is a good site structure. Some basic tips for this include:
Your site hierarchy should cascade, topically moving from broad to more narrow.
After the Home page, move on to high-level category pages, sub-category pages, and individual pages.
Clear, top-down navigation makes it easier for both users and crawlers to navigate your website and get to the pages they seek.
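One way to make "top-down navigation" concrete is click depth: the minimum number of clicks it takes to reach a page from the home page. Shallow pages are easier for both users and crawlers to find. The sketch below computes click depth with a breadth-first search over a made-up internal-link graph; the page names are purely illustrative.

```python
from collections import deque

def click_depths(links, home="home"):
    """Breadth-first search over an internal-link graph: returns each
    reachable page's minimum number of clicks from the home page."""
    depths = {home: 0}
    queue = deque([home])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

# Hypothetical site: home → categories → sub-categories → products
site = {
    "home": ["clothing", "accessories"],
    "clothing": ["shirts"],
    "shirts": ["blue-oxford-shirt"],
}
print(click_depths(site))
# {'home': 0, 'clothing': 1, 'accessories': 1, 'shirts': 2, 'blue-oxford-shirt': 3}
```

Pages that come back with a high depth (or don't come back at all, because nothing links to them) are exactly the ones a crawler will struggle to reach.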
Indexability is how SEOs measure Google’s ability to analyze and add your webpages to its index. It’s a topic to which a whole standalone guide could (and soon, will) be dedicated.
To see which pages are currently indexed in Google, simply do a site search: enter "site:" followed by your domain in Google. This gives a first high-level overview of the site's indexation – check whether the number of results returned differs from the number of URLs listed in your XML sitemap.
If too many or too few URLs are indexed, then it's time to get to work.
💡 Golden rule of indexation: consistency of signals. Make sure your robots.txt file, XML Sitemap, rel="canonical" tags, meta robots tags, and internal linking strategy are all sending the same message to Googlebot about your webpages.
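One consistency check you can script yourself is making sure no URL in your XML sitemap is simultaneously blocked by robots.txt. Here's a sketch using Python's standard library; the rules and URLs are hypothetical placeholders.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules for an example store
robots_txt = """\
User-agent: *
Disallow: /cart/
Disallow: /search
"""

# Hypothetical URLs pulled from the site's XML sitemap
sitemap_urls = [
    "https://example.com/clothing/shirts/",
    "https://example.com/cart/checkout",   # conflicting signal!
]

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Flag sitemap URLs that Googlebot would be forbidden to crawl
conflicts = [u for u in sitemap_urls if not parser.can_fetch("Googlebot", u)]
print(conflicts)  # ['https://example.com/cart/checkout']
```

A URL that you ask Google to index (via the sitemap) while also telling it not to crawl (via robots.txt) is exactly the kind of mixed signal the golden rule warns about.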
Ultimately, the indexability component of the audit should take stock and inventory of what is being indexed, what should be indexed but isn't, and what needs to be omitted from the index.
When it comes to SEO, your XML sitemap is a map of your website for Google and other search engine crawlers. It allows crawlers to find and index your website's pages, and it's one of the most important elements influencing indexability.
An effective sitemap has the following features:
It should be formatted properly in an XML document.
It should follow XML sitemap protocol.
Each individual XML Sitemap can contain a maximum of 50,000 URLs (and must be no larger than 50 MB).
For larger sites, you need to break up large sitemaps into smaller sitemaps and use a Sitemap Index file to list all the individual sitemaps (think of this as a Sitemap for Sitemaps).
It should only include canonical versions of URLs.
It should not include noindex URLs.
It should include all new pages when you update or create them.
Your sitemap should include your most important pages, should be structured correctly, and should not include pages you don’t want Google to index.
Different CMSs offer a variety of tools to aid in the creation of XML Sitemaps; for example, Shopify will create and manage your XML Sitemap for you, while WordPress has plugin options like the Yoast SEO plugin for configuring them.
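If you'd rather see what the protocol actually requires, a minimal sitemap is just a <urlset> of <url>/<loc> entries. The sketch below builds one with Python's standard library; the URLs are placeholders.

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Build a minimal XML sitemap per the sitemaps.org protocol."""
    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
    return ET.tostring(urlset, encoding="unicode")

print(build_sitemap([
    "https://example.com/",
    "https://example.com/clothing/shirts/",
]))
```

A production sitemap generator would also handle the 50,000-URL / 50 MB limits and emit a sitemap index file for larger sites, but the core structure is this simple.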
URL structure is a key component of technical SEO.
Created strategically, URLs will help users — and Googlebot — to understand the hierarchy of and the relationship between all the content on your website.
An ideal URL structure is consistent, logical, and relevant to what your website is about. And assuming your site is larger than, say, a personal blog, that structure is reinforced by placing pages under their own directories (e.g., /category/shirts).
While URLs are not the place to stuff keywords, they should nonetheless take your SEO strategy into account; Googlebot does refer to URLs (in some capacity) to deduce the relevance of the respective webpage.
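As a hypothetical example (using a placeholder domain), a clothing retailer's URLs might cascade from broad to narrow like this:

```
https://example.com/                                    ← home page
https://example.com/clothing/                           ← category
https://example.com/clothing/shirts/                    ← sub-category
https://example.com/clothing/shirts/blue-oxford-shirt   ← individual product
```

Each segment of the path mirrors one level of the site hierarchy, so both users and Googlebot can infer where a page sits just by reading its URL.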
Structured data, most commonly in the form of Schema.org markup, is code placed on your website, with the purpose of helping the search engines return more informative — and often, more eye-catching — results for users. Schema markup enables your content to be eligible to appear in various search engine “Features” like image packs, the Answer Box, etc.
Structured data can be a daunting task to undertake at first, but this is an arena with some excellent how-to guides and resources on the web. Here are just some of the most helpful:
Types of Schema in SEO by seoClarity
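To show what this looks like in practice, here is an illustrative Schema.org snippet in JSON-LD form for a hypothetical product page; all names, URLs, and values are placeholders.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Blue Oxford Shirt",
  "image": "https://example.com/images/blue-oxford-shirt.jpg",
  "brand": { "@type": "Brand", "name": "Example Apparel" },
  "offers": {
    "@type": "Offer",
    "price": "49.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```

Markup like this is what makes a page eligible for rich results such as price and availability details directly in the SERP.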
Page speed is an enormous factor in not only how users react to your page but also how your page ranks. Make keeping your pages fast, responsive, and user-friendly a top priority. Consider assessing your page’s speed with the following tools.
Google PageSpeed Insights
Google Analytics Page Timing report
GTmetrix
Mobile-friendly sites are more important now than ever, as 52 percent of global internet traffic comes from mobile devices. Accordingly, Google announced in July 2019 that it was rolling out mobile-first indexing, meaning it would use the mobile version of your website for ranking and indexing.
So, with this, make sure you’re prioritizing the mobile version of your website. You can use Google’s free Mobile-Friendly Test to check if your page is responsive and easy to use from a mobile device.
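One small but essential building block of a mobile-friendly page is the viewport meta tag in your HTML <head>, which tells browsers to scale the page to the device's width rather than render a zoomed-out desktop layout:

```html
<meta name="viewport" content="width=device-width, initial-scale=1">
```

Responsive CSS does the heavy lifting from there, but without this tag, mobile browsers won't apply it correctly.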
Review your website’s content and make sure you haven’t duplicated information. For strong technical SEO, you’ll want to avoid duplicate content, because Google doesn’t like seeing copies of the same information. Why?
Duplicate content serves little original purpose to the end user.
Google struggles to understand which page to rank in SERPs.
Duplicate pages can end up competing against each other (or against the original source) in search results.
To check for duplicate content using Google search parameters, enter "info:www.your-domain-name.com."
Go to the last page of the search results. If you have duplicate content, then you may see the following message, “In order to show you the most relevant results, we have omitted some entries very similar to the X number already displayed. If you like, you can repeat the search with the omitted results included.”
If you have duplicate content, then you should run a website crawl and sort by page title to see if there are any duplicate pages on your site.
To clarify an oft-used, erroneous phrase: there is no actual “duplicate content penalty,” per se – though the phrase still gets used regularly by SEOs. But that doesn’t mean duplicate content can’t hurt your SEO; the logical end result of having it on your website is that Googlebot won’t know which webpage is the best result: yours, or the original. And suffice it to say, Google by now has gotten pretty good at spotting the imposter.
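When duplicates are unavoidable (say, URL parameters or printer-friendly versions of a page), the standard fix is a rel="canonical" tag in the <head> of each duplicate, pointing at the version you want indexed; the URL below is a placeholder.

```html
<link rel="canonical" href="https://example.com/clothing/shirts/" />
```

This is one of the "consistent signals" from the indexation golden rule: every duplicate declares the same preferred URL, so Googlebot knows exactly which page to rank.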
In this technical SEO guide, we’ve covered the basics you need for an initial website audit. These eight simple steps are a great start for any website owner or marketer looking to boost their technical SEO and make their site work effectively for search engines.