A successful SEO strategy is key to any web page achieving top spots in search results, a process made even trickier to navigate by puzzling jargon. This language barrier can make communicating with developers and agencies difficult from the outset and can hinder your search optimisation strategy from start to finish. If you’re looking to optimise your website and be visible in search results but are struggling to even get your SEO off the ground, it’s time to stop letting jargon stand in the way. The list below is by no means exhaustive, but it should be more than enough to clear the air and take some of the confusion out of SEO.
301 redirect
A 301 redirect can be applied whenever you need to change the web address of a page. It will permanently redirect visitors to the new address, ensuring that any links from other sites to the old address remain active and search engines can update their index.
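As an illustration, on an Apache server a 301 redirect can be set up with a single line in the site's `.htaccess` file (the paths and domain here are hypothetical):

```apache
# Permanently redirect the old address to the new one
Redirect 301 /old-page/ https://www.example.com/new-page/
```

Visitors and bots requesting /old-page/ receive a 301 status code and are sent on to the new URL.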
Algorithm
This is a program that search engines use to determine what appears in the search results for each search query and how those results should rank. Google frequently changes and updates its algorithm, which causes fluctuations in search result rankings and makes optimisation for search a constantly evolving process.
Anchor text
The anchor text is the clickable text of a link and helps search engines like Google to determine the authority of the link. The anchor text indicates the relevancy of the referring site and the link to the landing page; ideally, the same or similar keywords will be used for all three.
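For example, in the HTML below the anchor text is "blue widgets" (the URL is purely illustrative):

```html
<!-- "blue widgets" is the anchor text search engines read -->
<a href="https://www.example.com/blue-widgets/">blue widgets</a>
```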
Authority
Also known as trust, SEO authority refers to how authoritative a search engine deems a site to be for a search query. Websites with high authority, such as the BBC, will rank better for their chosen keywords. Links are a key determinant of authority, in terms of relevance, volume and particularly quality. A trusted site with plenty of quality internal links and backlinks means more than lots of irrelevant or poor-quality sites linking to your landing page.
Backlinks
These are simply any links into a landing page or website from an external page or website.
Black hat SEO
Black hat practices are SEO tactics that sit outside of standard best practices. They use malicious or disreputable techniques, such as spamming and link farms, to try to improve rankings.
Bot
Also known as a robot, spider or crawler, this is an autonomous program that finds and indexes web pages for search engines.
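A real crawler is far more complex, but its core loop — fetch a page, extract its links, queue those links for the next fetch — can be sketched in a few lines of Python (the HTML string here is a stand-in for a fetched page):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag, as a crawler would."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Stand-in for a page the bot has just fetched
page = '<a href="/about/">About</a> <a href="/contact/">Contact</a>'

parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # the URLs a crawler would queue up to visit next
```

A production bot would add fetching over HTTP, politeness delays, robots.txt checks and deduplication on top of this extraction step.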
Canonical issues (duplicate content)
The canonical URL is the official or legitimate version of content and is the best address for users to find the right information. Canonical issues, therefore, are problems with duplicate content. This is nearly impossible to avoid, as some page content will naturally be applicable at multiple addresses and search engines will identify some closely similar URLs as duplicates. For example, www.website.com, website.com, and www.website.com/index.htm. You can deal with these issues by specifying the canonical URL and using the noindex meta tag for non-canonical copies, as well as using 301 redirects to point duplicates at the canonical URL.
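For instance, each duplicate address can point search engines at the preferred version with a canonical link tag in its `<head>` (the domain is hypothetical):

```html
<link rel="canonical" href="https://www.example.com/" />
</link>
```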
Index, indexed pages
An index is a database of web pages used by search engines when serving search results, and their crawl programs index new web pages to add to this database. Indexed pages are the pages of a website that have been added to this database to appear in search results.
Keyword cannibalisation
This is when the same keyword is targeted across more than one page on your website. When this happens, search engines struggle to determine which page is best to serve for search queries using that keyword, and users can also become confused about where best to find the right information.
Keyword stuffing
This is when the same keyword is used repeatedly throughout a piece of content or on a page, either by using it excessively in text and rendering the content difficult to read, or by other deceptive methods such as listing keywords in the footer of a page.
Latent semantic indexing (LSI)
This describes the way that closely related words or words that are commonly used in association with each other are indexed by search engines.
Link bait
This is a web page designed expressly with the purpose of gaining backlinks. It often takes the form of informative blog posts with unique research that is promoted heavily via social media and other outreach.
Link building
This is the activity of gaining more backlinks to a web page. It can be done with strategies such as link bait content, affiliate programmes and other outreach and relationship building.
Long tail keyword
Long tail keywords are search queries with a longer string of words as opposed to one-word search queries. They make up most search queries and are more specific and often less competitive than shorter keywords.
Meta tags
A meta tag holds descriptive information about a web page and sits in the page’s source. It isn’t visible on the frontend, but is used by search engine crawlers and can appear in search results. This information sits within the head section of an HTML page.
The meta title is the title of the page. It is used to name a tab in your browser and appears as the title/headline of a result in the SERPs.
Meta description is the description of the page that can contain details and keywords that accurately describe the content. It helps crawlers when indexing the page and may appear as the description beneath the title on a search results page. It should, therefore, be appealing and descriptive enough for users.
Meta keywords are simply the keywords you would like associated with your page to help it appear for the right search queries, though most major search engines, including Google, now ignore this tag for ranking purposes.
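Put together, these tags sit in the `<head>` of a page roughly as follows (the titles and descriptions here are purely illustrative):

```html
<head>
  <title>Blue Widgets | Example Shop</title>
  <meta name="description" content="Hand-made blue widgets, delivered across the UK.">
  <meta name="keywords" content="blue widgets, buy widgets">
</head>
```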
Mirror site
This is an identical copy of an existing site at a different address. It is usually used when there is too much traffic for one server and can also improve site speed, for example by serving separate mirrors to US and UK visitors.
Nofollow
This command can be used to tell bots not to follow a specific link, or even all the links on a page. It ensures that no SEO credit is passed to the site being linked to. It should never be used on internal links, but can be used on external links to sites that you don’t wish to endorse.
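In HTML the command is simply a rel attribute on the link itself (the URL is hypothetical):

```html
<!-- The link still works for users, but passes no SEO credit -->
<a href="https://www.example.com/" rel="nofollow">an unendorsed link</a>
```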
Noindex
This is another command that tells bots not to index the page or a specific link.
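At page level, this is a robots meta tag placed in the page's head:

```html
<!-- Tells bots not to add this page to their index -->
<meta name="robots" content="noindex">
```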
A term used by some SEO service providers, often as part of a sales process in which they falsely claim to have unique techniques that can, for example, guarantee top-ten rankings. It’s essentially a load of rubbish.
Reciprocal link
Also known as a link exchange or link partner, this is when two sites link to each other for mutual benefit. While these can still be valuable backlinks, search engines often won’t assign them as much authority as links that are more organic and therefore trustworthy.
Referrer
This is information sent from a user’s browser as they navigate pages on the web. It can help inform webmasters about how users are finding certain pages, based on the pages they arrived from.
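Concretely, the browser sends this as a Referer header (the misspelling is historical and baked into the HTTP specification) with each request. A request that followed a link from a hypothetical blog post might look like this:

```text
GET /blue-widgets/ HTTP/1.1
Host: www.example.com
Referer: https://blog.example.org/widget-roundup/
```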
Robots.txt
This is a file in the root directory of a website that gives instructions to web bots to help control their behaviour. This can include making sure certain sections apply to all bots, or telling them not to crawl a specific page on a website.
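A minimal robots.txt might look like this (the paths and domain are hypothetical):

```text
# These rules apply to all bots
User-agent: *
# Keep crawlers out of the admin area
Disallow: /admin/
# Point bots at the sitemap
Sitemap: https://www.example.com/sitemap.xml
```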
Scraping
This is the act of copying content from another page or site, often using automated bots.
SEM
This simply stands for search engine marketing.
SERP
SERP simply stands for search engine results page.
Spider trap
A spider trap is when a bot becomes trapped in an intentional or unintentional infinite loop of web pages. This lowers the productivity of the spider, wastes resources and can eventually cause the crawler to crash. It can happen accidentally, for example with a web calendar that uses dynamic pages with links that continually lead to the next day or year. It is also sometimes an intentional way of trapping spambots. Non-hostile search bots will usually spread requests between different hosts rather than hitting the same server repeatedly, which helps them avoid traps like this, and a robots.txt file can be used to tell these bots to avoid the offending pages, leaving the spider trap to catch spambots.
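Following the calendar example above, a short robots.txt rule can warn well-behaved bots away from the trap (the path is hypothetical):

```text
# Stop crawlers following the endless next-day/next-year links
User-agent: *
Disallow: /calendar/
```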
Splash page
A splash page or splash screen is often used as an introduction to a website or the main content of a page. It features images and graphics that are usually animated, with little or no textual content. They can entice users to explore the rest of the site or content, but may have little SEO value and can be a dead end for crawlers that need text links to navigate. Poorly designed splash pages can be an unnecessary barrier to content and a pain for users as well as search engines.
Stickiness
This term is used in SEO and UX development and is essentially the opposite of bounce rate. Experimenting with ways to get users to stay longer and engage more with a page or website is working to improve its stickiness.
This refers to a group of pages that link to each other but are not linked to by any other source. This means they will rank poorly and have little authority, but will still be indexed if they are included in a site map.
White hat SEO
The other side of the coin to black hat SEO, white hat SEO covers approved tactics that work within the best practices set by search engines like Google.
If you're still struggling with all the SEO jargon, why not call one of our team and find out how we could take the confusion out of SEO and help more customers find your business in the search results?