Search engine marketing is important in digital marketing because search engines look for original content when they evaluate a website; if yours provides it, it has a much better chance of ranking well.
An important part of SEO is making your website distinctive both for users and for web crawlers. A search engine can return every page whose links and keywords match a query, but those pages may still not be what the user is looking for. That is where SEO helps: it helps the search engines work out what each page is about and how it is useful to the user.
SEO - good or bad?
Some people argue that search engines shouldn't require websites to follow rules and principles in order to be indexed. They argue that all relevant content should appear in search results, not just pages that have been 'optimised' by unlicensed search marketing professionals.
However much search engines improve, they cannot yet work out very specific details on their own. A query like 'my friend, a blonde-haired, blue-eyed girl' is unhelpful because it is difficult for a crawler to recognise those specifics in photographs. SEO is a practice that allows webmasters to provide clues the engines can use to understand content. Adding structure to your content is essential to SEO, so that the engines can read it accurately and efficiently.
In my opinion, SEO is a useful and essential practice for getting a search engine to notice your website: taking the limitations of search engines into account lets you shape your web content in a way the engines can understand.
The most common causes of problems with inclusion and rankings are:
Issues crawling & indexing
- Online forms: Content behind logins or other online forms rarely appears in search engines, because crawlers do not fill forms in.
- Duplicate pages: Copying information from other websites means your pages will be lost in the rankings, because they are just duplicates.
- Blocked in the code: Errors in a site's crawling directives (such as robots.txt) can block search engines from the site entirely.
- Poor link structures: If a website's link structure is difficult for search engines to follow, pages may be deemed unimportant and given minimal exposure in the index.
- Non-text content: Search engines still find it difficult to read rich content such as audio, images, video and plug-in files.
Problems matching queries to content
- Uncommon terms: Different people use different terms for the same object, which can cause problems for the search engine. For example, someone searching for a 'fridge' while your page calls it a 'refrigerator'.
- Language and internationalisation subtleties: The search engine may be confused by regional spellings, for example 'colour' and 'color'.
- Incongruous location targeting: Writing content in English when the majority of your visitors are from France.
- Mixed contextual signals: When the title of a blog post or page does not match the body of the text.
Make sure that if you create a website you market it accordingly: the search engines themselves have no idea how good the quality of your content is. They base their results largely on popularity, i.e. what the majority of people are currently engaging with.
To earn better rankings you should use HTML as much as possible, since that is the format search engines read most efficiently. If your website needs more advanced visual techniques, provide crawlable alternatives. For example:
- Instead of Java applets or Flash, use text on the page.
- Give alt text for images: save images as jpg, gif or png and use the alt attribute in your HTML to give search engines a text description of the visual content, making it easier for the engine to read.
- Instead of search boxes, use navigation and crawlable links that the search engine can read easily.
- Provide a transcript for video and audio content if the words and phrases used are meant to be indexed by the engines.
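As a sketch of these guidelines, a page might provide crawlable alternatives like this (the file names and URLs are made up for illustration):

```html
<!-- Plain anchor links instead of a search box or script-driven menu -->
<nav>
  <a href="/products.html">Products</a>
  <a href="/about.html">About us</a>
</nav>

<!-- The alt attribute gives the crawler a text description of the image -->
<img src="/images/red-kettle.jpg" alt="Red electric kettle, 1.7 litre">

<!-- A transcript makes spoken content indexable -->
<video src="/media/demo.mp4" controls></video>
<p>Transcript: in this demo we show how to descale the kettle.</p>
```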
Proof-reading your website is always essential, as it will help determine whether you can succeed in search. Tools such as Google's cache are helpful here: they let you view your website the way the engines see it, so you can check which parts of your content are indexable.
Crawlable link structures
Even though search engines need an index to find all the websites relevant to a search, there first need to be links for the crawler to follow. A crawlable link structure is one that lets crawlers browse the pathways of a website, and it is essential for finding all of the site's pages. Many sites make the mistake of hiding or burying their navigation, making it difficult for the engines to find their pages. A website may contain genuinely valuable pages that, for any number of reasons, the crawler simply cannot reach. The most common reasons pages are unreachable are:
- Robots don't use search forms: Crawlers do not use a website's search box to find content, so pages reachable only through site search stay hidden until a crawled page links to them.
- Submission-required forms: Search engines do not complete online forms, so any content that sits behind one is effectively invisible to them.
- Frames or iframes: Links inside frames and iframes are technically crawlable, but their structure makes it harder for the engines to understand how the site is organised.
- Links in Flash, Java and other plug-ins: Crawlers cannot yet extract links embedded in these formats, so any page reachable only through them stays hidden from users' search queries.
- Links on pages with too many links: Search engines will only crawl a limited number of links per page. This is necessary to cut down on spam and preserve ranking quality, so pages with an excessive number of links are less likely to have all of them crawled and indexed.
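To make the contrast concrete, here is a sketch of a link a crawler can follow versus one it usually cannot (the URL is hypothetical):

```html
<!-- Crawlable: a standard anchor tag with an href the spider can follow -->
<a href="/category/kettles.html">Electric kettles</a>

<!-- Hard or impossible to crawl: the destination exists only in script -->
<span onclick="window.location='/category/kettles.html'">Electric kettles</span>
```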
Nofollow links
Popular search engines such as Google, Bing and Yahoo! have stated that they do not count nofollow links in their link graph. A nofollow link is a literal instruction to the engine not to follow the link, although on occasion they do anyway. The attribute was created to reduce comment spam and other automated link injection, and over the years it has become a general signal telling the engines not to pass value through a link.
Nofollow links are not a bad thing to have pointing at your website. They carry little direct value, but they are a natural part of a link profile: a site with a lot of inbound links will naturally accumulate many nofollow links, and high-ranking sites tend to have more nofollow links than lower-ranking ones.
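In HTML, nofollow is just a rel attribute on an ordinary anchor; a sketch (example.com is a placeholder domain):

```html
<!-- A normal link: engines may follow it and count it in their link graph -->
<a href="http://www.example.com/">Example site</a>

<!-- rel="nofollow" asks the engines not to follow or count this link -->
<a href="http://www.example.com/" rel="nofollow">Example site</a>
```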
Keyword and usage targeting
Keywords are one of the most essential parts of the search process, because they are what connects a user's query to a website. As the search engines crawl and index, they record the keywords used on each page, and those keywords form the basis of the search: the terms the user types into the engine. If you have made a website, make sure the keywords you want to rank for appear in the crawlable content of your documents.
Keywords dominate the search, and how they are placed can completely change one: punctuation, word order and position all provide key information that helps the crawlers return the most accurate and relevant results possible.
Unfortunately, keyword abuse is common today. The purpose of using keywords is not to rank highly for every keyword your website is loosely linked with, but to rank highly for the specific keywords that people who want what your site offers are searching for. Many webmasters stuff keywords into their pages to manipulate the engines. When search engines first started, they relied heavily on keywords; today, even where they cannot fully understand the text, they are learning to identify the most relevant pages by other means.
The keyword density myth
Did you know? Keyword density is not part of modern ranking algorithms: a keyword density analyser simply cannot tell which document is more relevant. Here is a list of things a density ratio tells us nothing about:
- The proximity of keywords in documents
- Where the terms appear in a document
- The co-citation frequency between terms
- The main heading, sub-heading and theme of the documents
Overall, keyword density tells us nothing about the content, quality, semantics or relevance of a document.
Targeting and keyword usage are still a really important element of the search engines' ranking algorithms. A target keyword phrase should be used:
- In the title tag at least once, ideally near the beginning of the tag.
- Around two to three times in the main body of the document.
- Not much more than that: repeating the phrase further has been shown to add little benefit.
- At least once in the alt attribute of an image on the page. This helps not only web search but also image search.
- Once in the URL.
- At least once in the meta description tag. This isn't used by the engines for rankings, but it helps attract clicks from searchers reading the results page, where the engines usually show it as the snippet of text.
Also, do not use the target keyword in the anchor text of links pointing to other pages on your own site; this is called keyword cannibalisation.
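Putting the placement rules above together, a page targeting the hypothetical phrase "running shoes" might look like this (brand, copy and URL are all invented):

```html
<head>
  <!-- Once in the title tag, near the front -->
  <title>Running Shoes | Acme Sports</title>
  <!-- Once in the meta description: shown as the snippet, not used for ranking -->
  <meta name="description" content="Compare lightweight running shoes for road and trail.">
</head>
<body>
  <h1>Running Shoes</h1>
  <!-- Two to three times in the main body -->
  <p>Our running shoes are built for comfort. Every pair of running shoes is road-tested.</p>
  <!-- At least once in an image alt attribute -->
  <img src="/img/running-shoes.jpg" alt="running shoes">
</body>
<!-- And once in the URL, e.g. http://www.example.com/running-shoes -->
```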
The title of a web page is generally a description of the page's content. A title tag matters for both the user and the crawler: the user can see what the page is about, and the crawler can use the page as a result for a potential search. Because it is such an essential part of optimisation, here are some tips for creating a title tag:
- Be mindful of length: Ideally you want a title that is short and snappy, intriguing the user while still giving the crawler keywords. The engines display only 65-75 characters of a title in the search results, followed by an ellipsis. However, if you're targeting several keywords it can be wise to go longer.
- Move important keywords close to the front: The closer the keywords are to the front of the tag, the better the chance of high rankings and of users clicking on your page.
- Include branding: Using your brand in the title tag can improve rankings and click-through rates among people who know and like the brand.
- Emotional impact: The title tag is the first thing a user will see of yours, so make sure it conveys the right message about your website or page. An interesting and intriguing title will ultimately engage the reader.
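A title tag following these tips, with the keyword phrase first and the brand last, might look like this (phrase and brand invented), fitting within the 65-75 characters the engines display:

```html
<title>Running Shoes for Trail and Road | Acme Sports</title>
```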
Meta tags

A meta tag is a piece of code that describes some aspect of a web page; meta tags were originally intended as a proxy for information about a site's content. They can also be used to control the search engine's crawler activity: meta robots tags manage how the engines may use a page. Here are some of the values that give the engines instructions:
- Index/Noindex: This tells the engine whether the page should be kept in the search engine's index. A noindex tag indicates that the page should be excluded from the index.
- Follow/Nofollow: Similar to index/noindex, but applied to the links on the page rather than the page itself. With nofollow, the engines ignore the page's links for discovery and ranking purposes.
- Noarchive: This stops search engines from saving a cached copy of the page. The search engines automatically keep visible copies of all pages they have indexed, accessible to searchers through the cached link in the search results.
- Nosnippet: Tells the engines not to show the small block of descriptive text next to the page's title and URL in the results.
- Noodp/Noydir: Specialised tags telling the engines not to use snippets of content from the Open Directory Project or the Yahoo! Directory in the search results.
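These directives are combined in a single meta robots tag in the page's head; a sketch:

```html
<!-- Keep this page out of the index, but still follow its links -->
<meta name="robots" content="noindex, follow">

<!-- Index the page, but show no cached copy and no snippet -->
<meta name="robots" content="noarchive, nosnippet">
```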
The meta description tag is the small piece of text that describes a page's content. It is used to advertise the page and catch the reader's eye, which shows how important the meta description is in search marketing. Make it compelling enough that the reader wants to click through, all in around 160 characters or fewer. If you do not provide a meta description, the search engine will usually build a snippet from keywords that link the user's search to the page.
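The meta description is a single tag in the page's head; here is a sketch, with invented copy kept under 160 characters:

```html
<meta name="description" content="Hand-made ceramic mugs, fired in small batches. Free UK delivery on orders over £20.">
```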
Meta Keywords: A specific meta tag in a page's HTML code that was meant to tell the engine what the page is about. Meta keywords are rarely used any more, because the engines no longer rely on them for optimisation.
Meta Refresh, Meta Revisit-after, Meta Content-type, and others: These tags still exist, but they are no longer important to the SEO process.
A URL is a type of URI (Uniform Resource Identifier), the general term for the names and addresses that refer to resources on the World Wide Web. URLs appear in the web browser's address bar, where a poorly structured URL can hurt the user experience. Most popular search engines also show the URL in their results, so URLs affect click-through and visibility, and a URL that matches the user's query can help a page rank.
URL construction guidelines
Here are some tips to create a professional URL:
- Empathise: Put yourself in the searcher's mind and then look at your URL. Could a reader make a good prediction of the page's content from it? You don't have to spell out every detail, but a clear clue helps.
- Keep it short: A short URL is much easier to paste into blogs, emails and other platforms. A descriptive URL is important, but so is keeping the whole thing visible to the user.
- Use keywords: If your page targets a specific phrase, include it in the URL, making it visible to search engines as well as users. But do not overuse keywords; overstuffed URLs are less usable and can trip spam filters, hurting your site in the engines.
- Go static: The best URLs avoid numbers, symbols and parameters in favour of a readable static version. Even a single dynamic parameter in a URL can result in lower overall ranking and indexing.
- Hyphens: To separate words in a URL it is best to use hyphens, so all web applications can accurately read these separators.
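Applying these guidelines, a rewritten URL might look like this (both addresses are invented):

```
Before: http://www.example.com/index.php?id=374&cat=9&sess=ab12
After:  http://www.example.com/kitchen/electric-kettles
```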
Canonical and duplicate versions of content
Duplicate content: This is a big problem for any website. Search engines today are increasingly good at spotting and dismissing duplicate content, so that only original pages appear in the top results of users' searches.
Canonicalisation: This means organising your content so that each unique piece lives at one and only one URL. The problem arises when two or more duplicate versions of a page appear at different URLs, which means the same content can be shown multiple times in search results. This creates an issue for the engines because they do not know which version to show as the top result, so they scan the candidates and pick the one most likely to be the original.
The canonical tag: This gives the search engines an explicit answer, reducing the problem of duplicate pages on a single site by canonicalising them to a single URL. Within each page that contains the duplicate content, a canonical tag points to the master URL you want to rank, which consolidates the duplicates. In effect, the canonical URL tag tells the search engine that multiple pages should be considered as one.
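The tag itself is one line in the head of each duplicate page; a sketch with a made-up URL:

```html
<!-- Placed in the <head> of every duplicate version; points at the master URL -->
<link rel="canonical" href="http://www.example.com/kettles/">
```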
Rich snippets are a type of structured data that let webmasters enhance their search listing with a small, formatted sample of a site's content on the results pages, making it more appealing to users. They give webmasters an advantage because, for example, a listing can display a five-star rating. The engines often include structured data in search results, as in the case of user reviews and author profiles.
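As a sketch, a user review marked up with schema.org microdata (names and values invented) might look like this:

```html
<div itemscope itemtype="http://schema.org/Review">
  <span itemprop="name">Red kettle</span> reviewed by
  <span itemprop="author">Jane</span>
  <div itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
    Rating: <span itemprop="ratingValue">5</span> out of
    <span itemprop="bestRating">5</span>
  </div>
</div>
```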
Defending your site's honour
Many websites today live entirely off other sites' pages and content; you could almost call it cheating. These 'scraper' sites often perform well in search results too, sometimes ranking higher than the original sites themselves. To defend your website, make sure your content links back to your own site, since scrapers do not usually edit the content they lift. The engines will then see that the content links back to your site, signalling that yours was most likely the original. To do this, use absolute links in your internal linking structure, so that every copy still points back to your homepage.
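The difference is in the href; a sketch with a placeholder domain:

```html
<!-- Relative link: on a scraper's copy, this points at the scraper's own domain -->
<a href="/about.html">About us</a>

<!-- Absolute link: even when the page is copied, it still points back to the original site -->
<a href="http://www.example.com/about.html">About us</a>
```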
featured image: www.verticalresponse.com