
Archive for the ‘The Beginner’s Guide to SEO’ Category

The Beginner’s Guide to SEO: Chapter 3 – Why Search Engine Marketing is Necessary

Search Engine Optimization is the process of taking a page built by humans and making it easily consumable both for other humans and for search engine robots. This section details some of the compromises you will need to make in order to satisfy these two very important kinds of users.

One of the most common issues we hear from folks on both the business and technology sides of a company goes something like this:

“No smart engineer would ever build a search engine that requires websites to follow certain rules or principles in order to be ranked or indexed. Anyone with half a brain would want a system that can crawl through any architecture, parse any amount of complex or imperfect code and still find a way to return the best and most relevant results, not the ones that have been “optimized” by unlicensed search marketing experts.”

Sounds Brutal…

Initially, this argument can seem like a tough obstacle to overcome, but the more you’re able to explain the details and examine the inner workings of the engines, the less powerful it becomes.

3 Limitations of Search Engine Technology

The major search engines all operate on the same principles, as explained in Chapter 1. Automated search bots crawl the web, following links and indexing content in massive databases. But, modern search technology is not all-powerful. There are technical limitations of all kinds that can cause immense problems in both inclusion and rankings. We’ve enumerated some of the most common of these below:

1. SPIDERING AND INDEXING PROBLEMS

  • Search engines cannot fill out online forms, and thus any content contained behind them will remain hidden.
  • Poor link structures can lead search engines to miss some of a website’s content entirely, or to spider it but leave it so minimally exposed that it is deemed “unimportant” by the engines’ index.
  • Web pages that use Flash, frames, Java applets, plug-in content, audio files & video have content that search engines cannot access.

Interpreting Non-Text Content

  • Text that is not in HTML format in the parse-able code of a web page is inherently invisible to search engines.
  • This can include text in Flash files, images, photos, video, audio & plug-in content.
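To make the “invisible text” point concrete, here is a minimal sketch (in Python, using an invented page) of what a text-only crawler actually receives: a parser sees text nodes and link hrefs in the HTML, but words baked into an image, Flash file or plug-in simply never reach it.

```python
from html.parser import HTMLParser

# Minimal sketch of what a text-only crawler can "see" in a page.
# Text inside images, Flash or plug-ins never appears as an HTML text
# node, so a parser like this simply never encounters it.
class CrawlerView(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text = []   # visible, indexable text
        self.links = []  # followable URLs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

page = """<html><body>
<h1>Fridge Repair Tips</h1>
<a href="/contact">Contact us</a>
<img src="logo.png" alt="">
</body></html>"""

view = CrawlerView()
view.feed(page)
print(view.text)   # ['Fridge Repair Tips', 'Contact us']
print(view.links)  # ['/contact']
```

Note that the company name inside `logo.png` contributes nothing to `view.text` – exactly the problem the bullets above describe.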

2. CONTENT TO QUERY MATCHING

  • Text that is not written in the terms users actually type into search engines. For example, writing about refrigerators when people actually search for “fridges”. We once had a client who used the phrase “Climate Connections” to refer to global warming.
  • Language and internationalization subtleties. For example, color vs colour. When in doubt, check what people are searching for and use exact matches in your content.
  • Language. For example, writing content in Polish when the majority of the people who would visit your website are from Japan.

3. THE “TREE FALLS IN THE FOREST EFFECT”

This is perhaps the most important concept to grasp about the functionality of search engines & the importance of search marketers. Even when the technical details of search-engine-friendly web development are correct, content can remain virtually invisible to search engines. This is due to the inherent nature of modern search technology, which relies on the aforementioned metrics of relevance and importance to display results.

The “tree falls in a forest” adage postulates that if no one is around to hear the sound, it may not exist at all – and this translates perfectly to search engines and web content. The major engines have no inherent gauge of quality or notability and no potential way to discover and make visible fantastic pieces of writing, art or multimedia on the web. Only humans have this power – to discover, react, comment and (most important for search engines) link. Thus, it is only natural that great content cannot simply be created – it must be marketed. Search engines already do a great job of promoting high quality content on popular websites or on individual web pages that have become popular, but they cannot generate this popularity – this is a task that demands talented Internet marketers.

THE COMPETITIVE NATURE OF SEARCH ENGINES

Take a look at any search results page and you’ll find the answer to why search marketing, as a practice, has a long, healthy life ahead.

[Screenshots: search results pages from Google, Yahoo! and Bing]

10 positions, ordered by rank, with click-through traffic based on their relative position & ability to attract searchers. The fact that so much traffic goes to so few listings for any given search means that there will always be a financial incentive for search engine rankings. No matter what variables may make up the algorithms of the future, websites and businesses will contend with one another for this traffic and the branding, marketing & sales goals it provides.

A CONSTANTLY SHIFTING LANDSCAPE

When search marketing began in the mid-1990s, manual submission, the meta keywords tag and keyword stuffing were all regular parts of the tactics necessary to rank well. In 2004, link bombing with anchor text, buying hordes of links from automated blog comment spam injectors and the construction of inter-linking farms of websites could all be leveraged for traffic. In 2010, social media marketing and vertical search inclusion are mainstream methods for conducting search engine optimization.

The future may be uncertain, but in the world of search, change is a constant. For this reason, along with all the many others listed above, search marketing will remain a steadfast need in the diet of those who wish to remain competitive on the web. Others have mounted an effective defense of search engine optimization in the past, but as we see it, there’s no need for a defense other than simple logic – websites and pages compete for attention and placement in the search engines, and those with the best knowledge and experience with these rankings will receive the benefits of increased traffic and visibility.

Next…Chapter 4 – The Basics of Search Engine Friendly Design & Development

Written by Brent C. Johns of Indian Creek Web Design – www.indiancreekwebdesign.com – 208.703.2392


The Beginner’s Guide to SEO: Chapter 2 – How People Interact with Search Engines

One of the most important elements to building an online marketing strategy around SEO and search rankings is feeling empathy for your audience. Once you grasp how the average searcher, and more specifically, your target market, uses search, you can more effectively reach and keep those users.

When the search process results in the satisfactory completion of a task, a positive experience is created, both with the search engine and the site providing the information or result. Since the inception of web search, the activity has grown to heights of great popularity, such that in December of 2005, the Pew Internet & American Life Project (PDF study in conjunction with comScore) found that 90% of online men and 91% of online women used search engines. Of these, 42% of the men and 39% of the women reported using search engines every day, and more than 85% of both groups said they “found the information they were looking for.”

Search engine usage has evolved over the years but the primary principles of conducting a search remain largely unchanged. Listed here are the steps that comprise most search processes:

  1. Experience the need for an answer, solution or piece of information
  2. Formulate that need in a string of words and phrases, also known as “the query.”
  3. Execute the query at a search engine.
  4. Browse through the results for a match.
  5. Click on a result.
  6. Scan for a solution, or a link to that solution.
  7. If unsatisfied, return to the search results and browse for another link or…
  8. Perform a new search with refinements to the query.

A Broad Picture with Fascinating Data

When looking at the broad picture of search engine usage, fascinating data is available from a multitude of sources. We’ve extracted those that are recent, relevant, and valuable, not only for understanding how users search, but in presenting a compelling argument about the power of search (which we suspect many readers of this guide may need to do for their managers):

An April 2010 study by comScore found:

  • Google Sites led the U.S. core search market in April with 64.4 percent of the searches conducted, followed by Yahoo! Sites (up 0.8 percentage points to 17.7 percent), and Microsoft Sites (up 0.1 percentage points to 11.8 percent).
  • Americans conducted 15.5 billion searches in April, up slightly from March. Google Sites accounted for 10 billion searches, followed by Yahoo! Sites (2.8 billion), Microsoft Sites (1.8 billion), Ask Network (574 million) and AOL LLC (371 million).
  • In the April analysis of the top properties where search activity is observed, Google Sites led the search market with 14.0 billion search queries, followed by Yahoo! Sites with 2.8 billion queries and Microsoft Sites with 1.9 billion. Amazon Sites experienced sizeable growth during the month with an 8-percent increase to 245 million searches, rounding off the top 10 ranking.

A July 2009 Forrester report remarked:

  • Interactive marketing will near $55 billion in 2014.
  • This spend will represent 21% of all marketing budgets.

Webvisible & Nielsen produced a 2007 report on local search that noted:

  • 74% of respondents used search engines to find local business information vs. 65% who turned to print yellow pages, 50% who used Internet yellow pages, and 44% who used traditional newspapers.
  • 86% surveyed said they have used the Internet to find a local business, a rise from the 70% figure reported the year before (2006).
  • 80% reported researching a product or service online, then making that purchase offline from a local business.

An August 2008 PEW Internet Study revealed:

  • The percentage of Internet users who use search engines on a typical day has been steadily rising from about one-third of all users in 2002, to a new high of just under one-half (49 percent).
  • With this increase, the number of those using a search engine on a typical day is pulling ever closer to the 60 percent of Internet users who use e-mail, arguably the Internet’s all-time killer app, on a typical day.

An EightFoldLogic (formerly Enquisite) report from 2009 on click-through traffic in the US showed:

  • Google sends 78.43% of traffic.
  • Yahoo! sends 9.73% of traffic.
  • Bing sends 7.86% of traffic.

A Yahoo! study from 2007 showed:

  • Online advertising drives in-store sales at a 6:1 ratio to online sales.
  • Consumers in the study spent $16 offline (in stores) to every $1 spent online.

A study on data leaked from AOL’s search query logs reveals:

  • The first ranking position in the search results receives 42.25% of all click-through traffic
  • The second position receives 11.94%, the third 8.47%, the fourth 6.05%, and all others are under 5%
  • The first ten results received 89.71% of all click-through traffic, the next 10 results (normally listed on the second page of results) received 4.37%, the third page – 2.42%, and the fifth – 1.07%. All other pages of results received less than 1% of total search traffic clicks.
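A quick back-of-the-envelope calculation shows why these percentages matter so much to marketers. The query volume below is invented; only the click-through percentages come from the AOL figures above.

```python
# Rough arithmetic on the leaked AOL click-through figures quoted above.
ctr_by_position = {1: 42.25, 2: 11.94, 3: 8.47, 4: 6.05}  # percent of clicks

searches = 10_000  # hypothetical monthly searches for a single query
for pos, pct in ctr_by_position.items():
    clicks = searches * pct / 100
    print(f"Position {pos}: ~{clicks:.0f} clicks")

# Moving from position 2 to position 1 multiplies traffic roughly 3.5x:
print(round(42.25 / 11.94, 1))  # → 3.5
```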

All of this impressive research data leads us to some important conclusions about web search and marketing through search engines. In particular, we’re able to make the following assumptions with relative surety:

  • Search is very, very popular. It reaches nearly every online American, and billions of people around the world.
  • Being listed in the first few results is critical to visibility.
  • Being listed at the top of the results not only provides the greatest amount of traffic, but instills trust in consumers as to the worthiness and relative importance of the company/website.
  • An incredible amount of offline economic activity is driven by searches on the web.


The Beginner’s Guide to SEO: Chapter 1 – How Search Engines Work

Search engines have four functions – crawling, building an index, calculating relevancy & rankings, and serving results. These group into two broad jobs:

  • Crawling and Indexing – crawling and indexing the billions of documents, pages, files, news items, videos and media on the world wide web.
  • Providing Answers – providing answers to user queries, most frequently through lists of relevant pages produced by retrieval and ranking.

Imagine the World Wide Web as a network of stops in a big city subway system.

    Each stop is its own unique document (usually a web page, but sometimes a PDF, JPG or other file). The search engines need a way to “crawl” the entire city and find all the stops along the way, so they use the best path available – links.

    “The link structure of the web serves to bind together all of the pages in existence.”

    (Or, at least, all those that the engines can access.) Through links, search engines’ automated robots, called “crawlers,” or “spiders” can reach the many billions of interconnected documents.

    Once the engines find these pages, their next job is to parse the code from them and store selected pieces of the pages in massive hard drives, to be recalled when needed in a query. To accomplish the monumental task of holding billions of pages that can be accessed in a fraction of a second, the search engines have constructed massive datacenters in cities all over the world.
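The crawl-and-store cycle described above can be sketched as a toy. The “web” below is an invented in-memory link graph rather than real pages fetched over HTTP, but the breadth-first link following and the inverted index are the same ideas.

```python
from collections import deque

# Toy model of the crawl-and-index cycle. The "web" is an invented
# in-memory link graph, not real HTTP fetching.
web = {
    "home":   {"links": ["about", "blog"], "text": "fridge repair service"},
    "about":  {"links": ["home"],          "text": "about our repair team"},
    "blog":   {"links": ["post1"],         "text": "fridge maintenance tips"},
    "post1":  {"links": [],                "text": "cleaning condenser coils"},
    "orphan": {"links": [],                "text": "never linked, never found"},
}

def crawl(seed):
    """Follow links breadth-first from a seed page, building an inverted index."""
    index, seen, queue = {}, {seed}, deque([seed])
    while queue:
        url = queue.popleft()
        for word in web[url]["text"].split():
            index.setdefault(word, set()).add(url)  # word -> pages containing it
        for link in web[url]["links"]:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index

index = crawl("home")
print(sorted(index["fridge"]))  # → ['blog', 'home']
# "orphan" has no inbound links, so the crawler never discovers it:
print("orphan" in {p for pages in index.values() for p in pages})  # → False
```

The unreachable “orphan” page illustrates the poor-link-structure problem from Chapter 3: content no link points to never enters the index at all.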

    These monstrous storage facilities hold thousands of machines processing unimaginably large quantities of information. After all, when a person performs a search at any of the major engines, they demand results instantaneously – even a 3 or 4 second delay can cause dissatisfaction, so the engines work hard to provide answers as fast as possible.

    When a person searches for something online, it requires the search engines to scour their corpus of billions of documents and do two things – first, return only those results that are relevant or useful to the searcher’s query, and second, rank those results in order of perceived value (or importance). It is both “relevance” and “importance” that the process of search engine optimization is meant to influence.
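As a rough illustration of how relevance and importance might combine – the pages, popularity scores and 50/50 weighting below are all invented, not any engine’s actual formula:

```python
# Hypothetical two-signal ranker: blend relevance to the query (naive
# term frequency) with importance (an invented popularity score).
pages = {
    "bigbrand.com/fridges":  {"text": "fridges fridges on sale", "popularity": 0.9},
    "smallblog.net/fridges": {"text": "fridges review guide",    "popularity": 0.2},
    "spam.biz":              {"text": "fridges " * 50,           "popularity": 0.01},
}

def score(page, query):
    words = pages[page]["text"].split()
    relevance = words.count(query) / len(words)   # fraction of words matching
    importance = pages[page]["popularity"]
    return 0.5 * relevance + 0.5 * importance     # invented 50/50 blend

ranked = sorted(pages, key=lambda p: score(p, "fridges"), reverse=True)
print(ranked[0])  # → 'bigbrand.com/fridges'
```

Even with a perfect term-frequency score, the keyword-stuffed page cannot beat a relevant page that is also popular – which is roughly why the engines moved beyond pure keyword matching.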

    To the search engines, relevance means more than simply having a page with the words you searched for prominently displayed. In the early days of the web, search engines didn’t go much further than this simplistic step, and found that their results suffered as a consequence. Thus, through iterative evolution, smart engineers at the various engines devised better ways to find valuable results that searchers would appreciate and enjoy. Today, hundreds of factors influence relevance, many of which we’ll discuss throughout this guide.

    Importance is an equally tough concept to quantify, but search engines must do their best.

    Currently, the major engines typically interpret importance as popularity – the more popular a site, page or document, the more valuable the information contained therein must be. This assumption has proven fairly successful in practice, as the engines have continued to increase users’ satisfaction by using metrics that interpret popularity.

    Popularity and relevance aren’t determined manually (and thank goodness, because those trillions of man-hours would require earth’s entire population as a workforce). Instead, the engines craft careful, mathematical equations – algorithms – to sort the wheat from the chaff and to then rank the wheat in order of tastiness (or however it is that farmers determine wheat’s value). These algorithms are often composed of hundreds of components. In the search marketing field, we often refer to them as “ranking factors.” For those who are particularly interested, Indian Creek Web Design crafted a resource specifically on this subject – Search Engine Ranking Factors.

    So How Do I Get Some Success Rolling In? How Search Marketers Study & Learn How to Succeed in the Engines

    The complicated algorithms of search engines may appear at first glance to be impenetrable, and the engines themselves provide little insight into how to achieve better results or garner more traffic. What little information on optimization and best practices that the engines themselves do provide is listed below:

    SEO Information from Yahoo! – Webmaster Guidelines

    Many factors influence whether a particular web site appears in Web Search results and where it falls in the ranking.

    These factors can include:

    • The number of other sites linking to it
    • The content of the pages
    • The updates made to indices
    • The testing of new product versions
    • The discovery of additional sites
    • Changes to the search algorithm – and other factors

    SEO Information from Bing – Webmaster Guidelines

    Bing engineers at Microsoft recommend the following to get better rankings in their search engine:

    • In the visible page text, include words users might choose as search query terms to find the information on your site.
    • Limit all pages to a reasonable size. We recommend one topic per page. An HTML page with no pictures should be under 150 kb.
    • Make sure that each page is accessible by at least one static text link.
    • Don’t put the text that you want indexed inside images. For example, if you want your company name or address to be indexed, make sure it is not displayed inside a company logo.

    SEO Information from Google – Webmaster Guidelines

    Googlers recommend the following to get better rankings in their search engine:

    • Make pages primarily for users, not for search engines. Don’t deceive your users or present different content to search engines than you display to users, which is commonly referred to as cloaking.
    • Make a site with a clear hierarchy and text links. Every page should be reachable from at least one static text link.
    • Create a useful, information-rich site, and write pages that clearly and accurately describe your content. Make sure that your <title> elements and ALT attributes are descriptive and accurate.
    • Keep the links on a given page to a reasonable number (fewer than 100).

    Over the 12-plus years that web search has existed, search marketers have found methodologies to extract information about how the search engines rank pages and use that data to help their sites and their clients achieve better positioning.

    So what you’re telling me is that this is just the tip of the search marketing iceberg and there’s a ton more? YES!

    Surprisingly, the engines do support many of these efforts, though the public visibility is frequently low. Conferences on search marketing, such as the Search Marketing Expo, WebMasterWorld, Search Engine Strategies, & Indian Creek Web Design’s SEO Training Seminars attract engineers and representatives from all of the major engines. Search representatives also assist webmasters by occasionally participating online in blogs, forums & groups.

    Time for an Experiment!

    There is perhaps no greater tool available to webmasters researching the activities of the engines than the freedom to use the search engines to perform experiments, test theories and form opinions. It is through this iterative, sometimes painstaking process, that a considerable amount of knowledge about the functions of the engines has been gleaned.

    1. Register a new website with nonsense keywords (e.g. ishkabibbell.com)
    2. Create multiple pages on that website, all targeting a similarly ludicrous term (e.g. yoogewgally)
    3. Test the use of different placement of text, formatting, use of keywords, link structures, etc by making the pages as uniform as possible with only a singular difference
    4. Point links at the domain from indexed, well-spidered pages on other domains
    5. Record the search engines’ activities and the rankings of the pages
    6. Make small alterations to the identically targeting pages to determine what factors might push a result up or down against its peers
    7. Record any results that appear to be effective and re-test on other domains or with other terms – if several tests consistently return the same results, chances are you’ve discovered a pattern that is used by the search engines.

     

    An Example Test We Whipped Up

    In this test, we started with the hypothesis that a link higher up in a page’s code would carry more weight than a link lower down in the code. We tested this by creating a nonsense domain linking out to three pages, all carrying the same nonsense word exactly once. After the engines spidered the pages, we found that the page linked to from the highest link on the home page ranked first, and continued our iterations of testing.

    This process is certainly not alone in helping to educate search marketers.

    Competitive intelligence about signals the engines might use and how they might order results is also available through patent applications made by the major engines to the United States Patent Office. Perhaps the most famous among these is the system that spawned Google’s genesis in the Stanford dormitories during the late 1990s – PageRank – documented as Patent #6285999 – Method for node ranking in a linked database. The original paper on the subject – Anatomy of a Large-Scale Hypertextual Web Search Engine – has also been the subject of considerable study and edification. To those whose comfort level with complex mathematics falls short, never fear. Although the actual equations can be academically interesting, complete understanding evades many of the most talented and successful search marketers – remedial calculus isn’t required to practice search engine optimization.
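For the curious, the core idea behind PageRank fits in a few lines. This is a textbook power-iteration sketch over an invented three-page graph, not Google’s production algorithm; the 0.85 damping factor is the commonly quoted value from the original paper.

```python
# Textbook power-iteration PageRank over an invented three-page graph.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}  # start with uniform rank
    for _ in range(iterations):
        # every page keeps a baseline share, plus rank flowing in via links
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new[target] += damping * rank[page] / len(outlinks)
        rank = new
    return rank

ranks = pagerank(links)
print({p: round(r, 3) for p, r in sorted(ranks.items())})
# "c" receives links from both "a" and "b", so it ranks highest
```

The intuition matches the “popularity as importance” discussion earlier in this chapter: rank flows along links, so pages that many (and well-ranked) pages link to accumulate the most.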

    Through methods like patent analysis, experiments, and live testing and tweaking, search marketers as a community have come to understand many of the basic operations of search engines and the critical components of creating websites and pages that garner high rankings and significant traffic.

    The rest of this guide is devoted to explaining these practices clearly and concisely. Enjoy!
