Sunday, May 31, 2009

Role of Links & How They Affect SEO

Even though we have powerful search engines today to help us find information on the Web, linking from one page to another is still a powerful tool for helping your site get found. And links can group together sites that are relevant, giving you more leverage with search engines than a site without links might have.

Links are the foundation of topical communities, and as such they carry as much weight with search engine crawlers as keywords do, if not more. If you truly want your site to succeed in the search engines, a major part of your SEO strategy must focus on the importance of incoming links. The process of submitting your site to the search engines can take from a few weeks to several months. However, even a new site will be indexed rapidly if it has incoming links.

There is a fine science to creating a linking strategy, however. It’s not enough just to sprinkle a few links here and there within the pages of your site. There are different types of links that register differently with search engines and it’s even possible to get your web site completely de-listed from search results if you handle your links improperly. When you really begin to consider links and how they affect web sites, you see that links are interconnected in such a way as to be the main route by which traffic moves around the Internet. If you search for a specific term, when you click through the search engine results, you’re taken to another web page. As you navigate through that web page, you may find a link that leads you to another site, and that process continues until you’re tired of surfing the Internet and close your browser. And even if the process starts differently — with you typing a URL directly into your web browser — it still ends the same way.

You can increase incoming links rapidly by participating in forums, provided you use your URL in your signature. Google does not, however, appear to give much weight to this type of incoming link. Submitting to so-called link farms is a poor way to attempt to increase links to your site, and it is strongly discouraged by Google's Webmaster Guidelines: "Don't participate in link schemes designed to increase your site's ranking or PageRank. In particular, avoid links to web spammers or bad neighborhoods on the web as your own ranking may be affected adversely by those links."

The purpose of links, then, is to first link your web site to others that are relevant to the information included on your site. In addition, links provide a method by which traffic to your site is increased. And isn’t that the reason you’re playing the SEO game? Your desire is to increase the traffic to your site, which in turn increases the number of products that you sell, the number of sales leads you collect, or the number of appointments that you set with highly qualified clients. In short, links lead to increased profit and growth. So of course you’d want to use them on your site.

Another reason links are so important is that links into your site from other web sites serve as “votes” for the value of your site. The more links that lead to your site, the more weight a search engine crawler will give the site, which in turn equates to a better search engine ranking, especially for search engines like Google that use a quality ranking factor, like PageRank.
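The "votes" idea can be sketched with a toy calculation. The code below is a simplified power-iteration sketch of the concept only, not Google's actual PageRank formula; the three-page web and the damping value are assumptions made up for illustration:

```python
# Toy illustration of the "links as votes" idea behind PageRank.
# Simplified power iteration; NOT Google's real formula.

def toy_pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        # Every page keeps a small base score...
        new_rank = {page: (1 - damping) / len(pages) for page in pages}
        # ...and passes the rest of its rank, split evenly, to pages it links to.
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical three-page web: both A and B "vote" for C.
web = {"A": ["C"], "B": ["C"], "C": ["A"]}
ranks = toy_pagerank(web)
print(max(ranks, key=ranks.get))  # C, the page with the most incoming votes
```

Page C, with two incoming links, ends up with the highest score, while B, with none, ends up lowest; that is the "more links equals more weight" effect in miniature.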

Thursday, May 28, 2009

What Are Robots, Spiders, and Crawlers?

A robot, spider, or crawler is a piece of software run by a search engine to build a textual summary of a website’s content (a content index). It creates a text-based summary of the content and an address (URL) for each webpage. Crawlers are programmed to “crawl” from one web page to another based on the links on those pages. As a crawler makes its way around the Internet, it collects content (such as text and links) from web sites and saves it in a database that is indexed and ranked according to the search engine’s algorithm.

When a person searches, the keyword(s) they enter are compared with the available content indexes. Because of the enormous number of webpages indexed, direct text-only matching is rare; instead, search engines use sophisticated algorithms to rank potential matches. For example, the underlying information hierarchy of a webpage (its semantic markup) may be factored into the ranking the page is assigned.

As to what actually happens when a crawler begins reviewing a site, it’s a little more complicated than simply saying that it “reads” the site. The crawler sends a request to the web server where the web site resides, requesting pages to be delivered to it in the same manner that your web browser requests pages that you review. The difference between what your browser sees and what the crawler sees is that the crawler is viewing the pages in a completely text interface. No graphics or other types of media files are displayed. It’s all text, and it’s encoded in HTML. So to you it might look like gibberish.
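That text-only view can be sketched in a few lines. The code below is a minimal illustration, not any real search engine's crawler; it uses Python's standard html.parser to pull out exactly the two things described above, the text and the links, from a made-up sample page:

```python
from html.parser import HTMLParser

class TextAndLinkExtractor(HTMLParser):
    """Collects visible text and hyperlinks: roughly what a crawler indexes."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.text_parts = []
        self._skip = 0  # depth inside <script>/<style>, which hold no visible text

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
        elif tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)  # a link the crawler could follow next

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.text_parts.append(data.strip())

# A made-up page; to the crawler it is all just encoded text.
page = """<html><head><title>SEO Blog</title></head>
<body><h1>Links</h1><p>Read <a href="/seo-basics.html">the basics</a>.</p>
<script>var x = 1;</script></body></html>"""

parser = TextAndLinkExtractor()
parser.feed(page)
print(parser.links)                  # hyperlinks to crawl next
print(" ".join(parser.text_parts))   # text to index
```

Note that the script's contents never appear in the indexed text: to the crawler, only the markup and the visible words matter.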

The crawler can request as many or as few pages as it’s programmed to request at any given time. This can sometimes cause problems for web sites that aren’t prepared to serve up dozens of pages of content at a time. The requests may overload the site and cause it to crash, or slow traffic to the site considerably; it’s even possible that the requests will be fulfilled too slowly and the crawler will give up and go away.

If the crawler does go away, it will eventually return to try again, and it might try several times before it gives up entirely. But if the site doesn’t eventually begin to cooperate with the crawler, it is penalized for the failures, and its search engine ranking will fall.

Reasons a URL may not be included in the index

Below is a list of common reasons that a document may not be indexed:
  • ROBOTS.TXT ACCESS DENIED: The site's "/robots.txt" file prevents access to the document.
  • YOUR PAGE IS UNDER CONSTRUCTION. If you can avoid it, you don’t want a crawler to index your site while this is happening. If you can’t avoid it, however, be sure that any pages that are being changed or worked on are excluded from the crawler’s territory. Later, when your page is ready, you can allow the page to be indexed again.
  • PAGES OF LINKS. Having links leading to and away from your site is an essential way to ensure that crawlers find you. However, having pages of nothing but links seems suspicious to a search crawler, and it may classify your site as a spam site. Instead of having pages that are all links, break links up with descriptions and text. If that’s not possible, block the link pages from being indexed by crawlers.
  • DYNAMIC PAGES: Dynamic pages are often ignored by search engine spiders. In fact, any URL containing special symbols like a question mark (?) or an ampersand (&) will be ignored by many engines. Pages generated on the fly from a database often contain these symbols. In this situation, it's important to generate "static" versions of each page you wish to have indexed. As far as the search engines are concerned, the simpler the page, the better. Does this mean, for example, that having JavaScript to count visits to the page will prevent you from being indexed, or lower your rankings? No. It simply means that the search engine will most likely ignore the JavaScript and index the remaining areas of the page. There is evidence, though, that going too far with fancy scripts and code on a page can hurt your rankings if the bulk of your page consists of JavaScript or VBScript.
  • PAGES OF OLD CONTENT. Old content, like blog archives, doesn’t necessarily harm your search engine rankings, but it also doesn’t help them much. One worrisome issue with archives, however, is the number of times that archived content appears on your site. With a blog, for example, you may have a post appear on the page where it was originally displayed, also have it displayed in archives, and possibly have it linked from some other area of your site. Although this is all legitimate, crawlers might mistake multiple instances of the same content for spam. Instead of risking it, place your archives off limits to crawlers.
  • REDIRECTS: If your site contains redirects or meta refresh tags, they can sometimes cause the engines trouble when indexing your site. Generally an engine will index the page you are redirecting TO, but if it thinks you are trying to "trick" it by using "cloaking" or IP redirection technology that it can detect, there is a chance that it may not index the site at all.
  • PRIVATE INFORMATION. It really makes better sense not to have private information (or proprietary information) on a web site. But if there is some reason that you must have it on your site, then definitely block crawlers from access to it. Better yet, password-protect the information so that no one can stumble on it accidentally.
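Several of the situations above (pages under construction, link-only pages, archives, private areas) are handled by the same mechanism: a robots.txt file at the site root. A minimal sketch, with hypothetical paths standing in for your own:

```
User-agent: *
Disallow: /under-construction/
Disallow: /links.html
Disallow: /archives/
Disallow: /private/
```

Disallow rules keep well-behaved crawlers out of those paths. Truly private material should still be password-protected as well, since robots.txt is only a request, not an enforcement mechanism.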

Wednesday, May 20, 2009

Tagging: Social Bookmarking

First of all, what's this tagging business all about? Tagging used to refer just to the tags that you placed in your web site’s HTML to indicate certain types of formatting or commands. Tagging today often refers to something entirely different. When you hear the terms “tagged” or “tagging” in conversation today, it could very well refer to a phenomenon called social bookmarking.

From initial research, it seems that there are two main players in tagging: del.icio.us and furl.net, sites which make it possible for users to 'tag' any web page. Social bookmarking is a way for Internet users to store, share, classify, and search Internet bookmarks. There is some debate over how important social bookmarking is in SEO, but the consensus seems to be leaning toward the idea that social bookmarking, along with many other social media optimization (SMO) strategies, which I will discuss in future posts, is quickly becoming a serious consideration for SEO.

Social bookmarking is provided by services such as del.icio.us, Digg, Technorati, and Furl.net, which are taking the Internet by storm. All these sites do basically the same thing: they allow users to put a label on a webpage they have visited, so that they can easily find it again. Users have the option of making their tags public or private (where only the users themselves can see what they have previously tagged), or they can share tagged site information with other individual members. Where the tags are public, other visitors can see the tags that users have assigned to particular sites. These are often referred to as Web 2.0 services, because they involve a high level of social interaction, which is the fastest growing element of the Internet today.

In social bookmarking, people create their own topics and lists for places on the Internet that they like or dislike. Those people can then give the places they choose a category (or tag) and a rank. Once they’ve ranked a site, they have the option to send that ranking out to anyone who is subscribed to their RSS feed.

The implications this can have on SEO are dramatic. For example, let’s say that one person visits your site during a web search and finds that it’s easy to use, and contains all the information they were looking for. That person could very well tag your site. The tag is then distributed to the people who are subscribed to his or her RSS feed. It’s word-of-mouth marketing — called viral marketing in today’s world — at its best.

One person tells 25, who then visit your site. Then maybe 15 of those people (60 percent) tell another 25 people each. The list keeps growing and growing. So, the question, “Should you pay any attention to social bookmarking?” becomes “How do I take advantage of social bookmarking?” And the answer is, make your site worthy of bookmarking.

Bookmarks appear to web crawlers as links to your page, and that makes them very valuable SEO tools. For some search engines, the more bookmarks that lead back to your site, the more “votes” you have on their popularity scale.

So, visit some of the social bookmarking sites on the Internet. Learn how they work. And set up your own account. Then, create your own list of links that includes your web sites, as well as other web sites that users might find relevant or useful.

On the web-site side, be sure to include the code snippets provided by social bookmarking organizations that allow users to tag your site easily. Then, maintain it all. Don’t just forget your account completely. If you do, eventually it will disappear and all the advantage of having one will go as well. Instead, continue using social bookmarking. Over time, the rewards will be increased traffic to your web site.
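As one example of such a snippet, del.icio.us supported a simple "post" link that sites could embed. The markup below is an illustrative sketch only: the example.com address and page title are placeholders, and each bookmarking service publishes its own official snippet that you should copy instead:

```html
<!-- Illustrative sketch: placeholder URL and title.
     Use the official snippet from each bookmarking service. -->
<a href="http://del.icio.us/post?url=http://www.example.com/&amp;title=Example+Page">
  Bookmark this page on del.icio.us
</a>
```

A row of links like this, one per service, is the usual way sites of this era invited visitors to tag a page.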

How Does Site Tagging Work?

Site tagging, as you already know, is about putting the right HTML commands in the right place. The big question from our point of view, however, is whether there is an SEO benefit to be gained from tagging. On face value, the answer is... yes. The difficulties come in knowing what types of tags to use and what to include in those tags. The basic tags — title, heading, body, and meta tags — should be included in every page that you want a search engine to find. But to make these tags readable to the search engine crawlers, they should be formatted properly. For example, with container tags, you should have both an opening and a closing tag. The opening tag is enclosed in angle brackets (<tagname>). The closing tag is also bracketed, but it includes a slash before the tag name to indicate that the container is closing (</tagname>).

Notice that the tag name is repeated in both the opening and closing tags. This just tells the crawler or web browser where a specific type of formatting or attribute should begin and end. So, when you use the bold tag, only the words between the opening and closing tags are formatted with a bold-faced font, instead of the entire page being bold. There’s another element of web-site design that you should know and use: cascading style sheets (CSS). CSS is not a tagging method but a formatting method. You should use CSS so that the style sheet handles the formatting, while your HTML tags do the work needed to get your site listed naturally by a search crawler. Think of cascading style sheets as boxes, one stacked on top of another. Each box contains something different, with the most important elements in the top box, decreasing to the least important element in the bottom box. With cascading style sheets, you can set one attribute or format to override another under the right circumstances.

When you’re using an attribute from a CSS, however, it’s easy enough to incorporate it into your web page. The following is a snippet of HTML that uses a cascading style sheet to define the heading colors for a web page:
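The snippet itself was stripped out when this post was published. Below is a minimal reconstruction that matches the tag-by-tag walkthrough that follows; the headings and list items are placeholder text:

```html
<html>
<title>SEO Blog - Heading</title>
<style type="text/css">
h1, h2 { color: green }
</style>
<body>
<h1>First Heading</h1>
<p>Enter any text that you would like; a paragraph of text.</p>
<ul>
<li>List item one</li>
<li>List item two</li>
<li>List item three</li>
</ul>
<h2>First subheading</h2>
<p>Another paragraph of text can go here.</p>
</body>
</html>
```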

Looking at this bit of code more closely, you see:

  • HTML — indicates that HTML is the language used to create this web page (were this part of an entire web page).
  • TITLE — "SEO Blog - Heading" is the title of the page.
  • STYLE — the beginning of the CSS block for the page. In this case the style applies only to the headings.
  • H1, H2 { color: green } — indicates that heading levels one and two should be colored green.
  • /STYLE — the closing CSS indicator.
  • BODY — indicates the beginning of the body text.
  • H1 — "First Heading" is the first heading. In the live view of this page on the Web, it would appear green.
  • P — a paragraph of text goes here; enter any text that you would like.
  • UL — the opening tag for an unordered list.
  • LI — "List item one" is the first item in the list.
  • LI — "List item two" is the next item in the list.
  • LI — "List item three" is the last item in the list.
  • /UL — the closing tag for the unordered list.
  • H2 — "First subheading" is the first subheading. In the live view of this page on the Web, it would also appear green.
  • P — another paragraph of text can go here; add whatever you like.
  • /BODY — the closing body tag, indicating that the body text of the web page is complete.
  • /HTML — the closing HTML tag, which indicates the end of the web page.

It’s not difficult to use CSS for the stylistic aspects of your web site. It does take a little time to get used to, but once you do, it’s easy. And when you’re using CSS to control the style of your site, you don’t need HTML formatting tags, which means the tags you do use will be much more efficient.

Tuesday, May 12, 2009

What’s so important about site tagging?

In my previous posts I have mentioned the HTML tags most commonly used in SEO: title tags, heading tags, body tags, meta tags, and the alt tag. No web site should be without those tags in the HTML that makes up the site.

However, those tags aren’t the only ones that you should know. In addition, there are several others you might find useful. In fact, a basic understanding of HTML is nearly essential for achieving the best SEO possible for your web site. Sure, you can build a web site using some kind of web design software like Microsoft FrontPage or Adobe Dreamweaver. However, those programs won’t necessarily ensure that all the essential HTML tags are included in your site. It’s far better if you know enough HTML to understand where your HTML tags go, and how to put them there without trashing the design of the site.

There’s also another aspect to tagging your web site, and that’s using the right strategies to ensure the tags are as effective as possible. For example, some HTML tags are strictly for formatting (like the bold tag), but formatting a word with bold doesn’t tell the search engine that the word is important.

Using a more appropriate HTML tag (like strong) works much better. These are all elements of site tagging that you should know. And if you haven’t taken steps to ensure that your site is tagged properly, do it now. Search engine crawlers don’t read web sites or web pages. They read the text on those pages, and HTML is a form of text. With the right HTML tags, you can tell a search engine far more about your site than the content alone will tell it.
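The difference is easy to see side by side. The sentence below is a placeholder, nothing site-specific:

```html
<!-- Both render identically in a browser, but they say different things to a crawler. -->
<p>This offer ends <b>Friday</b>.</p>            <!-- visual bolding only -->
<p>This offer ends <strong>Friday</strong>.</p>  <!-- marks the word as important -->
```

The b tag is purely presentational, while strong carries meaning; as the paragraph above notes, that semantic signal is what a crawler can actually use.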

Even beyond the keywords and the PPC campaigns, site tagging is one of the most effective ways to ensure that your web site shows up on search engine results pages.

The HTML tags that you include on your web site tell search engine crawlers much more about your site than your content alone will tell them. Don’t misunderstand. Content is an essential element for web-site design. But it’s a more customer-facing portion of the design, whereas HTML is a crawler-facing portion. And before customers will see your content, crawlers must see your HTML.

So when you ask the question, “What’s so important about site tagging?” there’s only one possible answer: Everything. Your SEO ranking will depend in large part on the tagging that controls your page behind the scenes. Customers never see it, but without it, they never see you.