Improved search engine rank is attainable through good search engine optimization, part of which is maximizing your Google Page Rank through intelligent linking with other web pages. In this first of two parts on the subject of Google Page Rank, we will look at the argument for attaining high listings through a linking strategy.
Google Page Rank is a buzz term at the moment since many believe it to be more important to your search engine listing than search engine optimization. If we ignore for the moment the fact that Page Rank is, in itself, a form of SEO, then there are arguments for and against that belief.
Before we investigate these arguments, let's understand some fundamentals of search engine listings. First, most search engines list web pages, not domains (websites). What that means is that every web page in a domain has to be relevant to a specific search term if it is to be listed.
Second, a search engine customer is the person who is using that engine to seek information. It is not an advertiser or the owner of a website. It is the user seeking information. The form of words used by that customer is called a 'search term'. It becomes a 'keyword' when used by a webmaster trying to anticipate the form of words a user will employ to search for that information.
A search engine works by analyzing the semantic content of a web page and determining the relative importance of the vocabulary used, taking into account the title tag, the heading tags and the first text it detects. It will also examine text that is contextually related to what it considers to be the main 'keywords', and then rank the page according to how relevant it calculates it to be to the page's main theme.
It will then examine the number of other web pages that link to it, and regard that as a measure of how important, or relevant to the 'keyword', the page is. The value of those links is regarded as peer approval of the content. All of these factors determine how high the page is listed for search terms that are contextually similar to the content of the page.
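To make that idea concrete, here is a toy sketch in Python (not any real engine's algorithm) of how a page's words might be weighted by where they appear, with the title counting more than headings and headings more than body text; the weights and the sample HTML are invented purely for illustration.

```python
# Toy on-page relevance scorer: weight a search term by where it appears.
# The weights (5, 3, 1) are arbitrary and purely illustrative.
from html.parser import HTMLParser

class PageText(HTMLParser):
    """Collect words from the title, heading tags, and remaining body text."""
    def __init__(self):
        super().__init__()
        self.current = None
        self.title, self.headings, self.body = [], [], []

    def handle_starttag(self, tag, attrs):
        if tag in ("title", "h1", "h2", "h3"):
            self.current = tag

    def handle_endtag(self, tag):
        if tag == self.current:
            self.current = None

    def handle_data(self, data):
        words = data.lower().split()
        if self.current == "title":
            self.title += words
        elif self.current in ("h1", "h2", "h3"):
            self.headings += words
        else:
            self.body += words

def relevance(html, term):
    parser = PageText()
    parser.feed(html)
    term = term.lower()
    return (5 * parser.title.count(term)
            + 3 * parser.headings.count(term)
            + 1 * parser.body.count(term))

print(relevance("<title>voip</title><h1>voip data</h1><p>voip solutions</p>", "voip"))  # 9
```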
Without doubt, there are web pages that are listed high in the search engine indices that contain very little in the way of useful content on the keywords for which they are listed, and have virtually no contextual relevance to any search term. However, a careful investigation of these sites will reveal two things.
The first is that many such web pages are frequently listed highly only for relatively obscure search terms. If a search engine customer uses a common search term to find the information they are seeking, they will very rarely be led to a site that has little content other than links, but it is possible. The second is that they contain large numbers of links out to other web pages, and it can be assumed that they have at least an equal number of web pages linking back.
It is possible to find such web pages for many keywords. An example can be found on the first page of Google results for the keyword 'Data VOIP Solutions'. There is a website there that is comprised solely of links. The site itself has little content, but every link leads either to another website that provides useful content, or to another internal page full of more links and no content. That is how links can be used to lift a web page high in the SE listings.
Such sites frequently contain only the bare minimum of conventional search engine optimization, but the competition is so low that they gain high listings. You will also find that they contain large numbers of internal pages, every one of which contains the same internal and external links.
It is true, therefore, that it is possible to get a high listing without much content, but with a large number of links. However, is that a legitimate argument for those promoting links against content? Could you reasonably apply that strategy to your website? Could a genuine website really contain thousands of links to other internal pages and external pages on other websites, and still maintain its intended purpose?
In the second part of this article, titled 'Search Engine Rank: Google Page Rank Misconceptions' I will explode some myths about Page Rank, and explain how many people are wasting their time with reciprocal links, and perhaps even losing through them. It may be that a linking strategy is not so much an option, as a choice between the type of website that you want: to provide genuine information or to make money regardless of content.
Improved search engine rank might be synonymous with Google Page Rank, but perhaps only if you want to sacrifice the integrity of your website.
Part 2
Improved search engine rank is difficult enough to obtain without you having to trawl through all that has been written about Google Page Rank in order to find the truth. There are many misconceptions about Page Rank, and Part 2 of this article dispels the most common of them, the first being that Yahoo and MSN have their own version.
In fact this is not so. Yahoo had a beta version of a 'Web Rank' visible for a while, ranking complete websites, but it is now offline. MSN has no equivalent as far as I can ascertain. The term 'PageRank' is a trademark of Google, which is why I refer to it as Page Rank and not PageRank. A small difference, but a significant one.
If you are one of those who believe that the more links you can get to your website the better, then you are wrong. When Google started the Page Rank frenzy by putting that little green bar on their toolbar, they didn't realize the consequences of what they were doing. People fought to get as many links to their websites as possible, irrespective of the nature of the websites to which they were linking.
That is misconception Number 2. You do not link to websites, you link to web pages, or should I say, you get links back from web pages, not websites. It is, after all, the link back that counts, isn't it? The link away from your site doesn't count. Wrong! Misconception Number 3. The link to your web page counts no more than the link away from your web page. In fact, it could count less. You could lose out in the reciprocal linking stakes if your web page is worth more than the other person's.
Let's dispel that misconception right now. When you receive a link from a web page (not web site) you get a proportion of the Google Page Rank of that web page that depends on the total number of links leaving that page. When you provide a link to another web page, you give away a proportion of your Page Rank that depends on the number of other links leaving your web page.
The Page Rank of the website you get a link from is irrelevant, since that is generally the rank of the home page. You will likely find that all these great links you think you have from PR 7 or 8 websites are from a links page that has a PR of ZERO! So you get zilch for the deal. If you are providing them with a link from a page on your site with even a PR of 1, then you lose! Most people fail to understand that.
No incoming link can have a negative effect on your PR. It can have a zero effect, but not negative. However, if you have an incoming link with zero effect, and an outgoing reciprocal link with a positive effect to the target page, then you will effectively lose PR through the deal. Every web page starts with a PR of 1, and so has that single PR to share amongst other pages to which it is linked. The more incoming links it has, the higher PR it can have to share out.
If your page has a PR of 4 and has three links leaving it, each gets twice the number of PR votes it would if six links left it. Your page with a PR of 4 has to receive roughly as many PR votes as it gives away if it is to retain its PR. In simple terms, if your PR 4 page is getting links from a PR 8 page with 20 links leaving it, you lose out big time! It's simple math.
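To put rough numbers on that, here is a back-of-envelope sketch of the 'share per outgoing link' idea, ignoring Google's damping factor; the figures simply mirror the example above.

```python
# How much PR each outgoing link passes on, in the simplified model above.
def share_per_link(page_rank, outgoing_links):
    return page_rank / outgoing_links

print(share_per_link(4, 3))    # your PR 4 page with 3 links: about 1.33 per link
print(share_per_link(4, 6))    # the same page with 6 links: about 0.67 per link
print(share_per_link(8, 20))   # that PR 8 page with 20 links: only 0.4 per link
```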
No page ever gives away all of its PR. There is a factor in Google's calculation that reduces this to below 100% of the total PR of any page. However, that is roughly how it works. You don't get a proportion of the whole website ranking; you only get part of the ranking of the page on which your link is placed. Since most 'Links Pages' tend to be full of other outgoing links, then you won't get much, and will likely get zero.
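For reference, the formula published in the original PageRank paper expresses exactly this damping behavior, where d is the damping factor (commonly quoted as 0.85), T1 to Tn are the pages linking to page A, and C(T) is the number of links leaving page T:

```
PR(A) = (1 - d) + d * ( PR(T1)/C(T1) + ... + PR(Tn)/C(Tn) )
```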
That is why automated reciprocal linking software is often a waste of time. If you want to make the best of linking arrangements, then agree with the other webmaster that you will provide each other with a link from equally ranked pages. That way both of you will gain, and neither loses. Some software allows you to make these arrangements.
Another misconception is that only links from external web pages count. In fact, links between your own web pages can be arranged to provide one page with most of the Page Rank available. Every page has a starting PR of 1, so the more pages you have on your site, the more PR you have to play with and distribute to the pages of your choice.
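As a rough illustration of that internal arrangement, here is a toy calculation using the simplified 'every page starts with a PR of 1 and splits it among its links' model described above; the site structure is invented, and a real calculation would also apply the damping factor.

```python
# Ten hypothetical article pages that each link only to the home page,
# while the home page links back out to all ten of them.
pages = {f"article-{i}": ["home"] for i in range(1, 11)}
pages["home"] = [f"article-{i}" for i in range(1, 11)]

collected = {name: 0.0 for name in pages}
for name, links in pages.items():
    for target in links:
        collected[target] += 1.0 / len(links)   # each page shares its PR of 1

print(collected["home"])        # 10.0 - the home page soaks up the internal PR
print(collected["article-1"])   # 0.1  - each article gets only a small slice back
```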
Search engine rank can be improved by intelligent use of links, both external and internal, but Google Page Rank does not have the profound effect on your search engine listing that many have led you to believe. Good onsite SEO usually wins, so keep that in mind when designing your website.
Source : http://www.rk-web.net/weblog/index.php?itemid=478
Thursday, August 2, 2007
If Harry Potter Did SEO
SEO has grown to be a bit like one of Harry Potter's adventures. It's a narrative rooted in reason, passion, magic, and a bit of humor. Though traditional wizards and witches are replaced with Web wizards, widgets, and wands, conjuring and concoctions with keystrokes and technological catenations, Harry Potter's fictional world and SEO are not entirely dissimilar.
SEO tactics, much like the travails in the Harry Potter saga, have grown more complex over time. There's a guise of secrecy imbued in constructing strategic SEO strategies, too, particularly for beginners just learning the tricks of the trade and the vast online entities that attempt to conceal their efforts. And just like Lord Voldemort's soul, SEO has been severed into several distinct bits and pieces over the years.
Spiders Rule
Once upon a time, SEO was primarily achieved by understanding machine reading algorithms and initiating automatic submittals. Web wizards first began optimizing sites for search engines in the mid '90s, as the first engines were crawling and indexing information found in the earliest Web content.
Initially, all a Web wizard needed to do was submit a Web page to the various engines. The engines would respond by sending a spider to crawl the page. By further extracting links to other pages, spiders would crawl the entire site. The information on the page would be indexed for further referencing and referrals (except in Yahoo, where contextual algorithmic judgments were actually completed by Muggles).
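The submit-and-crawl cycle described above can be sketched in a few lines of Python; the start URL is purely illustrative, and real spiders are of course far more sophisticated (and better behaved about robots.txt).

```python
# A minimal crawler sketch: fetch a page, extract its links, and follow
# those that stay on the same site, up to a small page limit.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def crawl(start_url, max_pages=10):
    domain = urlparse(start_url).netloc
    queue, seen = [start_url], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url).read().decode("utf-8", errors="ignore")
        except OSError:
            continue                                  # skip pages that fail to load
        extractor = LinkExtractor()
        extractor.feed(html)
        for href in extractor.links:
            absolute = urljoin(url, href)
            if urlparse(absolute).netloc == domain:   # stay on the same site
                queue.append(absolute)
    return seen

print(crawl("https://example.com/"))
```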
Search engine spiders -- nearly as prodigious as Aragog's kin -- would read the text on a page and follow mathematical machinations to determine where the information would be positioned in the SERPs. Well-schooled Web wizards noticed how search engines responded to specific words found on their site pages. This was truly a time when content was king and reigned alone over an expanding realm of SERPs.
The Flaw in the Plan
Unfortunately, this same period could also be called the Golden Age of Spam, as it was possible to gain high SERP rankings by repeating keywords or keyword phrases multiple times, as well as with misspelled words, anything with "playboy" in it, general gibberish, and a spattering of gobbledegook, the official language of goblins.
Furthermore, early search engines relied too heavily on keyword metatags as an insightful guide to each page's content. By relying so heavily on factors exclusively within a Web wizard's control, early search engines suffered from abuse and ranking manipulation. Consequently, user experience suffered, too.
These events set Web wizards against the early search engine spiders. A battle ensued, not quite as great as the one between Dumbledore and Grindelwald, but it was a battle for relevancy all the same.
Spells and Charms
To provide more relevant results to their users, search engines had to adapt. Rather than serve unrelated pages or malware and pornography disguised as harmless bits of fluff, search engines responded by developing more complex ranking algorithms, taking into account additional factors beyond a Web page that were more difficult for Web wizards to contrive.
Prior to developing Google as graduate students at reality's Hogwarts, Stanford University, Larry Page and Sergey Brin developed a search engine that relied on a mathematical algorithm and citation analysis to rate Web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and contextual strength of inbound links.
To this day, PageRank estimates the likelihood a given page will be reached by a search engine user who randomly surfs the Web and follows links from one page to another. In effect, this means some links are stronger than others, just as some spells and charms are stronger than others.
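That random-surfer idea can be turned into a small iterative calculation. The sketch below follows the published formula (a damping factor of 0.85, every page seeded with a rank of 1) on an invented three-page link graph; it illustrates the principle, not Google's actual implementation.

```python
# Iterative PageRank over a toy link graph, using the published formula
# PR(A) = (1 - d) + d * sum(PR(T)/C(T)) over the pages T linking to A.
def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pr = {page: 1.0 for page in links}            # every page starts at PR 1
    for _ in range(iterations):
        new_pr = {page: 1 - d for page in links}
        for page, targets in links.items():
            if targets:
                share = pr[page] / len(targets)   # PR split among outgoing links
                for target in targets:
                    new_pr[target] += d * share
        pr = new_pr
    return pr

graph = {"home": ["about", "links"], "about": ["home"], "links": ["home"]}
print(pagerank(graph))   # 'home' collects the largest share
```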
As more sites reference other sites by way of inbound links, PageRank and search engine positioning are heightened. PageRank has charmed and intrigued the average search engine user for many years now, so naturally Yahoo and MSN Live Search have attempted to emulate Google's spell. And link-building, link-baiting, and social media marketing are now employed as critical SEO tactics. To inhibit the impact of darkly influenced link schemers, search engines consider a diverse array of undisclosed factors in their ranking algorithms.
Happy Endings or Disenchantment?
SEO wizards who employ overtly aggressive tactics can get their clients' Web sites banned from search results. This is highly disenchanting for potential clients and those pursuing the greater SEO good. Yet some search engines continue to reach out to the SEO industry and are frequent sponsors and guests at SEO conferences and seminars, where the dark arts mingle with the white.
In fact, with the advent of paid inclusion and PPC revenues, some search engines now have a vested interest in the optimization community's overall health. Toward this end, the major search engines now provide helpful information and general guidelines that even Squibs can't ignore. While it's not as binding as the International Statute of Secrecy, SEO has come a long way from its humble, darker days to provide happy endings for search engines and search engine users alike.