== History ==
[[Webmaster]]s and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early [[World Wide Web|Web]]. Initially, all webmasters needed to do was submit the address of a page, or [[Uniform Resource Locator|URL]], to the various engines which would send a "[[Web crawler|spider]]" to "crawl" that page, extract links to other pages from it, and return information found on the page to be [[Index (search engine)|indexed]].<ref>{{cite web | url=http://www.webir.org/resources/phd/pinkerton_2000.pdf| format =PDF | title=Finding What People Want: Experiences with the WebCrawler|accessdate=2007-05-07| publisher=The Second International WWW Conference Chicago, USA, October 17–20, 1994|author=Brian Pinkerton}}</ref> The process involves a search engine spider downloading a page and storing it on the search engine's own server, where a second program, known as an [[search engine indexing|indexer]], extracts various information about the page, such as the words it contains and where these are located, as well as any weight for specific words, and all links the page contains, which are then placed into a scheduler for crawling at a later date.
Site owners started to recognize the value of having their sites highly ranked and visible in search engine results, creating an opportunity for both [[white hat]] and [[black hat]] SEO practitioners. According to industry analyst [[Danny Sullivan (technologist)|Danny Sullivan]], the phrase "search engine optimization" probably came into use in 1997.<ref>{{cite web|url=http://forums.searchenginewatch.com/showpost.php?p=2119&postcount=10|title=Who Invented the Term "Search Engine Optimization"?|author=Danny Sullivan|publisher=[[Search Engine Watch]]|date=June 14, 2004|accessdate=2007-05-14}} See [http://groups.google.com/group/alt.current-events.net-abuse.spam/browse_thread/thread/6fee2777dc17b8ab/3858bff94e56aff3?lnk=st&q=%22search+engine+optimization%22&rnum=1#3858bff94e56aff3 Google groups thread].</ref> The first documented use of the term "search engine optimization" was by [http://www.thehistoryofseo.com/seo-interviews/john-audette/ John Audette] and his company Multimedia Marketing Group, as evidenced by an archived web page from the MMG site dated August 1997.<ref>{{cite web|url=http://www.mmgco.com/campaign.html| title=Documentation of Who Invented SEO at the Internet Wayback Machine |publisher=Internet Archive Wayback Machine |archiveurl=http://web.archive.org/web/19970801004204/www.mmgco.com/campaign.html |archivedate=1997-08-01}}</ref>
Early versions of search [[algorithm]]s relied on webmaster-provided information such as the keyword [[meta tag]], or index files in engines like [[Aliweb|ALIWEB]]. Meta tags provide a guide to each page's content. Using meta data to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches.<ref>{{cite web| url=http://www.e-learningguru.com/articles/metacrap.htm|title=Metacrap: Putting the torch to seven straw-men of the meta-utopia|author=[[Cory Doctorow]]|date=August 26, 2001|publisher=e-LearningGuru|accessdate=2007-05-08 |archiveurl = http://web.archive.org/web/20070409062313/http://www.e-learningguru.com/articles/metacrap.htm |archivedate = 2007-04-09}}</ref> Web content providers also manipulated a number of attributes within the HTML source of a page in an attempt to rank well in search engines.<ref>{{cite web |url=http://www.csse.monash.edu.au/~lloyd/tilde/InterNet/Search/1998_WWW7.html|title=What is a tall poppy among web pages?|month=April | year=1998|publisher=Proc. 7th Int. World Wide Web Conference|accessdate=2007-05-08|author=Pringle, G., Allison, L., and Dowe, D.}}</ref>
By relying so much on factors such as [[keyword density]], which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their [[SERP|results pages]] showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. Since the success and popularity of a search engine is determined by its ability to produce the most relevant results for any given search, allowing manipulated or irrelevant results would drive users to other search sources. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate.
Graduate students at [[Stanford University]], [[Larry Page]] and [[Sergey Brin]], developed "Backrub", a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, [[PageRank]], is a function of the quantity and strength of [[inbound link]]s.<ref name="lgscalehyptxt">{{cite web|author=Brin, Sergey and Page, Larry|url=http://www-db.stanford.edu/~backrub/google.html|title=The Anatomy of a Large-Scale Hypertextual Web Search Engine|publisher=Proceedings of the seventh international conference on World Wide Web|year=1998|pages=107–117|accessdate=2007-05-08}}</ref> PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web, and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random surfer.
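Brin and Page's paper, cited above, defines the PageRank of a page <math>A</math> with inbound links from pages <math>T_1, \ldots, T_n</math> as

:<math>PR(A) = (1-d) + d \left( \frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)} \right)</math>

where <math>C(T_i)</math> is the number of outbound links on page <math>T_i</math> and <math>d</math> is a damping factor, set to 0.85 in the paper, representing the probability that the random surfer continues following links rather than jumping to a random page. Each page thus divides its own PageRank evenly among the pages it links to, which is why a link from a high-PageRank page is "stronger" than one from a low-PageRank page.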