Search engine optimization
search engine's "natural" or un-paid ("organic") search results.[jargon] In general, the earlier (or higher ranked onSearch engine optimization (SEO) is the process of affecting the visibility of a website or a web page in a
the search results
page), and more frequently a site appears in the search results list, the more
visitors it will
receive from the search
engine's users. SEO may target different kinds of search, including image search, local
search, video search, academic search,[1] news search and
industry-specific vertical searchengines.
As an Internet marketing strategy, SEO
considers how search engines work, what people search for, the actual
search terms or keywords
typed into search engines and which search engines are preferred by their
targeted
audience. Optimizing a
website may involve editing its content, HTML and associated coding
to both increase
its relevance to
specific keywords and to remove barriers to the indexing activities of search engines. Promoting
a site to increase the number of backlinks, or inbound links, is another SEO tactic.
The plural of the abbreviation SEO can also refer
to "search engine optimizers," those who provide SEO services.
Webmasters and content
providers began optimizing sites for search engines in the mid-1990s, as
the first search
engines were cataloging the early Web. Initially, all webmasters needed to do was to submit the address of a page,
or URL, to the various engines which would send a "spider" to "crawl" that page, extract links to other pages from
it, and return information found on the page to be indexed. The process involves a search engine spider downloading
a page and storing it on the search engine's
own server, where a second program, known as an indexer, extracts various
information
about the page, such as the words it
contains and where these are located, as well as any weight for specific words,
and all links the
page contains, which are then placed
into a scheduler for crawling at a later date.
History
Site owners started to
recognize the value of having their sites highly ranked and visible in search
engine results, creating an opportunity for both white hat and black hat SEO practitioners.
According to industry analyst Danny Sullivan, the phrase "search
engine optimization" probably came into use in 1997.[3] The first documented
use of the term "search engine optimization" was by John Audette and his company
Multimedia Marketing Group, as documented by a web page from the MMG site from
August 1997.[4]
Early versions of
search algorithms relied on
webmaster-provided information such as the keyword meta tag, or index files in engines
like ALIWEB. Meta tags provide a guide
to each page's content. Using meta data to index pages was found to be less
than reliable, however, because the webmaster's choice of keywords in the meta
tag could potentially be an inaccurate representation of the site's actual
content. Inaccurate, incomplete, and inconsistent data in meta tags could and
did cause pages to rank for irrelevant searches.[5] Web content providers
also manipulated a number of attributes within the HTML source of a page in an
attempt to rank well in search engines.[6]
By relying so much on
factors such as keyword density which were exclusively within a
webmaster's control, early search engines suffered from abuse and ranking
manipulation. To provide better results to their users, search engines had to
adapt to ensure their results pages showed the most
relevant search results, rather than unrelated pages stuffed with numerous keywords
by unscrupulous webmasters. Since the success and popularity of a search engine
is determined by its ability to produce the most relevant results to any given
search, poor quality or irrelevant results could lead users to find other
search sources. Search engines responded by developing more complex ranking
algorithms, taking into account additional factors that were more difficult for
webmasters to manipulate. Graduate students at Stanford University, Larry Page and Sergey Brin, developed
"Backrub," a search engine that relied on a mathematical algorithm to
rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the
quantity and strength of inbound links.[7] PageRank estimates
the likelihood that a given page will be reached by a web user who randomly
surfs the web, and follows links from one page to another. In effect, this
means that some links are stronger than others, as a higher PageRank page is
more likely to be reached by the random surfer.
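The random-surfer idea can be sketched in a few lines of code. The following Python snippet is an illustrative power-iteration computation of PageRank over a small assumed link graph; the pages, links, damping factor and function name are invented for the example and do not reflect Google's actual implementation.

    # Minimal, illustrative PageRank via power iteration (not Google's actual code).
    # Assumed toy graph: each key links to the pages in its list.
    links = {
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }

    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}          # start with a uniform distribution
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if not outlinks:                     # dangling page: spread rank evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
                else:
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:          # each outbound link passes on a share
                        new_rank[target] += share
            rank = new_rank
        return rank

    print(pagerank(links))  # pages with more (and stronger) inbound links score higher

With a damping factor of 0.85, the hypothetical surfer follows a link about 85% of the time and jumps to a random page otherwise, which is what the (1 - damping) / n term models.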
Page and Brin founded Google in 1998. Google
attracted a loyal following among the growing number of Internet users, who
liked its simple design.[8] Off-page factors
(such as PageRank and hyperlink analysis) were considered as well as on-page
factors (such as keyword frequency, meta tags, headings, links and site
structure) to enable Google to avoid the kind of manipulation seen in search
engines that only considered on-page factors for their rankings. Although
PageRank was more difficult to game, webmasters had already developed link
building tools and schemes to influence the Inktomi search engine, and
these methods proved similarly applicable to gaming PageRank. Many sites
focused on exchanging, buying, and selling links, often on a massive scale.
Some of these schemes, or link farms, involved the creation of
thousands of sites for the sole purpose of link spamming.[9]
By 2004, search engines had
incorporated a wide range of undisclosed factors in their ranking algorithms to
reduce the impact of link manipulation. In June 2007, The New York Times' Saul
Hansell stated Google ranks sites using more than 200 different signals.[10] The leading search
engines, Google, Bing, and Yahoo, do not disclose the
algorithms they use to rank pages. Some SEO practitioners have studied
different approaches to search engine optimization, and have shared their
personal opinions.[11] Patents related to
search engines can provide information to better understand search engines.[12]
In 2005, Google began
personalizing search results for each user. Depending on their history of
previous searches, Google crafted results for logged in users.[13] In 2008, Bruce Clay
said that "ranking is dead" because of personalized search. He opined that it would
become meaningless to discuss how a website ranked, because its rank would
potentially be different for each user and each search.[14]
In 2007, Google announced a
campaign against paid links that transfer PageRank.[15] On June 15, 2009,
Google disclosed that they had taken measures to mitigate the effects of
PageRank sculpting by use of the nofollow attribute on
links. Matt
Cutts, a
well-known software engineer at Google, announced that Google Bot would no
longer treat nofollowed links in the same way, in order to prevent SEO service
providers from using nofollow for PageRank sculpting.[16] As a result of this
change, the usage of nofollow leads to the evaporation of PageRank. To avoid
this, SEO engineers developed alternative techniques that replace
nofollowed tags with obfuscated JavaScript and thus permit
PageRank sculpting. Additionally, several solutions have been suggested that
include the usage of iframes, Flash and JavaScript.[17]
In December 2009, Google
announced it would be using the web search history of all its users in order to
populate search results.[18]
Google Instant, real-time search, was
introduced in late 2010 in an attempt to make search results more timely and
relevant. Historically, site administrators have spent months or even years
optimizing a website to increase search rankings. With the growth in popularity
of social media sites and blogs, the leading engines made changes to their
algorithms to allow fresh content to rank quickly within the search results.[19]
In February 2011, Google
announced the "Panda update, which penalizes websites
containing content duplicated from other websites and sources. Historically
websites have copied content from one another and benefited in search engine
rankings by engaging in this practice, however Google implemented a new system
which punishes sites whose content is not unique.[20]
In April 2012, Google
launched the Google Penguin update, the goal of which was to penalize websites
that used manipulative techniques to improve their rankings on the search
engine.[21]
Relationship with search engines

By 1997, search engines
recognized that webmasters were making efforts to rank well in
their search engines, and that some webmasters were even manipulating their
rankings in
search results by stuffing pages with excessive or irrelevant keywords. Early
search engines, such as AltaVista and Infoseek, adjusted their algorithms
in an effort to prevent webmasters from manipulating rankings.[22]
In 2005, an annual conference,
AIRWeb (Adversarial Information Retrieval on the Web), was created to bring
together practitioners and researchers concerned with search engine
optimization and related topics.[23]
Companies that employ
overly aggressive techniques can get their client websites banned from the
search results. In 2005, the Wall Street Journal reported on a
company, Traffic Power,
which allegedly used high-risk techniques and failed to disclose those risks to
its clients.[24] Wired magazine reported
that the same company sued blogger and SEO Aaron Wall for writing about the
ban.[25] Google's Matt Cutts later confirmed that
Google did in fact ban Traffic Power and some of its clients.[26]
Some search engines have
also reached out to the SEO industry, and are frequent sponsors and guests at
SEO conferences, chats, and seminars. Major search engines provide information
and guidelines to help with site optimization.[27][28] Google has a Sitemaps program to help
webmasters learn if Google is having any problems indexing their website and
also provides data on Google traffic to the website.[29] Bing Toolbox provides a way for
webmasters to submit a sitemap and web feeds, allowing users to determine the
crawl rate and how many pages have been indexed by their search engine.
Methods
Suppose each circle is a
website, and an arrow is a link from one website to another, such that a user
can click on a link within, say, website F to go to website B, but not vice
versa. Search engines begin by assuming that each website has an equal chance
of being chosen by a user. Next, crawlers examine which websites link to which
other websites and guess that websites with more incoming links contain
valuable information that users want.
Search engines use complex
mathematical algorithms to guess which websites a user seeks, based in part on
examination of how websites link to each other. Since website B is the
recipient of numerous inbound links, B ranks highly in a web search, and will
come up early in a web search. Further, since B is popular, and has an outbound
link to C, C ranks highly too.
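A minimal sketch of this inbound-link heuristic, using an assumed set of websites and links that mirrors the example above (the site names and link structure are illustrative only), could look like this in Python:

    # Illustrative only: count inbound links in the hypothetical graph described above.
    # The sites and links are assumptions matching the "circles and arrows" example.
    outbound = {
        "A": ["B"],
        "C": ["B"],
        "D": ["B"],
        "E": ["B", "D"],
        "F": ["B", "E"],
        "B": ["C"],      # B is popular and links to C, so C benefits as well
    }

    inbound = {site: 0 for site in outbound}
    for source, targets in outbound.items():
        for target in targets:
            inbound[target] = inbound.get(target, 0) + 1

    # Sites with more inbound links are treated as more likely to hold what users want.
    for site, count in sorted(inbound.items(), key=lambda item: -item[1]):
        print(site, count)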
Getting indexed
The leading search engines,
such as Google, Bing and Yahoo!, use crawlers to find pages for
their algorithmic search results. Pages that are linked from other search
engine indexed pages do not need to be submitted because they are found
automatically. Some search engines, notably Yahoo!, operate a paid submission
service that guarantees crawling for either a set fee or cost per click.[30] Such programs usually
guarantee inclusion in the database, but do not guarantee specific ranking
within the search results.[31] Two major
directories, the Yahoo! Directory and the Open Directory Project, both require manual
submission and human editorial review.[32] Google offers Google Webmaster Tools, for which an XML Sitemap feed can be created
and submitted for free to ensure that all pages are found, especially pages
that are not discoverable by automatically following links.[33]
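As an illustration of the sitemap mechanism mentioned above, the following Python sketch generates a minimal XML Sitemap using only the standard library; the URLs and output file name are placeholders, and the element names follow the public sitemaps.org protocol.

    # Minimal sketch: generate a sitemap.xml listing pages crawlers might otherwise miss.
    # URLs below are placeholders; the sitemap protocol is documented at sitemaps.org.
    import xml.etree.ElementTree as ET

    urls = [
        "https://www.example.com/",
        "https://www.example.com/deep/page-not-linked-anywhere.html",
    ]

    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for address in urls:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = address

    ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)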
Search engine crawlers may look at
a number of different factors when crawling a site. Not every
page is indexed by the search engines. Distance of pages from the root
directory of a site may also be a factor in whether or not pages get crawled.[34]
Preventing crawling
Main article: Robots Exclusion Standard
To avoid undesirable
content in the search indexes, webmasters can instruct spiders not to crawl
certain files or directories through the standard robots.txt file in the root
directory of the domain. Additionally, a page can be explicitly excluded from a
search engine's database by using a meta tag specific to robots.
When a search engine visits a site, the robots.txt located in the root directory is the first file
crawled. The robots.txt file is then parsed, and will instruct the robot as to
which pages are not to be crawled. As a search engine crawler may keep a cached
copy of this file, it may on occasion crawl pages a webmaster does not wish
crawled. Pages typically prevented from being crawled include login-specific
pages such as shopping carts, and user-specific content such as search results from
internal searches. In March 2007, Google warned webmasters that they should
prevent indexing of internal search results because those pages are considered
search spam.[35]
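For illustration, the sketch below shows how a well-behaved crawler might honor such rules using Python's standard urllib.robotparser module; the disallowed paths and URLs are placeholder assumptions.

    # Sketch: check whether a URL may be crawled according to robots.txt rules.
    # The rules and URLs below are placeholder examples.
    import urllib.robotparser

    rules = """
    User-agent: *
    Disallow: /cart/
    Disallow: /search
    """

    parser = urllib.robotparser.RobotFileParser()
    parser.parse(rules.splitlines())

    print(parser.can_fetch("*", "https://www.example.com/cart/checkout"))   # False
    print(parser.can_fetch("*", "https://www.example.com/products/widget")) # True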
Increasing prominence
A variety of methods can
increase the prominence of a webpage within the search results. Cross linking between pages of the same
website to provide more links to the most important pages may improve its
visibility.[36] Writing content that
includes frequently searched keyword phrases, so as to be relevant to a wide variety
of search queries, will tend to increase traffic.[36] Updating content so
as to keep search engines crawling back frequently can give additional weight
to a site. Adding relevant keywords to a web page's meta data, including
the title
tag and meta description, will tend to improve the relevancy of a site's search
listings, thus increasing traffic. URL normalization of web pages
accessible via multiple URLs, using the canonical link element[37] or via 301 redirects, can help make sure
links to different versions of the URL all count towards the page's link
popularity score.
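As a minimal sketch of URL normalization via a 301 redirect (the hostnames are placeholders, and production sites typically configure this in the web server rather than in application code), a request for a non-canonical host can be permanently redirected to the canonical URL:

    # Sketch: 301-redirect requests for a non-canonical host to the canonical URL.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CANONICAL_HOST = "www.example.com"   # placeholder canonical hostname

    class CanonicalRedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            host = self.headers.get("Host", "")
            if host != CANONICAL_HOST:
                # A permanent redirect tells crawlers to credit the canonical URL.
                self.send_response(301)
                self.send_header("Location", f"https://{CANONICAL_HOST}{self.path}")
                self.end_headers()
            else:
                self.send_response(200)
                self.send_header("Content-Type", "text/html; charset=utf-8")
                self.end_headers()
                # The canonical link element also points engines at the preferred URL.
                body = (f'<link rel="canonical" '
                        f'href="https://{CANONICAL_HOST}{self.path}">')
                self.wfile.write(body.encode("utf-8"))

    if __name__ == "__main__":
        HTTPServer(("", 8080), CanonicalRedirectHandler).serve_forever()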
White hat versus black hat techniques
SEO techniques can be
classified into two broad categories: techniques that search engines recommend
as part of good design, and those techniques of which search engines do not
approve. The search engines attempt to minimize the effect of the latter, among
them spamdexing. Industry commentators
have classified these methods, and the practitioners who employ them, as
either white hat SEO or black hat SEO.[38] White hats tend to
produce results that last a long time, whereas black hats anticipate that their
sites may eventually be banned either temporarily or permanently once the
search engines discover what they are doing.[39]
An SEO technique is
considered white hat if it conforms to the search engines' guidelines and
involves no deception. As the search engine guidelines[27][28][40] are not written as a
series of rules or commandments, this is an important distinction to note.
White hat SEO is not just about following guidelines, but is about ensuring
that the content a search engine indexes and subsequently ranks is the same
content a user will see. White hat advice is generally summed up as creating
content for users, not for search engines, and then making that content easily
accessible to the spiders, rather than attempting to trick the algorithm from
its intended purpose. White hat SEO is in many ways similar to web development
that promotes accessibility,[41] although the two are
not identical.
Black hat SEO attempts to improve
rankings in ways that are disapproved of by the search engines, or involve
deception. One black hat technique uses text that is hidden, either as text
colored similarly to the background, in an invisible div, or positioned off-screen.
Another method gives a different page depending on whether the page is being
requested by a human visitor or a search engine, a technique known as cloaking.
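The essence of cloaking is a simple check on the requesting user agent, as in the illustrative Python sketch below; the crawler names and page file names are assumed placeholders, and the example is meant to show why the practice is considered deceptive, not to recommend it.

    # Illustration only: this user-agent check is the essence of cloaking, a practice
    # that search engines penalize. Bot names and page names are assumed placeholders.
    KNOWN_CRAWLERS = ("Googlebot", "bingbot", "Slurp")

    def choose_page(user_agent: str) -> str:
        if any(bot in user_agent for bot in KNOWN_CRAWLERS):
            return "keyword_stuffed_page.html"   # version served only to crawlers
        return "normal_page.html"                # version shown to human visitors

    print(choose_page("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # keyword_stuffed_page.html
    print(choose_page("Mozilla/5.0 (Windows NT 10.0)"))            # normal_page.html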
Search engines may penalize
sites they discover using black hat methods, either by reducing their rankings
or eliminating their listings from their databases altogether. Such penalties
can be applied either automatically by the search engines' algorithms, or by a
manual site review. One example was the February 2006 Google removal of
both BMW Germany and Ricoh Germany for use of
deceptive practices.[42] Both companies,
however, quickly apologized, fixed the offending pages, and were restored to
Google's list.[43]
SEO copywriting
Search engine optimization
(SEO) copywriting is textual composition for web page marketing that emphasizes
skillful manipulation of the page's wording to place it among the first results
of a user's search list, while still
producing readable and persuasive content.
Technical details
Crawlers rely upon keyword
placement within the text of an article, and typically disregard images.[44] Text appearing in
several key locations (such as the <title> and <meta> tags of the page's code)
gets special attention because search engines compare information found there
with other pages to determine relevance. SEO copywriters also strive for unique
written content on the page, distinguishing it from similar pages competing for
placement in the search results. Other factors that determine relevance during
a search are the page's keyword density, the placement of the
keywords, and the number of links to and from the page from other pages.
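Keyword density can be approximated with a simple word count, as in the following illustrative Python sketch; the tokenization, sample text and exact formula are simplifying assumptions, since real search engines weigh many additional signals.

    # Simplified sketch: keyword density as (keyword occurrences) / (total words).
    # Real ranking systems use many more signals; this is illustrative only.
    import re

    def keyword_density(text: str, keyword: str) -> float:
        words = re.findall(r"[a-z0-9']+", text.lower())
        if not words:
            return 0.0
        hits = sum(1 for word in words if word == keyword.lower())
        return hits / len(words)

    sample = "Fresh roasted coffee beans. Our coffee is roasted weekly, and coffee ships fast."
    print(f"{keyword_density(sample, 'coffee'):.1%}")  # about 23% of the words are 'coffee'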
Professional role
SEO copywriting is most
often one of the various jobs of a copywriter. However, there are freelance copywriters who hire
out their services solely for SEO, agencies and firms that specialize in SEO
(including SEO copywriting), and copywriting agencies that offer SEO
copywriting as part of comprehensive writing and editing services.
A freelance SEO copywriter
will work with a client to determine the appropriate keywords needed to promote
the client's business. Online keyword research tools are then used to gather a
list of potential phrases.
While an obvious goal of
SEO copywriting is to cause the business's or product's web page to rank highly
in a search, most experts in the field would argue that it is of secondary
priority. The foremost goal of SEO copywriting is to produce succinct,
effectively persuasive text for a well-written web page that will motivate the
reader to take action. Writing that "optimizes" a search but offers
little useful information or only weak persuasion is frowned upon in the
profession as ineffective. At its worst, it becomes a costly resource inducing
potential buyers to turn away from the site rather than generating sales. The
main goal of the SEO copywriter remains writing interesting content that people
want to read and link to.
SEO copywriters often work
with "optimizers" who are more expert in the technical aspects of
SEO. Together they will not only rewrite text but also alter the code to design
a page that is most favored by search engines. It is not a clear, scientific
process, however. Attempting to keep themselves competitive and defending
against the composition strategies of so-called black hat SEOs, search engine
designers today do not disclose the complex algorithmic processes of their
search engines. In spite of the insights of optimizing technicians, SEO
copywriting requires finesse and repeated experimentation to assess how the
team's page revisions will fare in a potential customer's search.
As a marketing strategy
SEO is not an appropriate
strategy for every website, and other Internet marketing strategies can be more
effective, depending on the site operator's goals.[45] A successful Internet
marketing campaign may also depend upon building high quality web pages to
engage and persuade, setting up analytics programs to enable
site owners to measure results, and improving a site's conversion rate.[46]
SEO may generate an
adequate return
on investment.
However, search engines are not paid for organic search traffic, their
algorithms change, and there are no guarantees of continued referrals. Due to
this lack of guarantees and certainty, a business that relies heavily on search
engine traffic can suffer major losses if the search engines stop sending
visitors.[47] Search engines can
change their algorithms, impacting a website's placement, possibly resulting in
a serious loss of traffic. According to Google's CEO, Eric Schmidt, in 2010,
Google made over 500 algorithm changes – almost 1.5 per day.[48] It is considered wise
business practice for website operators to liberate themselves from dependence
on search engine traffic.[49]
International markets
Optimization techniques are
highly tuned to the dominant search engines in the target market. The search
engines' market shares vary from market to market, as does competition. In
2003, Danny
Sullivan stated
that Google represented about 75% of all searches.[50] In markets outside
the United States, Google's share is often larger, and Google remains the
dominant search engine worldwide as of 2007.[51] As of 2006, Google
had an 85–90% market share in Germany.[52] While there were
hundreds of SEO firms in the US at that time, there were only about five in
Germany.[52] As of June 2008, the
market share of Google in the UK was close to 90% according to Hitwise.[53] That market share is
achieved in a number of countries.
As of 2009, there are only
a few large markets where Google is not the leading search engine. In most
cases, when Google is not leading in a given market, it is lagging behind a
local player. The most notable markets where this is the case are China, Japan,
South Korea, Russia and the Czech Republic, where respectively Baidu, Yahoo! Japan, Naver, Yandex and Seznam are market leaders.
Successful search
optimization for international markets may require professional translation of web pages,
registration of a domain name with a top level domain in the target market,
and web
hosting that
provides a local IP address. Otherwise, the fundamental elements of
search optimization are essentially the same, regardless of language.[52]
Legal precedents
On October 17, 2002,
SearchKing filed suit in the United States District Court, Western District of
Oklahoma, against the search engine Google. SearchKing's claim was that
Google's tactics to prevent spamdexing constituted a tortious interference with contractual
relations. On May 27, 2003, the court granted Google's motion to dismiss the
complaint because SearchKing "failed to state a claim upon which relief
may be granted."[54][55]
In March 2006, KinderStart filed a lawsuit
against Google over search engine
rankings. KinderStart's website was removed from Google's index prior to the
lawsuit and the amount of traffic to the site dropped by 70%. On March 16, 2007
the United
States District Court for the Northern District of California (San
Jose Division) dismissed KinderStart's complaint without leave to amend,
and partially granted Google's motion for Rule 11 sanctions against
KinderStart's attorney, requiring him to pay part of Google's legal expenses.