This article discusses the basics of SEO, short for "search engine optimization", and is divided into the following sections:
• Overview of Search
• Definition of SEO
• Types of SEO
Overview of Search
When a person searches for something on a search engine, the results that come back are referred to as "search results" (SEO professionals call these "SERPs", which stands for Search Engine Results Pages).
The text that is searched for is referred to as the "search phrase".
Keywords are the single words that are most important to a web site (e.g. "insurance"), while key phrases are combinations of keywords that people search for (e.g. "term life insurance"). Very few people search on single words; rather, they search for phrases.
NOTE: when you are searching for something, be as specific as possible; this will return the most relevant results.
Definition of SEO
Search engine optimization is a generic term that encompasses a variety of activities. As such, it often means different things to different people, depending on what they have read or researched.
Simply put, search engine optimization refers to the act of improving a web site's ranking (i.e. placement) within search results.
Types of SEO
SEO can be categorized into two specific initiatives: on-site optimization and off-site optimization.
On-Site Optimization
On-site optimization refers to work done on a web site, which can be the modification of existing content as well as the addition of new content.
Examples of on-site optimization are:
• Fixing domain issues (e.g. multiple domains or sub-domains that all resolve to the same site, resulting in duplicate content problems)
• Page-level optimization of title tags and meta tags (although meta tags aren't nearly as important as they used to be)
• Using key phrases within important page sections such as anchor (link) text, header tags (titles), and the first and last paragraphs of a page (see the sketch after this list)
• Using keywords in the site structure (e.g. folder names, filenames and even the domain itself)
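To make the page-level items above concrete, here is a minimal Python sketch (standard library only) that checks whether a target key phrase appears in a page's title tag, header tags, and anchor text. The URL and key phrase are illustrative assumptions, not taken from the article, and a real audit tool would be more thorough.

from html.parser import HTMLParser
from urllib.request import urlopen

KEY_PHRASE = "term life insurance"  # assumed target key phrase, for illustration only

class OnPageChecker(HTMLParser):
    """Records whether the key phrase shows up in the title, headers, or anchor text."""
    def __init__(self):
        super().__init__()
        self.current = None
        self.found = {"title": False, "header": False, "anchor": False}

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.current = "title"
        elif tag in ("h1", "h2", "h3"):
            self.current = "header"
        elif tag == "a":
            self.current = "anchor"

    def handle_endtag(self, tag):
        self.current = None

    def handle_data(self, data):
        if self.current and KEY_PHRASE in data.lower():
            self.found[self.current] = True

html = urlopen("https://www.example.com/").read().decode("utf-8", "ignore")  # assumed URL
checker = OnPageChecker()
checker.feed(html)
print(checker.found)  # e.g. {'title': True, 'header': True, 'anchor': False}

Running a check like this against each important page gives a quick, rough view of which key phrases actually appear where they matter.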
The trick to on-site optimization is having optimized text that: reads well; offers solutions to readers (as opposed to being product-centric); and elicits a favourable emotional response from the user.
It is important to mention that a professional SEO campaign always starts with market research and keyword research (the scope of which is outside this article).
Off-Site Optimization
Off-site optimization refers to work that does not involve direct modification of a web site, yet affects the search engine ranking of the site.
There are two main categories of off-site optimization: link building and social media involvement.
Link building refers to getting hyperlinks from other sites that point to your site (either the home page or other pages within your site). These "backlinks" are critical when it comes to ranking well in the search engines, because they establish relevancy and authority for target key phrases. One of Google's ranking factors (of which there are over two hundred) is PageRank, which estimates the importance of a page by looking at the backlinks that point to it.
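As a rough illustration of how backlinks can feed an importance score, the sketch below runs a simplified PageRank-style power iteration over a tiny, made-up link graph. The graph, damping factor, and iteration count are assumptions for illustration; this is not Google's actual algorithm.

# A tiny, made-up link graph: each page maps to the pages it links out to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],  # D gives C an extra backlink
}
damping = 0.85  # conventional damping factor, assumed here
rank = {page: 1.0 / len(links) for page in links}

for _ in range(50):  # iterate until the scores settle
    new_rank = {page: (1 - damping) / len(links) for page in links}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))
# "C" ends up with the highest score because it has the most backlinks.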
Some methods by which backlinks can be procured are:
• Guest blogging
• Article writing
• Directory entries
• Blog and forum commenting
• Social media entries (e.g. "tweets", "diggs", "stumbles", etc.)
• Press releases
Social media involvement has officially been acknowledged by the major search engines as a factor in their ranking algorithms. In addition, it establishes relevancy and authority for target key phrases.
Note that when it comes to an SEO campaign, off-site optimization is by far the most time-consuming part of the work!
Friday, October 21, 2011
Tuesday, April 12, 2011
The New Parameters in Google Algorithm
Online PR News – 12-April-2011 – Webmasters across the world, particularly those engaged in Google SEO and optimization, should know that Google has made some significant changes to its algorithm. The change is significant because it has prompted a major shift in Google's overall ranking system. As more and more people optimize their sites to obtain higher rankings in Google's search engine results pages, it has become very difficult for Google to identify the quality ones. It can no longer rely solely on backlinks to distinguish the good from the bad.
Therefore it has become very important for Google to introduce a stricter filtering process. It is now taking into consideration many new factors alongside the ones it relied on earlier. The new factors are:
Domain-Related Factors:
1. The past record of a domain, such as how often it changed IP address
2. The domain's past owners, i.e. how often ownership changed
3. External mentions of the domain (non-linked)
4. The geo-targeting setting in Google Webmaster Tools
5. Use of the keyword in the domain name
Site Architecture Factors:
1. Website URL structure
2. Site navigation structure
3. Use of external CSS/JS files
4. Website structure accessibility (use of inaccessible navigation, JavaScript, etc.)
5. Use of canonical URLs (see the sketch after this list)
6. "Correct" HTML code (?)
7. Cookie usage
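To show how one of these items, the canonical URL, can be declared and detected, here is a small Python sketch (standard library only) that extracts a page's rel="canonical" link. The URL is an assumed example.

from html.parser import HTMLParser
from urllib.request import urlopen

class CanonicalFinder(HTMLParser):
    """Captures the href of a <link rel="canonical"> tag, if the page declares one."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

page = urlopen("https://www.example.com/page?utm_source=x").read().decode("utf-8", "ignore")  # assumed URL
finder = CanonicalFinder()
finder.feed(page)
print(finder.canonical)  # e.g. https://www.example.com/page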
Content Factors:
1. Updated content (frequently updated content gets preference)
2. Content uniqueness (duplicate content invites a penalty)
3. Pure text content ratio (text without links, images, code, etc.)
4. Keyword density (an ideal keyword density is 2-5%; see the sketch after this list)
5. Rampant misspelling, bad grammar, and 10,000-word screeds without punctuation
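Because the list above cites a 2-5% keyword density target, here is a short Python sketch that computes keyword density for a block of text: occurrences of the key phrase (counted in words) divided by the total word count. The sample text and key phrase are made up for illustration.

import re

def keyword_density(text, phrase):
    words = re.findall(r"[a-z']+", text.lower())
    phrase_words = phrase.lower().split()
    # Count how many word positions start an exact match of the phrase.
    hits = sum(
        words[i:i + len(phrase_words)] == phrase_words
        for i in range(len(words) - len(phrase_words) + 1)
    )
    # Each hit accounts for len(phrase_words) of the total words.
    return 100.0 * hits * len(phrase_words) / len(words) if words else 0.0

text = ("Term life insurance is simple: term life insurance pays a fixed "
        "benefit if the insured dies within the term of the policy.")
print(f"{keyword_density(text, 'term life insurance'):.1f}%")
# Roughly 27% here -- far above the suggested 2-5% range, i.e. keyword stuffing.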
Internal Linking Factors:
1. Number of internal links to the page (counted in the sketch after this list)
2. Number of internal links to the page with identical/targeted anchor text
3. Number of internal links to the page from content (rather than from the navigation bar, breadcrumbs, etc.)
4. Number of links using the "nofollow" attribute (?)
5. Internal link density
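To show how the internal-linking counts above could be measured, here is a Python sketch (standard library only) that counts the internal links on a page and how many of them carry rel="nofollow". The page URL is an assumed example.

from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

PAGE_URL = "https://www.example.com/"  # assumed example page
SITE_HOST = urlparse(PAGE_URL).netloc

class LinkCounter(HTMLParser):
    """Counts internal links, and how many of them use the nofollow attribute."""
    def __init__(self):
        super().__init__()
        self.internal = 0
        self.nofollow = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        if not href:
            return
        # Resolve relative links, then compare hosts to decide "internal".
        if urlparse(urljoin(PAGE_URL, href)).netloc == SITE_HOST:
            self.internal += 1
            if "nofollow" in (attrs.get("rel") or ""):
                self.nofollow += 1

counter = LinkCounter()
counter.feed(urlopen(PAGE_URL).read().decode("utf-8", "ignore"))
print(counter.internal, "internal links,", counter.nofollow, "with nofollow")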
Website Factors:
1. Use of robots.txt (see the sketch after this list)
2. Overall site update frequency
3. Overall site size
4. Amount of time passed since being indexed by Google
5. Use of an XML sitemap
6. On-page trust flags (contact info (even more important for local search), privacy policy, TOS, and similar)
7. Website type (e.g. blogs rather than informational sites in the top 10)
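Two of the items above, robots.txt and the XML sitemap, are easy to inspect programmatically. The sketch below uses Python's standard robotparser against an assumed example domain; the robots.txt content shown in the comment is likewise only an example.

# A typical robots.txt might contain:
#   User-agent: *
#   Disallow: /private/
#   Sitemap: https://www.example.com/sitemap.xml
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://www.example.com/robots.txt")  # assumed domain
parser.read()
print(parser.can_fetch("Googlebot", "https://www.example.com/private/page.html"))  # False if disallowed
print(parser.site_maps())  # list of Sitemap: URLs, if declared (Python 3.8+)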
These are some of the factors that Google is taking into account in its latest updated algorithm.
Contact Information:
Monday, April 11, 2011
Future of Google digital library is hard to read
It was a glittering dream: A vast worldwide digital library, tens of millions of books all in one easily accessible place . . . named Google.
Now that dream has been denied, and soon dreamers will meet to see whether they can fashion a more workable vision - one that will pass legal muster.
In a Manhattan court March 22, U.S. Circuit Judge Denny Chin struck down an agreement among search engine Google, the Authors Guild, and the Association of American Publishers. The pact would have let Google sell access to its ever-growing database of more than 15 million digitized books. But no. The decision, a pivotal moment in the history of electronic books and libraries, stands firm on traditional notions of copyright, monopolies, and privacy. With the agreement rejected, all sides will huddle April 25 to see whether there's a next step.
"I'd love to be a fly on the wall at that meeting," says Corynne McSherry, intellectual-property director at the Electronic Frontier Foundation, which filed an objection in the case along with the American Civil Liberties Union.
"I don't know how they're going to work it out," says Ken Auletta, author of Googled: The End of the World as We Know It.
It's been a twisty-turny journey. In 2002, Google began scanning books into its database. In 2004, it launched Google Print (later renamed Google Books), by which users could view snippets and download, for a fee, public-domain books (those to which no one holds a copyright). Google partnered with places such as Harvard, Michigan, Stanford, and Oxford Universities and began to digitize their holdings.
But many of those books were under copyright, prompting the Authors Guild and the Publishers Association to sue in 2005.
A settlement was reached in 2008. Tellingly, Google agreed to pay $125 million to search for copyright holders and pay authors and publishers fees and royalties. Auletta says, "They were, in effect, acknowledging there's such a thing as copyright. That's a huge admission for a digital company to make."
But in 2009, the U.S. Department of Justice, worried about giving big Google a monopoly, balked. The agreement was amended, and last year it reached Chin's desk. The digital world had been waiting for the outcome.