In the Paid Search side of its business, Google uses click-through rate (CTR) as a major factor in its “Quality Score” calculation, which in turn is a key input to its AdWords ad auctions. This is because CTR is a proxy for relevance: if enough users click on an ad after performing a keyword search, it’s reasonable to assume that the ad is relevant to that keyword. Google’s auction system rewards relevant creative-keyword combinations and penalizes less relevant ones. This ensures a satisfying user experience and Full article»
While keyword density is important (regardless of what you’ve heard to the contrary), it’s not enough to simply say the keyword you want to rank for many times, or, even better, the proper number of times.
Keyword density tools like Bruce Clay’s can help you figure out how to do that, but it’s also important to be engaging to your end users and to look natural to search engine algorithms. Peppering related keywords into your writing can help with both of these goals Full article»
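To make the underlying arithmetic concrete, here is a minimal sketch of a keyword-density calculation (this is my own illustration, not Bruce Clay’s tool; it handles only single-word keywords and a simple word tokenizer):

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Return the keyword's share of total words, as a percentage.
    Limitation: single-word keywords only; phrases would need n-gram counting."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == keyword.lower())
    return 100.0 * hits / len(words)

# 12 words, 3 of which are "seo" -> 25% density
sample = "SEO tips: good SEO writing reads naturally while still using SEO terms."
print(keyword_density(sample, "SEO"))
```

A density that high would read as stuffed to both users and algorithms; the point of the tools above is finding the level that reads naturally.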
It can be hard to keep up with everything going on in the world of Search Engine Optimization; there are so many great sources to keep track of…Matt Cutts, Bruce Clay, the SEOmoz guys, and so on…fortunately there is a single great source that pulls all of them together. And, no, it’s not the Roswell Daily Record…although some of the material in the SEO community certainly qualifies as “conspiracy-theory” fodder.
The ultimate source of SEO News, better than any I have seen anywhere else on the planet, is Full article»
There is a company that has been doing content scraping on an almost unimaginable scale, to the point that it amounts to a completely different business model from that of most others who do this. Like many of the sites you’ve seen out there, its approach is to spider the web and copy other people’s content; subsections of that content are then “mashed” together and presented to end users – essentially auto-generated web pages.
The difference between this company and the myriad of other scrapers out there is in the “mashing up” process Full article»
Buried away in a Google patent application from 2006 entitled “DOCUMENT SCORING BASED ON DOCUMENT INCEPTION DATE”, there is a somewhat obscure reference to using the “entropy” of a document. “Entropy” in this sense is not simply the physics definition, under which your daughter’s room tends towards a maximum state of disorganization; instead, it refers to the definition from the field of Information Theory, which applies the concept to information rather than atoms.
Wikipedia has a lengthy entry on this, but you can think of Shannon entropy as essentially measuring how much information is in a document.
If you have a 20,000-word document that simply consists of “all work and no play makes Jack a dull boy” repeated 2,000 times Full article»
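To illustrate the idea (a minimal sketch of Shannon entropy over a word distribution – my own toy example, not the method from the patent): the 20,000-word repeated document has exactly the same per-word entropy as the single ten-word sentence, which is why all that repetition adds essentially no information.

```python
import math
from collections import Counter

def word_entropy(text: str) -> float:
    """Shannon entropy, in bits per word, of the text's word distribution."""
    words = text.lower().split()
    total = len(words)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(words).values())

boring = "all work and no play makes jack a dull boy " * 2000  # ~20,000 words
varied = ("the quick brown fox jumps over a lazy dog while seven "
          "zany wizards mix bright vexing potions")

print(round(word_entropy(boring), 2))  # 10 equiprobable words: log2(10) bits
print(round(word_entropy(varied), 2))  # 17 distinct words: higher entropy
```

Ten equally frequent words give log2(10) ≈ 3.32 bits per word no matter how many times the sentence repeats, while even a short varied sentence scores higher – a plausible signal for flagging machine-padded pages.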
I’ve seen a number of postings about Google Sitelinks: how you can influence them, how Google likely decides whether to assign Sitelinks to a website’s entry in the SERPs, and so on. Ultimately, I think these Sitelinks are a terrible thing for Internet marketers, and the LAST thing you should do is try to influence Google to add them for your site; here’s why.
If your website has come up in a search, then the name of your website is either a brand term, or Full article»
There have been comparatively few articles in the mainstream SEO blogs about creating content based on formulas – the only folks who seem to cover this topic tend to be from the seedy underside of affiliate marketing, under the term “article spinning”. David Leonhardt’s recent article (more reputable, I think) gives some good examples of what the practice entails.
There are various tools available for “spinning” content, but depending on the business Full article»
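For readers unfamiliar with what these tools actually do: most of them resolve “spintax” templates, where brace-and-pipe groups mark interchangeable phrasings. Here is a minimal sketch of that mechanism (my own illustration, not any particular commercial tool):

```python
import random
import re

_GROUP = re.compile(r"\{([^{}]*)\}")  # innermost {a|b|c} group

def spin(template: str, rng: random.Random) -> str:
    """Resolve one variant of a spintax template such as '{fast|quick} tips'.
    Resolving the innermost group first makes nested groups work too."""
    while True:
        m = _GROUP.search(template)
        if m is None:
            return template
        choice = rng.choice(m.group(1).split("|"))
        template = template[:m.start()] + choice + template[m.end():]

rng = random.Random(7)
print(spin("{Great|Solid} {SEO|search} tips for {beginners|pros}", rng))
```

One template like this yields 2×2×2 = 8 surface variants, which is exactly why search engines treat the technique with suspicion.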
I am not a fan of the million tiny SEO tools that barely do anything and can be found on site after site, but every so often someone puts out something that does some heavy lifting and is extremely useful. When I come across a great free SEO tool that’s worth looking at, I’ll do a rundown here.
One great old-school SEO tool that has been around for a while is Rex Swain’s HTTP Viewer.
It allows you to examine a single page’s HTTP header (or headers, if it redirects in a chain), and see what is returned Full article»
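If you want to see the same information from the command line, the redirect-chain walk is easy to sketch yourself (this is my own rough stand-in for what the tool shows, not Rex Swain’s code; the foo.com URL is a placeholder):

```python
import http.client
from urllib.parse import urljoin, urlsplit

def fetch_head(url):
    """Issue a single HEAD request; return (status, Location header or None)."""
    parts = urlsplit(url)
    Conn = (http.client.HTTPSConnection if parts.scheme == "https"
            else http.client.HTTPConnection)
    conn = Conn(parts.netloc, timeout=10)
    path = parts.path or "/"
    if parts.query:
        path += "?" + parts.query
    conn.request("HEAD", path)
    resp = conn.getresponse()
    status, location = resp.status, resp.getheader("Location")
    conn.close()
    return status, location

def header_chain(url, max_hops=10, head=fetch_head):
    """Follow 3xx redirects hop by hop; return the (status, url) chain."""
    hops = []
    for _ in range(max_hops):
        status, location = head(url)
        hops.append((status, url))
        if status in (301, 302, 303, 307, 308) and location:
            url = urljoin(url, location)  # Location may be relative
        else:
            break
    return hops

# header_chain("http://foo.com/")  # e.g. [(301, 'http://foo.com/'), (200, ...)]
```

Chains of 301s versus a stray 302 in the middle matter for how link equity flows, which is exactly what a header viewer helps you spot.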
[***Note - this entry was composed a few weeks ago as part of my process for queuing up sufficient content for this blog. But today I saw that Rand Fishkin did some debunking of domain age in a Whiteboard Friday, so I decided to push this posting out. Sorry, Rand, I must disagree!***]
First, some background. Google filed a patent application in 2003, granted in 2008, titled “Information retrieval based on historical data”. It talks about scoring a document based on the document’s inception date, which could be determined in a number of ways Full article»
I was examining backlinks for a website today and noticed that it had two backlinks from the same page (names obscured to protect the innocent):
That’s neither here nor there for the site I was looking at, but it indicates a huge problem for the “foo.com” site the backlinks are coming from.
A user can type in either Full article»