Web Site Ranking: Page Views or Time on Site?

Nielsen has decided to rank site popularity using “time on site” instead of “page views.” Evidently, this will improve accuracy.

My reaction? Whether the metric is “page views” or “time on site” makes little difference: the inaccuracy of nearly all internet traffic ranking systems is due to biased samples of web consumers. Because Nielsen takes some pains to create an unbiased sample of panelists, its ratings will be less deficient than those of other services. But make no mistake: the results will still be relatively inaccurate.

(And, for what it’s worth, “time on site” may be easier to game than page views. Carnac predicts we’ll see pop-under windows that stay open after a viewer has left a site!)

But forget about stealth pop-unders for now. Let’s assume browser designers figure out how to prevent them entirely. Why will Nielsen rankings still be biased? How can we devise a good traffic ranking system? I’ll elaborate.

Measurement bias is the source of inaccurate traffic rankings.

A comparison of Alexa traffic ranks for two knitting blogs and Big Buck Blogger suggests that relative traffic estimates across niches can be off by four orders of magnitude. This error is due to the self-selection bias associated with the Alexa traffic bar. Quantcast, Compete, and most other traffic ranking systems share similar self-selection biases.

Nielsen addresses the bias issue by selecting and inviting panelists rather than relying on self-selection. Nevertheless, I suspect Nielsen will still be biased due to two factors.

Let’s examine Nielsen’s description of their plan.

Under the plan, Nielsen will install software meters on the personal computers of existing and new People Meter panelists. Television panelists who do not want software meters installed on their PCs will not be required to participate in the Internet component of the service. Moreover, since many companies do not allow outside software to be installed on their computers, we will measure Internet viewing at work using an out-of-home meter, which will be introduced in 2008.

Two sources of bias are obvious. Many panelists will likely refuse to have their surfing habits monitored constantly; others won’t. The personalities and web surfing habits of the two groups will likely differ. Result: Bias.

That bias is nothing compared to the one that will be introduced by erratic monitoring of web surfing outside the home.

Let’s say you become a Nielsen panelist. Will you feel free to plop your out-of-home meter on your desk at work? When you take a quick break from the work you are paid to do, will you want to be seen entering data into the device? Will your boss be thrilled to learn you reported business sensitive surfing habits to Nielsen?

Finally, when your boss phones to chat, will you say, “Wait a sec’, boss, I need to enter my recent web site visits”?

The web surfing done at work will be under-counted, while visits by children, teens, stay-at-home mothers, and work-at-home types will be fully counted. Result: Bias.

How can accurate comparisons be made?

I believe accurate traffic rankings will require publishers to install standard third-party tracking scripts on their pages.
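To make the idea concrete, here is a minimal sketch of what such a script might do, written as a browser-side TypeScript snippet. The collection endpoint, field names, and event names are invented for illustration; they are not any particular service’s API, and a real third-party tracker would also handle visitor identification, sampling, and privacy disclosure.

```typescript
// A minimal, hypothetical third-party tracking snippet (browser TypeScript).
// The endpoint and field names are invented for illustration only.
const ENDPOINT = "https://tracker.example.com/collect"; // hypothetical collector
const startTime = Date.now();

function report(event: "pageview" | "exit"): void {
  const payload = JSON.stringify({
    event,
    url: location.href,            // page being measured
    referrer: document.referrer,   // where the visitor came from
    secondsOnPage: Math.round((Date.now() - startTime) / 1000),
  });
  // sendBeacon is designed to survive page unloads better than fetch/XHR
  navigator.sendBeacon(ENDPOINT, payload);
}

// Count the page view as soon as the script runs.
report("pageview");

// Report time on page when the tab is hidden or closed.
document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden") {
    report("exit");
  }
});
```

The point is not this particular snippet; it is that a common script, served by a neutral third party, measures every visitor the same way instead of measuring only a self-selected panel.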

Robert Niles of the Annenberg Online Journalism Review challenged publishers to do so. But merely challenging publishers to “show theirs” will not work. Too many sites benefit from inaccurate rating systems. Accurate statistics will arrive only when publishers who wish to monetize their sites are forced to install traffic tracking scripts.

There are two groups with the power to force scripts: individual advertisers and middlemen like Pay Per Post, Review Me and Sponsored Reviews. To avoid wasting advertisers’ money, both groups should require tracking scripts.

If forced, nearly any publisher who wishes to monetize a blog directly will install tracking scripts. In short order, bloggers who hope to sell books, become television pundits, or win editorial positions at newspapers will also install them. Eventually, the simply vain will follow.

At that point, we can start worrying about trivialities like “time on site” or “page views”!

So, “page views”? Or “time on site”?

Both. The idea that any single metric is “the metric” is idiotic.

It’s fairly obvious that “time on site” is poor as a single metric. Even with a DSL connection, I sometimes tidy up the kitchen while waiting for a video to download. With tabbed browsing, I leave many tabs open at a time. Evidently Bryan Eisenberg does that too!

Yet page views are also flawed: with Ajax or embedded video, a visitor can interact with a site at length without ever triggering a new page load. Still, as Duane Forrester points out, all other things being equal, real time spent on a page does reflect good reader engagement.

So, if a traffic ranking service is trying to inform, why try to decide which is “the metric”? It’s easy to report many metrics; advertisers would probably be interested in page views, time on site, and unique visitors in a day, week and month. Why not publish all eight metrics on the list suggested by Jeremiah Owyang and Kami Huyse?
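For the curious, here is a rough sketch, again in TypeScript, of how a ranking service could report several of those metrics from one event log rather than anointing a single number. The Hit shape, visitor IDs, and sample figures are assumptions made up for the example, not a description of how Nielsen or anyone else actually aggregates.

```typescript
// Illustrative aggregation over reports like the ones a tracking script sends.
// The Hit shape, visitor IDs, and sample numbers are invented for the example.
interface Hit {
  visitorId: string;     // anonymized visitor identifier
  timestamp: number;     // milliseconds since epoch
  secondsOnPage: number; // time reported for the visit
}

function summarize(hits: Hit[], windowMs: number, now: number = Date.now()) {
  const recent = hits.filter(h => now - h.timestamp <= windowMs);
  return {
    pageViews: recent.length,
    timeOnSiteSeconds: recent.reduce((sum, h) => sum + h.secondsOnPage, 0),
    uniqueVisitors: new Set(recent.map(h => h.visitorId)).size,
  };
}

const DAY = 24 * 60 * 60 * 1000;
const hits: Hit[] = [
  { visitorId: "a", timestamp: Date.now() - 0.5 * DAY, secondsOnPage: 45 },
  { visitorId: "a", timestamp: Date.now() - 8 * DAY, secondsOnPage: 120 },
  { visitorId: "b", timestamp: Date.now() - 2 * DAY, secondsOnPage: 10 },
];

// Same log, three windows: nothing forces a service to bless a single number.
console.log(summarize(hits, 1 * DAY));   // daily
console.log(summarize(hits, 7 * DAY));   // weekly
console.log(summarize(hits, 30 * DAY));  // monthly
```

Once the log exists, publishing daily, weekly, and monthly page views, time on site, and unique visitors costs almost nothing extra.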

Even if advertisers end up focusing on only one or two metrics, there is a huge advantage to publishing ranks based on many metrics. If only one metric is used for ranking, web site designers will design sites that game that metric. If the metric is hits, web site designers will include loads of small images; if the metric is page loads, they’ll break the display into multiple pages. If it’s time on site, they’ll lean toward Ajax and video. If “time on site” becomes the metric that sells advertising, some web designers will try to open small pop-unders that stay open long after a visitor leaves!

But advertisers don’t want gamed rankings, and users don’t want web sites designed to game the rankings. So provide ranks based on many metrics.

Anyway, in the end, we all know the real purpose of traffic rankings: Bragging rights. If many are available, the vain can brag about the metric they think “really matters”. I want to brag; you want to brag. So why should Nielsen decide the metric for us?

3 Responses to “Web Site Ranking: Page Views or Time on Site?”

  1. Tricia says:

    I know my sites are being monitored by Nielsen because of my contract with the Glam Network. I don’t have access to the data they’ve gathered, though. I’m pretty sure when I signed my contract allowing their metrics they were only estimating US visitors. That in itself is a huge fault, because no website has purely US-only visitors unless it has somehow managed to block visits from every other country.

  2. Joana says:

    I sincerely doubt my sites are being monitored by them, but hey, I could be wrong.

    From a research perspective, there is always a sample bias no matter how carefully you select your sample group. If you’re too careful you will have added to the bias itself. As you pointed out, one of the most obvious biases will be that people will object to having their habits monitored and won’t sign up.

    But there is also a law that says researchers and observers will always, irrevocably, affect the subjects they are studying simply by studying them. One of the obvious results of this is that people will alter their habits knowing that they are being studied.

    Ah sorry, I’ve rambled quite a bit here.

  3. Nicole says:

    I don’t think my site is being monitored by Nielsen, however I did find the post/article interesting.

    I had no idea that it was even a possibility. And I have to agree with the above poster’s comment on Alexa. There is no rhyme or reason to how they come up with what they come up with - it’s simply crazy!
