Quantity or Quality? Measuring Enterprise 2.0

Sep 25, 2008 13:25 · 585 words · 3 minute read

[Crossposted from the Headshift blog]

One of the most common barriers to adoption of social software in enterprise settings is the perception that using social media isn’t “real work”. Instead, the “social” in “social media” is taken as a synonym for frivolous and time-wasting.

To an extent, there are aspects of social media that reinforce this stereotype - Facebook is often cited as the poster-child for this problem. Give your staff access to Facebook, runs the argument, and they’ll spend all their time poking and throwing sheep at each other.

This is actually a debate that’s as old as management itself - I’ve been around long enough to remember the same arguments around rolling out email, and I’ve no doubt that phones were once seen as a terrible distraction and a drain on employee productivity.

At heart, this is a generic management problem, and it comes down to whether you subscribe to the Theory X or Theory Y view of your staff.

But when applied to Enterprise 2.0, the issue is accentuated by the intangible nature of a lot of social media. How do you put a value on contributing to a wiki or a forum?

A recent post by the Harvard Business School’s Andrew McAfee looks at this problem, and he’s come up with an intriguing take. His point is that this kind of activity is too broad to reduce to a single metric - and that attempting to do so can cause unexpected side-effects. Measure contributions by volume, and it becomes easy to game the system, with quantity trumping quality.

And there are other factors in play, too. Analysis that I’ve seen of the commenting patterns on a large UK network of blogs shows a long-tail effect in action - active commenters are VERY active, producing a pronounced power law curve when activity is compared across the population as a whole.
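That kind of concentration is easy to check for in practice. Here’s a minimal sketch (the user names and comment counts are entirely made up for illustration) that measures what share of all comments comes from the most active slice of commenters:

```python
from collections import Counter

# Hypothetical comment log: one entry per comment, value is the commenter.
comments = (["alice"] * 120 + ["bob"] * 45 + ["carol"] * 30
            + ["dave"] * 4 + ["erin"] * 3 + ["frank"] * 2
            + ["grace"] * 1 + ["heidi"] * 1 + ["ivan"] * 1 + ["judy"] * 1)

# Comments per person, most active first.
counts = sorted(Counter(comments).values(), reverse=True)

# What fraction of all comments came from the top 10% of commenters?
total = sum(counts)
top_slice = counts[: max(1, len(counts) // 10)]
share = sum(top_slice) / total
print(f"Top 10% of commenters wrote {share:.0%} of all comments")
```

With a genuine power-law population, that share comes out strikingly high - a handful of people dominate the raw volume, which is exactly why a single volumetric metric is so easy to skew.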

Professor McAfee proposes a multi-dimensional rating that combines several kinds of activity. Authoring blog posts, editing wiki pages and contributing to forum discussions would all build towards an overall rating - and measuring several activities allows some rather neat visualisation techniques.
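A multi-dimensional rating of this sort boils down to a weighted sum across channels. This sketch is my own illustration, not McAfee’s formula - the channel names and weights are assumptions chosen purely to show the shape of the idea:

```python
# Hypothetical per-person activity counts across several channels.
activity = {
    "ade": {"blog_posts": 12, "wiki_edits": 40, "forum_replies": 8},
    "lee": {"blog_posts": 0,  "wiki_edits": 3,  "forum_replies": 55},
}

# Illustrative weights - an assumption, tuned to whatever the
# organisation actually values, not figures from McAfee's post.
weights = {"blog_posts": 3.0, "wiki_edits": 1.0, "forum_replies": 0.5}

def composite_score(counts):
    """Weighted sum of activity counts across all measured channels."""
    return sum(weights[channel] * n for channel, n in counts.items())

for person, counts in activity.items():
    print(person, composite_score(counts))
```

Keeping the per-channel counts around (rather than just the final score) is what enables the visualisations: a radar chart per person, for example, makes a prolific wiki gardener look very different from a forum-only contributor even when their composite scores are similar.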

Although this is still fundamentally a volumetric approach, it should also be possible to add a quality factor - rather like eBay feedback, there are a number of techniques for rating contributions on quality.
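One simple way to fold quality in - a sketch of my own, with an assumed 1-to-5 peer-rating scale and made-up numbers - is to score each contribution by its distance from a neutral rating, so that good contributions add to the total and poor ones subtract from it:

```python
def quality_weighted(ratings, neutral=3.0):
    """Sum of (rating - neutral): well-rated contributions add to the
    score, poorly-rated ones subtract, neutral ones count for nothing."""
    return sum(r - neutral for r in ratings)

# Eight mediocre contributions versus three good ones.
prolific_but_poor = [2, 2, 3, 2, 3, 2, 2, 3]
sparse_but_good = [5, 5, 4]

print(quality_weighted(prolific_but_poor))  # negative: volume alone loses
print(quality_weighted(sparse_but_good))    # positive despite fewer items
```

Under this scheme churning out mediocre contributions actively hurts your score, which blunts the quantity-over-quality gaming problem - though, as the eBay experience below suggests, it introduces problems of its own.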

I’m sure this model would work to an extent, although it would likely suffer from eBay’s flaws - negative feedback is disproportionately weighted compared to positive. That’s particularly the case if you’re the proud owner of a flawless feedback record - a single negative rating hurts it in a very visible way. And eBay have in fact moved away from a straightforward “buyer-rates-seller-rates-buyer” model to something slanted more in the buyer’s favour.

And there’s also the question of how likely you are to negatively rate the contributions of people you work closely with - frank and honest feedback is a hallmark of some cultures, but I do wonder if there would be a lot of pulled punches.

Perhaps the real question is whether the individual is the right level at which to measure the effectiveness of Enterprise 2.0 tools at all. Social tools are predicated on the idea that the whole is greater than the sum of the parts, as network effects come into play - so perhaps a truer measure of effectiveness would be found by looking at a more aggregated level.