
New Economy. Does it really look that old?

Sometimes I am simply depressed by the extent to which indirect perception, i.e. the way we think about the stuff out there that we can’t sense ourselves, is shaped by people whose perceptions are themselves shaped by the very same mechanism.

Let’s face it, it’s not really the case that the number of possible interpretations of “reality” increases proportionally with the number of people transmitting and reshaping them. Of course, it’s probably true that some interpretations are plainly wrong, and proportionality should not be expected if those we rely on to present a fair picture are actually worth their price. But I would argue that instead of mutual control and even possible cross-fertilisation we can witness a lot of autocatalytic feedback loops.

As soon as one possible scheme of interpretation has become predominant, it becomes indeed very difficult to argue against it. This is as true for general media as it is for scientific paradigms. This is basically what Thomas Kuhn said about scientific progress – it’s about being on the right side of the argument at the right time, not about being right. Because there is no truth apart from what we make true. Now consider the opportunity a general media hype presents for a scientific community in search of outlets for its vision of the world. What would happen?

One possibility is the “New Economy”. Whatever the possible economic content embodied in that concept, there was a time in the late 1990s when everybody wanted to believe that humanity had indeed reached “Business 2.0” (I’d say, if anything, it should have been Business 4.0, version 1 being hunting and gathering, 2 the agricultural economy, 3 industrialisation). When the bubble burst, the public felt deceived by the prophets and turned to those whose opinion had been largely ignored just a little bit earlier – those who now sensed that bashing everything about the “new economy” was the right thing to do (and here you realise why stock-market analysts are a high-risk group for schizophrenia, being obliged to bash now and to justify their earlier recommendations at the same time). I’d say we’re still in the latter phase of dealing with the recent economic past, as, e.g., this article in this week’s Economist demonstrates.

The article reviews a book written by Stan Liebowitz, a professor of economics at the University of Texas at Dallas, “and a long-time sceptic of the view that the Internet changes all the rules…”. And it seems to cover a broad range of issues –

“the exaggerated advantages of Internet retailing over conventional retailing; the false claim that the Internet’s lower costs would give Internet firms bigger profits; the inadequacies of the broadcast-television model of advertising revenues; the poorly understood questions of copyright and digital-rights management. But the crux of the book is two chapters devoted to attacking the theory of lock-in. This was the notion that caused the biggest mistakes – and the area where many economists were most at fault.”

The Economist’s author clearly believes this last claim, as the rest of the article is devoted to explaining it. But I don’t. Quite the contrary: I’d argue that the lock-in argument was (and thus is) right, and that all the other problems (of business judgement) have been far more serious.

It’s not that economists have been wrong to point out that network effects are crucial for a lot of information-based business models and that being first to market is thus a crucial element of business strategy. Likewise, if that is the case, there is a possibility of customer lock-in because of high switching costs, which can offset the losses incurred during the roll-out phase, when getting to a critical mass of customers was the most important thing. There is nothing wrong with this argument.
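To make the mechanism concrete, here is a minimal toy sketch of an individual switching decision – entirely my own illustration, not taken from the article or the book; the linear network-value function, the parameter names and all the numbers are invented:

```python
# Toy model (my own illustration, not from the article or the book):
# a consumer switches from an incumbent to an entrant only if the entrant's
# standalone quality advantage plus its network value outweighs the
# incumbent's network value plus the personal switching cost.

def network_value(users: int, per_user_value: float = 0.01) -> float:
    """Crude linear network effect: each existing user adds a bit of value."""
    return users * per_user_value

def will_switch(quality_gain: float,
                incumbent_users: int,
                entrant_users: int,
                switching_cost: float) -> bool:
    """True if switching pays off for an individual consumer."""
    gain = quality_gain + network_value(entrant_users)
    loss = network_value(incumbent_users) + switching_cost
    return gain > loss

if __name__ == "__main__":
    # A "much better" product (quality_gain = 5) still loses while the
    # entrant's network is tiny and the incumbent's is large...
    print(will_switch(quality_gain=5.0, incumbent_users=1_000,
                      entrant_users=10, switching_cost=2.0))   # False
    # ...but once the entrant approaches critical mass, the same consumer switches.
    print(will_switch(quality_gain=5.0, incumbent_users=1_000,
                      entrant_users=900, switching_cost=2.0))  # True
```

The toy numbers only illustrate that the very same quality advantage wins or loses depending on the relative size of the two networks – which is all the first-mover argument needs.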

Liebowitz’ argument is based on a distinction between weak lock-in and strong lock-in. In the Economist’s words:

As the story was told, Internet lock-in happens largely because of network effects. When the value of a product to consumers increases with the popularity of the product, that is a network effect. (A telephone is worthless if you own the only one; the wider the network, the more useful a phone becomes.) Given strong network effects, a company that gains a big share of the market will be protected from competition from late-movers. Even a plainly better product may fail, because people, much as they may prefer it in itself, will wait for others to buy it first. The implication for business is that moving first is all-important. In refuting this, Mr Liebowitz emphasises the distinction between two kinds of lock-in. The question of compatibility is central to both. One kind of lock-in arises simply because switching to a new product involves a cost beyond the purchase price: costs of learning how to use it, for instance, or the difficulty of using it alongside products you already own. Mr Liebowitz calls this self-incompatibility, or weak lock-in. But there is also strong lock-in. This arises if a new product is incompatible with the choices of other consumers – and if, because of network effects, this external incompatibility reduces the value of the product.

The point is that weak lock-in is very common, indeed pervasive. Many new products have to overcome self-incompatibility. People do not buy a new computer every three months even though the product is improving all the time. Learning to use a new word processor is a bore; for most users, a rival has to be much better, not merely a bit better, to be worth the trouble. Note that if slightly better products are rejected because of self-incompatibility, this is not inefficient: it would be inefficient to buy such a product, incurring all the costs, unless the improvement was big enough to justify it. To repeat, weak lock-in is nothing new.

Strong lock-in is different, because of the network aspect. Strong lock-in means that consumers won’t move to a new and much better product unless a lot of others jump first. If they could somehow agree to move together, they would all be better off. But they cannot. Strong lock-in reflects a failure of co-ordination, it causes economic losses, and in theory it does create opportunities for decisive first-mover advantage. But how common is it, even in the new economy? Mr Liebowitz is forthright on this. Strong lock-in is not merely uncommon, he says, there is actually no known instance.

The lock-in literature leans heavily on just two examples: the persistence of the supposedly inferior QWERTY keyboard (see article) and the triumph of the VHS video standard over the supposedly superior Betamax. Both examples, Mr Liebowitz shows, turn out to be bogus. The QWERTY keyboard is about as fast to use as the most plausible alternatives, and VHS had important non-network advantages over Betamax – notably, longer tapes. Neither case shows strong lock-in.

OK, now let’s see – there are about a hundred different stories out there concerning the alleged efficiency or inefficiency of the QWERTY keyboard (which was allegedly designed to reduce typing speed because of technical issues in mechanical typewriters). Pick and choose your preferred one. And VHS? Longer tapes than Betamax? From a tape-length perspective, Siemens’ Video 2000 was clearly the best product – I still have an eight-hour tape somewhere. VHS is a clear example of network effects, but the explanation is not technical superiority. It was content.

Don’t ask me why, but there were many more films available on VHS than on Beta or any other system. So more and more people picked VHS to be able to benefit from that choice. The more people chose it, the more attractive it became to even later adopters. VHS is a clear example of lock-in. Strong or weak? I don’t think that dichotomous distinction is useful. There are weaker and stronger lock-ins. VHS is an example of a stronger one. Microsoft is an example of a stronger one – one could say, one with an inter-person compatibility issue, as opposed to the “intra-person” compatibility issue of the word-processor example above.
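That self-reinforcing dynamic is easy to simulate. The following toy simulation is my own sketch (the format names, parameters and numbers are all made up): each buyer leans towards the format with the larger catalogue, and studios release the next title for the larger installed base, so a small initial lead snowballs.

```python
import random

# Toy simulation (my own sketch, numbers invented): two video formats compete.
# Buyers mostly choose the format with more titles available; studios put
# the next title on the format with more owners. A small early lead compounds.

def simulate(steps: int = 1000, initial_lead: int = 5, noise: float = 0.1,
             seed: int = 0) -> dict:
    random.seed(seed)
    owners = {"VHS": initial_lead, "Beta": 0}
    titles = {"VHS": 1, "Beta": 1}
    for _ in range(steps):
        # Buyer chooses mostly by catalogue size, with a little idiosyncrasy.
        p_vhs = titles["VHS"] / (titles["VHS"] + titles["Beta"])
        choice = "VHS" if random.random() < p_vhs + random.uniform(-noise, noise) else "Beta"
        owners[choice] += 1
        # Studios release the next title for the larger installed base.
        bigger = max(owners, key=owners.get)
        titles[bigger] += 1
    return owners

# One format (usually VHS, given its head start) ends up with nearly all buyers.
print(simulate())
```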

And these aren’t common effects? Think of Amazon.com’s recommendations, a service I have often used. They are based on a technique called collaborative filtering, which relies heavily on a critical mass of customers. Think of p2p applications – you need a lot of people in such a service to be able to find the stuff you want. And the more people find the stuff they want, the more will use the service.
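For illustration, here is a bare-bones co-purchase sketch of collaborative filtering – a generic illustration, not Amazon’s actual algorithm; the customers and book titles are made up. With only three customers the co-occurrence counts are laughably sparse, which is exactly the critical-mass point.

```python
from collections import defaultdict
from itertools import combinations

# Generic co-purchase collaborative filtering (not Amazon's actual algorithm):
# recommend items most often bought together with what the customer already owns.

purchases = {                      # hypothetical data
    "alice": {"book_a", "book_b"},
    "bob":   {"book_a", "book_b", "book_c"},
    "carol": {"book_b", "book_c"},
}

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(int)
for basket in purchases.values():
    for x, y in combinations(sorted(basket), 2):
        co_counts[(x, y)] += 1
        co_counts[(y, x)] += 1

def recommend(owned, top_n=3):
    """Rank unowned items by how often they co-occur with owned items."""
    scores = defaultdict(int)
    for item in owned:
        for (a, b), n in co_counts.items():
            if a == item and b not in owned:
                scores[b] += n
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend({"book_a"}))  # ['book_b', 'book_c'] with this toy data
```

The more customers feed the purchase matrix, the less sparse the co-occurrence counts become and the better the recommendations get – which is why the service becomes more attractive as more people use it.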

Now switching file-sharing applications is not particularly difficult for most users. But there’s a reason only a handful of useful file-sharing applications exist at any one time, some in niche markets like eDonkey, where a lot of (pirated) movies are swapped.

You get my point. It’s not the wrong principle; it has been the wrong application that has caused financial disaster in so many cases. Network effects, increasing returns and lock-in (of whichever strength) are a lot more important for information-based businesses than for car manufacturers. Car manufacturers face only very limited problems of collective action, most of which have been dealt with legally.

But it is important to emphasise that neither network effects nor customer lock-in are the only conditions for success in cyberspace. For some businesses they may be sufficient, but a useful product or service is still what people pay money for. As long as it is free, a lot of people will consume a lot of stuff. If they have to pay for it, things are different.

But let me say it again: the failures are not about wrong economics, but about its misapplication.
