1) DMCA, Encryption and Mass File Sharing
Can you make a prediction?
I'm interested in your prediction for how the next two years will pan out with regard to all the litigation around mass file sharing (see: Napster), its relationship to the DMCA, and possible future twists with parties circumventing "protection measures" like encryption.
Recent developments have been interesting to follow, but I'm wondering whether the future is going to get scarier and more worrisome, or level out and become more reasonable... and your contribution to this question is most welcome.
The next two years are, as usual, harder to predict than the next ten, so I'll start with the easy, long-term prediction and back into the two-year outlook.
The Analog Millennium Copyright Act is an attempt to graft analog economics onto digital data by decree. This will fail.
The economics of selling information in an analog world have always relied on a piece of indirection: you pay for the object -- the magazine, book, tape, whatever -- in order to get the information stored in the object. In the digital world, information and storage are decoupled. You can get to information without using objects as intermediaries.
We have seen several industries come under this kind of pressure, where the switch from analog to digital distribution blows up the old business models, and the general trend seems to be a parallel transition from selling products to selling services.
The obvious comparison is with the software industry, which used to rely on selling physical objects -- CDs -- in just the way the recording industry does today. As the internet made redistribution of software easier, the software makers worked themselves up into a froth, launching the usual legal and technological attempts to prevent any change to the object-based sales model -- dongles, UCITA, even ad campaigns asking people to report one another or their employers.
Even as those rear-guard actions play out, though, the industry as a whole is moving to a software-as-service model. Even Microsoft, which has arguably benefited more than any other software company from per-unit sales, is talking up its ASP strategy.
You can see this attempt to make digital music behave like objects in digital rights management schemes like SDMI. The recording industry is desperate to bring all the inconvenience of the physical world into cyberspace, and then ask music lovers to pay the same for music on- and offline, because it's costing them so much to make things so inconvenient.
Ten years from now, this nonsense will have failed. Embracing digital data for what it does well will so expand the size of the market, by lowering expensive physical barriers, that the music industry will be making *more* money than it does today.
Two years from now is a different, and muddier, story. The interesting fight today is between the music industry and the recording industry.
The music industry is that part of the business concerned with music, everyone from the artists themselves to the A&R department to ASCAP. The recording industry, on the other hand, is concerned with recording technology and its economic effects, principally per-unit pricing. The avatar of the recording industry is the RIAA; if the RIAA represented the music industry, they'd call it the MIAA.
Like all bureaucratic entities, whatever the RIAA is concerned with on paper, it is mainly concerned with its own survival. Even if new technology allows for an expansion of the music industry, if it does so at the expense of people paying per unit for music, the RIAA will be against it. Anything that looks like a subscription scheme undermines the RIAA's reason for existing, namely representing the people who collect all that revenue derived from linking music (the information) to objects (the physical CD). If that link disappears, the RIAA disappears, and groups like ASCAP and BMI, or some altogether new entity, will step in to take the RIAA's place.
This will not happen in the next two years, however, so what we will have in 2003 will look a lot like the software industry did in 1997, where dongles existed side by side with early experiments in software-as-service, and Egghead's closing of its brick and mortar stores was still in the future.
As we know from both democracy and free markets, systems that channel selfish behavior work better than systems that try to deflect it. I put Napster third on a list of uprisings of massive, uncoordinated civil disobedience in the last 100 years, after the 55 mph speed limit and Prohibition. The most important thing for users to do, as we weather the transformation of one industry after another in the next decade, is to consistently and loudly insist that the economic efficiencies offered by digital data and networks never be sacrificed to preserve outmoded industry practices.
2) Does New Media need journalism training?
Does the New Media need journalism training?
In the bad old days, journalists almost always got some training before they were unleashed on the public. Boring things like getting the complete story, verifying rumors before publishing, disclosing conflicts of interest, journalistic integrity, and a whole host of other things (including spelling and grammar).
Due to the rise of New Media, anyone with a web page can be a journalist, regardless of their qualifications. The Drudge Report is the canonical example. Matt Drudge can post any rumor he hears, without having to verify it.
Now, I'm not sure I want to go back to the old way of journalism, but should New Media editors at least try to follow some basic journalistic ethics and principles?
If they should, how should they try to implement them?
I have two answers as to whether we need better journalistic standards on the net: "No", and "Yes".
The "No" answer looks like this: the democratization of opinion on the net is easily the most important thing to happen to journalism since TV, and the most positive thing to happen to journalism since radio. If I could wave a magic wand and get all of that plus higher standards, I might do it.
However, magic wands are scarce in my corner of the internet, and the only non-magic ways I can think of to effect such a thing would require the creation of a journalistic standards body, equipped with the ability to make value judgments and the right to determine who is in and out of the club of acceptable standards. To be effective, this would have to be coupled with some sort of punishment for those who break these new rules, even if only by withholding the "Online Journalistic Seal of Approval".
The damage this would do to the present flowering of online journalism and opinion would be incalculable. It is precisely the fact that no third party -- no printer, no post office, not even the FCC -- needs to be involved in a public conversation between reader and writer that makes the internet's effect on media so important.
This huge expansion of the reach of the individual has the side effect (how could it be any other way?) of extending *both* tails of the bell curve -- in addition to the obvious increase in the bulge of mediocre writing, we get more dreck. As a consolation prize, though, we get the kind of thing you could not get in any other medium -- blogger, mcsweeneys.net, oldmanmurray.com, Slashdot itself.
These things are not unconnected. Slashdot and the Drudge Report are twin sons of different mothers. Taco and Drudge showed up at about the same time, and there is no objective standard by which one of them should have the "right" to run a web site but not the other.
And this brings me to the "Yes" answer. Yes, we need higher standards, but if we are to get there without coercion and official designations (read: governmental regulation), it has to come through pressure exerted by the audience themselves.
Your complaint about Drudge rings a bit hollow to me. Drudge is surely still peddling whatever he gets, but he is no longer in the public eye. His moment of fame came when the information he had was both accurate and of vital interest. Now, though, he has been largely sidelined, because his political views are more important to him than his journalistic probity -- the interesting stories he has are not true, and the true stories he has are not interesting.
Furthermore, if he ever *does* get another interesting story, we'll tune right back in. The public is better at sorting the good from the bad than you think.
That Drudge *wasn't* able to parlay l'affaire Lewinsky into a permanent place on the national stage seems to me to be an argument for continuing to let the users sort things out for themselves, rather than assuming they are too stupid and jumping into the driver's seat on their behalf. (But see question 4 for an argument about ways of organizing user input to maximize its force.)
3) Computers and humans.
The way that people interact and exchange information over the internet has been one of your favourite topics.
What do you think the "information age" is doing to humans' ability to socialize and interact? With the advent of television in the 1950s, there was criticism that television eroded communities by keeping people in their homes. Right now, the so-called "MTV Generation" allegedly has the attention span of a 30-second sound bite.
Many phenomena have been attributed to this. Some believe that because so much time is spent in front of televisions, alone, the population is segregated and isolated, unable to work as a community. Others would argue that television technology has merely expanded the community to a national or international level. Still others would counter that this monoculture is dangerous and allows our cultural identity to stagnate.
In the late 90's and now in the early parts of the "new millennium" we've seen an increasing amount of information being transacted over the internet. Does the web as we know it enhance our ability to communicate, or does it further isolate us?
Does a more distributed, decentralized peer-to-peer model of information exchange promise a type of interaction more natural to humans, or should we be looking for strategies to prevent further information glut and saturation?
Oh my. Big questions.
Computer-mediated communication is a vast area of inquiry, and not one I cover closely, so rather than trying to talk about the field from a professional point of view I'm going to answer this one based on personal experience. Take it with a grain of salt.
In 1993, during what I can now thankfully call the nadir of my personal and professional life, I discovered the internet, and essentially disappeared into it. For someone who had spent a lot of time thinking about language and community, the net was like a gift, having something this interesting to think about. At a time when I was living an otherwise flattened existence, the daily challenge of trying to understand the net gave me something to live for.
None of my friends at the time was online, so I made a second set of friends, mainly on panix, ECHO, and alt.folklore.urban. During those years, I essentially lived in two worlds, with the networked world seeming realer to me than the real world. I often had a daydream in which the hum of the internet just grazed the top of my skull; it felt as if by standing on tip-toe I should be able to press my brain directly into the network. Going 24 hours without jacking in made me physically ill.
Therefore, when I face questions like "Does the web as we know it enhance our ability to communicate, or does it further isolate us?", I have to ask, why pick? It seems to do both, or at least it did for me.
I have no trouble saying the internet saved my life, but I also have no trouble saying that I was for a time addicted to it. (The addiction passed like a fever. I awoke one morning a few years ago, and *didn't need to check my email*, a craving whose absence rattled me a bit, as getting online with the first crack of consciousness had for years been a more reliable feature of my mornings than either eating or getting dressed.)
I was addicted to communication, to a very peculiar kind of social congress that put a huge premium on verbal acuity while conveying none of the emotional cues you pick up from other people when you are in the same physical space with them. Anyone who has ever gotten to know someone in email and later met them in the real world understands this difference instinctively: you read someone's email differently after you've spent some time with them offline as well.
So the web can paradoxically enhance our ability to communicate *and* further isolate us. The real danger, it seems to me, is in believing that it can only do one or the other.
As for decentralization, David Stutz says that peer-to-peer is succeeding because it maps better to the way people actually live their lives than client-server does. I do think that p2p strategies will open up new kinds of communications that don't readily map to the network we have today, especially as it concerns group (as opposed to global) publishing.
Again, though, peer-to-peer won't save us from needing strategies for dealing with info-saturation; if anything, the ability to commit more of our thoughts to the web will *raise* the pressure to find novel ways of filtering the important from the trivial and the interesting from the dull.
4) Do we hold successful New Media outlets to higher standards?
by typical geek
Do we hold New Media outlets that have made it (millions of page views per month) to higher ethical standards than someone running a homepage on an ISP dial up account?
There seems to be an attitude at some New Media outlets of "Hey, it's my site and I'm doing what I want with it!"
Now, I can understand this attitude if it's a part-time site, with maybe 20 page views a day. But when you grow into a leading New Media outlet with 30 million page views a month, shouldn't this attitude change?
William Randolph Hearst was accused of starting the Spanish-American War to increase circulation for his newspaper, and this was rightly decried. You can safely stand on a street corner and advocate war, but when you have a bully pulpit of millions of readers, you should be expected to show more accountability and responsibility. Sadly, I'm not always seeing this at New Media outlets.
Should New Media outlets be more aware of journalistic integrity? Now, at Slashdot Rob Malda almost always lets us know about his VA Linux holdings when he writes about VA Linux. He also posts stories about VA Linux's financial problems, to his credit. Should it be a policy that any New Media editor mention all conflicts of interest? Do they realize that with the ease of transferring cash, and the ease of faking identities, New Media editors need to be cleaner than Caesar's wife?
Right now, journalistic standards are enforced by the cumulative effect of individual users' choices of which sites to frequent and which to avoid (viz. the case of Slashdot and the Drudge Report I referred to in the second question).
It seems to me that in the case of the VA stock, we are not only holding Slashdot to higher standards because it's popular; Slashdot became popular in part because Rob, Jeff, and Co are forthcoming about their points of view. Slashdot's popularity may be both cause and effect of the editors' ethical standards, in other words, because reputation is the best way to keep readers. (Thanks to Lucas Gonze for pointing out the relationship between reputation management and publishing.)
When you ask "should it be a policy that...", though, I assume that this approach strikes you as unsatisfactory. My question about such a policy is this: created by whom? Policed by whom? Can you think of any group you'd trust with the power to make determinations concerning who is and is not cleaner than Caesar's wife? Can you even think of any group you'd trust with the responsibility of defining which sites were "news outlets" and which were merely "journals of opinion"?
The only group that I would trust to police Slashdot is the Slashdot users themselves. At the risk of being labeled a fanboy, I'd like to point to the History section of the moderator guidelines, which walks through the "Who will guard the guardians?" problem of moderation, going from no moderation, through hand-picked few, through "400 Lucky Winners" (which for me conjures up a curious mix of Lady Astor and the Charge of the Light Brigade) through to the present system.
When you create a small group whose role is to be representative of the whole, the smaller group often starts acting on its own behalf rather than for the public good. This is a well understood political problem, and representative democracy plus checks and balances has been the usual solution, because direct involvement of all the affected individuals has been too hard to arrange.
On the net, however, you can actually have a small group represent the larger whole, without making the members of the small group into a special interest, by rotating the membership of the smaller group in and out of the larger pool.
This is how juries work, this is how Slashdot works, this is even how collaborative filtering works, and it is a solution that is generally available on the internet, because the death of distance means we can create ad hoc groups to monitor behavior with little regard to geography.
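The rotation idea can be sketched in a few lines of code. This is a toy illustration with made-up names, drawing panels jury-style so that no fixed clique accumulates power; it is not Slashdot's actual moderation algorithm:

```python
import random

def draw_panel(pool, k, served):
    """Draw k members who haven't served recently. When everyone in the
    pool has had a turn, reset the rotation and start over."""
    eligible = [m for m in pool if m not in served]
    if len(eligible) < k:
        served.clear()
        eligible = list(pool)
    panel = random.sample(eligible, k)  # random draw, like jury selection
    served.update(panel)
    return panel

pool = [f"user{i}" for i in range(400)]  # the "400 Lucky Winners"
served = set()
first = draw_panel(pool, 5, served)
second = draw_panel(pool, 5, served)
assert not set(first) & set(second)  # rotation: no immediate repeat service
```

Because membership cycles through the whole pool, the small group never hardens into a special interest: the moderators of today are the moderated of tomorrow.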
Any discussion of addressing ethical lapses in the media will turn on specifics, of course -- there are different ethical issues for entertainmentweekly.com and schwab.com -- but in general, the most net-like solution for raising ethical standards is going to be to bypass the notion of third parties and instead to find ways of putting the users' hands directly on the dial.
5) Micropayments
In your article "The Case Against Micropayments" you state the case against micropayments. Has anything in the intervening time changed your mind (i.e. the collapse of content), or do you believe that the fundamentals of micropayments are impossible to achieve? Does your problem with micropayments stem primarily from pay-per-view, or rather from the concept of mandatorily user-supported sites (i.e. extrapolating micropayments to include subscriptions or content packs)?
The "collapse of content" hasn't changed my mind, because my critique of micropayments has nothing to do with technological barriers and everything to do with human barriers.
In a competitive market, things rarely happen because they are good for producers but not consumers, and this is something micropayment proponents have rarely understood. Many arguments in favor of micropayments hold that they will succeed because they will be good for information providers, even though it is the consumers who have all the money. This "good for the producers" argument is especially hard to support in an environment where the competition is one click away.
So yes, I believe that the fundamentals of micropayments, namely being embraced by users, are impossible to achieve. Do this thought experiment: Assume there are two comparable sources of financial news, one that charges a penny a page, and one that charges a subscription. Other things being equal, which system would _you_ use?
Micropayments could only work in a system where the producers have monopoly control. In a competitive environment, user preference for predictable pricing and a desire to be spared the anxiety of the meter ticking will always make micropayments vulnerable to competition by alternate pricing schemes.
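The thought experiment above can be made concrete with some made-up numbers (the prices and page counts here are purely illustrative, not from any real service). Even when the totals are comparable, the metered scheme imposes hundreds of tiny "is this page worth it?" decisions, and that is the anxiety of the ticking meter:

```python
pages_per_month = 30 * 30        # roughly 30 pages a day (hypothetical usage)
per_page_cents = 1               # penny-a-page micropayment scheme
subscription_cents = 500         # $5/month flat rate (hypothetical)

metered_total = pages_per_month * per_page_cents  # total metered charges, in cents
metered_decisions = pages_per_month               # one purchase decision per page
flat_decisions = 1                                # decide once, then stop thinking

print(f"metered: ${metered_total / 100:.2f} across {metered_decisions} charging decisions")
print(f"flat:    ${subscription_cents / 100:.2f} across {flat_decisions} charging decision")
```

The dollar totals are in the same ballpark, but the metered reader pays a cognitive tax 900 times a month that the subscriber pays once.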
As for pay-per-view vs mandatory, one of the problems with wading into the micropayment tar pit is that there is no one definition of micropayments that covers all uses of that word. My working definition is two-fold:
A micropayment is any system that
a) assumes that some alternative to the present financial infrastructure is required, and
b) assumes that people will be content to participate in systems that generate automatic or all but automatic charges.
I specifically *don't* think QPass and Paypal are micropayments, because they
a) use existing credit card infrastructure rather than creating alternative currency or transactions and
b) interrupt the flow of the transaction in order to get your explicit approval.
I can use QPass to buy an article from the NY Times for $2.50, but I can also buy a used copy of "Hop On Pop" on Amazon for $2.00, and the Amazon interface is *less* intrusive than QPass. Therefore, if QPass is a micropayment system then so is one-click ordering, at which point I have no idea what a micropayment really is.
I think the real death of "micropayments" will be the increased flexibility of traditional financial institutions. The W3C "embedded micropayment" effort is completely stalled, QPass has lowered the price threshold for making purchase decisions into the single dollar range, and yet again (as with debit cards vs digicash "smart" cards in the mid-90s) the existing financial infrastructure will prove adequately flexible to handle charges for any size users are interested in transacting in.
Since the obstacle is the users' unwillingness to transact in penny and sub-penny amounts, this means that, as with smart cards, there won't be any real-world cases to drive micropayment adoption by consumers.
6) Internet Civilization
I think that it is well agreed that the Internet is changing civilization. The slightly tongue in cheek, but serious question is "Changing it into what?"
The spectrum of possible futures ranges from Utopian to a paranoid dystopia that makes 1984 look like a children's tea party. And idealism aside, there is a large class of people who like being sheeple, having all the tough decisions made for them.
So that is the question - what is the Internet changing civilization into?
With the usual hand-waving about how a question this large invites over-broad speculation etc etc, I think the internet is driving the increasing importance of "immaterial culture" that will mostly operate alongside but in some cases displace the material culture we are used to.
Material culture is the present tense version of physical anthropology, where the question "How do we live?" is answered by examining the material facts of our lives: our clothes, food, houses, dwellings, decorations, and so on.
As the internet permeates society, examination of material culture is increasingly inadequate to answer those questions.
By way of example, I am writing this on my wife's computer. The material facts of using this computer -- desk, chair, keyboard, monitor -- are therefore the same for both of us.
The immaterial facts of our use of this computer could not be more different, however. It's safe to say that other than Yahoo, there is no site we both visit. She never opens telnet, I never open WordPerfect. Our experiences on this machine diverge in a way that examination of the physical set up would never reveal.
With the internet, a computer is a door rather than a box, and the worlds it is a door into -- Barney fan sites, auctions of excess steel, political dissidence, chemistry homework -- have to do with the will and interests of the individuals using it, not with the material aspects of the object itself. We are increasingly less bounded by the choices the material culture is offering us, and increasingly expressing our humanity through immaterial choices instead.
I first became aware of the way the human condition could attach itself to non-existent objects when a friend of mine on panix talked the sysadmins into changing her user id in /etc/passwd so she could have a lower number, because uid had become a status symbol, and the lower the number the more pioneering a panixian you were. (Compare karma whoring.)
This change in the direction of immaterial culture is going to catch a lot of people by surprise. I once did an interview with Bazaar about the internet's effect on fashion (an absurd occurrence, given my relaxed sartorial sensibilities), and I began talking about how the public faces we were fashioning for ourselves were increasingly created online. The editor had little use for this line of thought, since, for her readers, fashion == clothes.
But increasingly, fashion != clothes. In the blogger community, people put the effort into designing an interface that fits their public persona that an earlier generation might have put into picking a wardrobe for the same reason. When looking for work, you will probably spend much more time polishing your personal web site than dressing for the interview. When corresponding with someone you're trying to impress, editing and re-editing email takes the place of changing outfits three times.
So culture is increasingly vested in the immaterial choices we make about our lives -- when everyone has access to the network, what you *do* with that access becomes a more powerful act of self-definition than the choices you make about your material culture and your immediate surroundings.
7) Hi, I'm one of those Seattle protesters.
After reading your piece on the WTO, I have a question for you.
What do you think of the Indymedia phenomenon?
Or, more broadly, do you feel that the increasing accessibility of digital cameras and other tools, which lower the cost of putting a strong Web-based newsroom together, might challenge the increasingly corporate system of mainstream news?
Interestingly, you don't mention Indymedia in that article, but we're a collective of people who get equipment out to interested people, to cover the protests on the inside.
We have carried live, streaming news about protests all over the world, including the recent UN climate talks, the WTO, the World Economic Forum, and the march of the Zapatistas to Mexico City.
Although Indymedia started in Seattle, there are IMC bureaus all over the world now.
I think we've done two important things: popularized the "movement against corporate globalization," and created a forum for debate.
The debate you talk about -- between the protesters who want to fix institutions like the WTO and the ones who want to abolish them -- is taking place in the discussion rooms of Indymedia. Check it out!
I didn't know about Indymedia when I wrote that article, though since then I've spent some time talking with Craig Hymson about it.
Indymedia is a cool thing, and I am obviously in favor of anything that further decreases barriers to disseminating news and opinion.
That having been said, I disagree with the idea that there is an increasing corporatization of news. I refuse to be seduced by this comfortingly alarmist view, because I am old enough to know better.
In 1970, there were three sources of televised news in the US.
No points for guessing what language they broadcast in.
In the US today, there are 7 broadcast networks, someplace between half a dozen and several dozen cable news networks depending on how you define news (MTV covers presidential elections; religious channels discuss politics), not to mention Spanish cable channels, Arabic cable channels, Japanese cable channels, Korean cable channels.
All this without even mentioning the Web. Do you have any idea how many news outlets there are on the Web? If you want to see evidence that news is not in fact becoming increasingly corporate, take a look around the site you're asking this on. As they used to say on the Palmolive commercials, you're soaking in it.
That part of the left given to conspiracy theories has been banging on for 30 years about the contraction in the media space while out here in the real world, technology has been tearing the roof off the sucker since the launch of CNN. I lived through the 70s, and no matter how earnestly The Nation approaches the task of drawing all those little charts of media ownership, nothing can make me pretend that things are worse now than they were then.
To take but one example, 30 years ago there was no Spanish language news broadcast in New York City. None. I don't know how old you are, but if you are under 30 I doubt you can even imagine such a thing.
That's what it used to be like. The 70s were the absolute worst period of corporatization of news -- the old "news as a public trust" idea had died, but the lowering of barriers that would democratize media beyond all recognition had not yet begun.
There are certainly lots of corporate media outlets, and there is lots of cross-media ownership and more on the way, but the democratization of media outlets and media access is outstripping the corporate growth by leaps and bounds. Anyone fretting about a contracting media universe while Starbucks is getting ready to put 802.11b networks in its stores just looks like they have no grasp of history. The media landscape we have now is so unbelievably much less corporate than it used to be that it defies description.
8) Long-term solution to content reward needed
by Tony Shepps
I agree with you that micropayments are not coming any time soon. But I worry that the net is not accurately communicating its need for quality content -- and its willingness to pay for same.
Amongst any group of users, my bet is that you'll find several who would pay for improvements in the quality and nature of the information they receive. Obviously there is great value in correct and timely information. In some cases, it is nothing short of a life or death matter. In most cases it simply keeps us a little better informed.
I don't understand, therefore, why none of your proposed solutions (aggregation, subscription, subsidy) have evolved yet. Every site that I've seen try subscription has given up (except one: the WSJ). And everyone agrees that subsidy in the form of advertising is not going to fly.
Many high-quality sites that deserve to survive are having a tough time of it, and it's not for lack of readership. The Onion hasn't created any multi-millionaires; it should have. Salon has had layoffs. The Straight Dope should make more money on its website than on its books. User Friendly should not have to resort to dead tree publishing or syndication.
In short, while Fucked Company celebrates the death of the crappy sites and stupid business plans, the quality sites are in danger of dying as well. What's gone wrong? Why haven't any models come about that support what people really want?
As a matter of discipline, I try not to use the word "should" in my writing, as it entices me to think like a cop or a priest instead of an analyst, so I can't really either agree or disagree with you about whether the Onion "should" have created any multi-millionaires, or which sites "deserve" to survive.
From my point of view, there are lots of sites using both aggregation and subsidy. UGO and Andover, to name but two, aggregate sites, sites that themselves often aggregate content from different sources, and then subsidize those sites with advertising. The one method of the three that is not in wide use is subscription, for all the economic difficulties presented by digital data I mentioned in the first question.
But by far the biggest single effect of the net on content is *user* subsidy, which is to say amateur subsidy. People who run web logs, people who run mailing lists, even people who submit stories or posts to Slashdot or Plastic, are creating subsidized content by participating for the love of the thing (the literal sense of amateur) rather than for money.
We have just lived through a period in which, by lowering the barriers to creating a media outlet, it was assumed that we were witnessing the mass professionalization of media.
But mass professionalization is an oxymoron. The mistake I think we've made is to assume that we need to find ways of increasing revenues so we can all go pro. What we are witnessing is the mass amateurization of media, because the net has revolutionized media in the other direction, reducing the cost of being a media outlet to the point where many, *many* more people can participate, and with peer-to-peer models offloading even more of the costs to the edges of the network (viz. Napster), the lowering of the barriers still has a ways to go.
The unfortunate fact of this lowering of barriers is that an increase in amateur participation puts further pressure on online media outlets hoping to go pro. The collapse of the content sites is just beginning, and there is no short-term fix for that. Advertising revenue will eventually be able to support good sites, but it will take quite some time before both the models and the demand are in place.
You ask "Why haven't any models come about that support what people really want?" My answer is that what people really want is high quality content for free, and for a half-dozen years, the net has been incredibly good at delivering on that desire. Now that the "stock price as business model" plan has failed, most of the sites built in that era will disappear.
In the next 6 months or so, the only professional sites that will be able to survive will either find some sort of patronage (including NPR/PBS-style subsidy from users, as Evan did when he needed servers for Blogger) or get bought by a company that is willing to run a media outlet as a loss leader.
Some of the ones that can't stay professional will go back to being labors of love, as just happened to Old Man Murray.
The rest -- most -- will die.
I wish I believed there was some magic bullet. I don't. As John Maynard Keynes said, "The market can remain irrational longer than you can remain solvent."
9) Evolution and Good Usability
In your open letter to Jakob Nielsen you say that evolution must decide what's good design. I agree that you can't force people into good design, but has evolution done us any good so far?
For example, I can read web pages on an ordinary mobile phone. I like that; it's my own little hack, and it's far from good, but it does the job. Now, if web sites had been designed for usability, we would all be able to read the web on our mobile phones. Another example: speech browsers. I'd like a box to plug into my hifi so I can relax in my best chair talking to my browser, having it read pages to me, play music, and so on. Both these things have been possible for years, but they require good, usable pages.
It seems to me that evolution isn't the fastest route to good design, mainly because people don't know what they never see. People would like to have the web on their phones, but they don't know that all that is needed is for web designers to do their job properly, and so there is no evolutionary pressure on designers to do their job properly.
OK, so to the question, how do you want to create this pressure?
You are absolutely right, evolution isn't the fastest method of getting good design. I am uninterested in good design, however. I really only care about great design, and about the environment of experimentation that that requires.
The question I have for you is "Is good design good enough?" Because if it's great design you're after, evolution is the way to get there. I don't mean this as a rhetorical question, either. For many people, Nielsen among them, raising the minimum quality of design is the essential task, even if it means curtailing the freedom that makes for the most interesting experiments. So the question you have to ask yourself is: Would I accept higher average quality even if it meant less experimentation?
For me, the answer is No. Freedom is freedom, and the only way to prevent people from making bad stuff is to come up with some a priori definition of "good", and then enforce that. Not only do I not trust myself to be smart enough to know in advance what "good" looks like, I don't trust *anyone* to be that smart. Instead, I trust myself to know what's right for me: the sites I like go in the bookmark list, and the ones I don't, don't. I feel no need to punish people who do work I don't like in any way other than not looking at their work.
The wireless example you bring up is a perfect illustration. By and large, people *don't* like to have the Web on their phones. "Fewer than one in 50 UK adults are using mobile phones to access internet services despite millions of pounds spent on advertising and subsidising handsets, according to new research." http://globalarchive.ft.com/globalarchive/articles.html?id=001109003316
Web sites are written for a user experience quite different from that offered by wireless devices, and some work has to go into modifying or converting content to make it make sense on the phone. To make this work worthwhile, you need both users and site owners to be motivated.
The only place anything like a wireless Web is taking off is in Japan, where DoCoMo quite sensibly made it possible to write for the phone interface in a sub-set of HTML, rather than in WML. The result is an absolute explosion of third-party content, because designers are motivated to get onto the devices. Where people are using their phones for the Web, and where designers have low barriers to entry in reaching those people, evolution is working -- there is competition and rising quality. Where people are not using their phones, and where designers can't easily create or offer new services, there is no competition and no change in quality. There is also no way to force designers to design for a format they have no interest in.
And this, I think, is the core of my argument. The explosion of the Web was created by the adoption of HTML by amateurs -- anyone can write a web page (and frequently does). You cannot simultaneously have mass adoption and rigor, as evidenced by the failure of all attempts to make "a programming language so simple anyone can do it", or the collapse of the "HTML should be a purely semantic language" argument in 1995.
In large, unmanaged systems, only efforts that achieve partial results when partially implemented can survive, which militates against top-down approaches. So when you say "...there is no evolutionary pressure for designers to do their job properly," I have to ask: Who gets to say what is proper? You? Jakob Nielsen? Some newly minted Pope of Proper Design?
You are right that people don't know what they never see, but this is not their problem -- all web designers were first web users. Why should designers design for wireless if no one is offering them a good wireless experience as users?
If you want to know why designers aren't spending the cycles to convert or redesign for wireless, go ask Nokia and Sprint why they adopted WAP. As DoCoMo has shown us, when you make a good experience for the user, the designers follow.
10) An open garden?
The big players in the interactive television game are all building their own walled gardens. What kind of effort would it take for interactive television to evolve into a more web-like open garden model? Which new media players could fight the traditional broadcasters for a place on the screen in the living room?
I once saw the tail end of this fairly terrible WWII movie in which a demolition expert promises to blow up a dam using a tiny amount of explosive, and when the explosion goes off and the dam doesn't fall, his colleagues look at him, crestfallen, and he says "Give it time. It'll blow." There's a cutaway shot to water trickling through the crack caused by the little explosion, which slowly turns into a gush, then a flood, and then the dam suddenly collapses.
Email is that tiny bit of explosive.
I have seen many (*many*) companies have the intuition that the internet is a great thing with only two teensy-weensy problems: they don't own it, and it's too easy for their competitors to use it. They then all have the same brainstorm: "Hey, I know! Let's build something that's just like the internet, except we'll control the whole thing! Then we'll be rich!"
The first time I saw anybody try this was Prodigy in the early 90s, when they were trying to force their users to stop emailing one another and get back to shopping. The most recent attempt was WAP, with stops along the way for AOL, Compuserve, eWorld, MSN, et al, and the thing they have always run aground on was email. People, alas for one-way media outlets, like to communicate with one another. A lot.
Once email arrives, people have very little patience for walled gardens, and less for mapping arbitrary technological distinctions onto live human relationships. "What do you mean I can't send a message to my mother because she has a Sony Interactive TV and I have a Panasonic?!"
Email is the gateway drug of the internet, because once email is in place, people begin to expect full interoperability. In the Olden Tymes (pre-IMG tag, roughly) AOL, Compuserve, and their cousins spent a lot of time patiently explaining to their paying customers why they couldn't use ftp servers or post to usenet, even as 19-year-olds procrastinating on their CS homework were freaking everybody out by writing email-ftp and email-nntp gateways in their spare time.
If this is the sort of fight you like to tune into, I can recommend no business battle more entertaining than the ongoing marginalization of WAP. The brokenness of the WAP protocol came about because the telcos were damned if they were going to allow anything like interoperability or freedom to weaken their hammerlock on paying customers, so they deluded themselves into believing that what users were crying out for was expensive new ways to get headline news.
In fact, in a news flash that seems to have caught the entire telecommunications industry by surprise, people who buy mobile phones often like to communicate with one another. Had this not been such an absolutely unpredictable occurrence, maybe somebody at the WAP consortium could have predicted that when you add text to the phone, users might like to communicate with one another via text.
Access to email is the #1 feature customers want in a wireless text device (duh), and all those wireless auctions where the telcos spent 22 gajillion Zlotys to own the customer now look like a giant shell game, because the users don't want to get headline news. They want to talk to one another, and they will switch carriers until they are allowed to. Email is the thin end of the interoperability wedge, and this will be true of interactive TV as well.
The biggest risk to this rosy scenario is government tolerance of monopolies. Conventional wisdom in the 70s was that cable access was only going to be possible if the cable companies were given local monopolies, since that yucky old competition would keep people from investing in infrastructure. The normal split between carrier and content owner was thus eroded, and we live with the legacy of that decision to this day.
In countries where there are only one or two providers of interactive television, therefore, there might not be enough competition to force owners of interactive TV services to offer full, interoperable email. This is analogous to the situation in countries where the national monopoly telcos kept per-minute pricing or per-byte charges for internet access, because in a monopoly, the overwhelming customer preference for flat-rate pricing has no way to make itself known. In those places (keep an eye especially on the UK), email might not be enough to force interoperability on interactive TV, at least in the short term.
11) Worthwhile to scratch and start over?
I read your "DNS System is Coming Apart At the Seams" article with pointed interest. It is a topic I frequently harp on in private conversations -- the lack of a human-focused network and network protocols.
I've puzzled over the implications myself, but I'd be interested to hear your opinion -- Is it worthwhile to simply scratch what we have and begin anew, basing the new decisions made on more current assumptions?
For example, hardware is cheap and reliable (as compared to 20 years ago), bandwidth is cheap and getting cheaper. Should the networking protocols reflect this new reality?
(I "author modded" this one up. [g])
From my point of view, we've *already* scratched what we had and started anew, and that happened when ICQ launched.
ICQ was the first group that I know of to figure out that if DNS couldn't handle unpredictable connectivity and unstable IP addresses, then the solution was *not* to wait for fiber-to-the-curb and IPv6; the solution was to bypass DNS, so they simply provided alternate namespaces built on top of IPv4.
whois is a decade and a half old and shows ~25M addresses. Napster is going on two and has twice that. ICQ and the other IM services are 4 years old and have something like *6* times that many addresses. Non-DNS addresses are growing far faster than DNS addresses, in part because DNS was never modified to take into account the new reality of PC connectivity and impermanent IP addresses, and in part because setting up a domain name is a huge, huge hassle compared to getting a fixed, human-readable name with Napster.
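The alternate-namespace idea can be sketched in a few lines. This is a toy illustration, not any actual ICQ or Napster protocol, and all the names in it (`Registry`, `register`, `lookup`) are hypothetical: instead of waiting for DNS to cope with impermanent addresses, a service keeps its own directory that clients update every time they connect with whatever IP they happen to have.

```python
class Registry:
    """Toy central directory mapping human-readable names to current IPs,
    in the spirit of an IM or file-sharing namespace layered over IPv4."""

    def __init__(self):
        self._online = {}  # name -> (ip, port)

    def register(self, name, ip, port):
        # Called at sign-on; overwrites any stale address, so a user who
        # reconnects via a new dial-up IP is immediately findable again.
        self._online[name] = (ip, port)

    def unregister(self, name):
        # Called at sign-off (or on timeout).
        self._online.pop(name, None)

    def lookup(self, name):
        # Returns the current address, or None if the user is offline --
        # a notion DNS, built for permanently connected hosts, never had.
        return self._online.get(name)


registry = Registry()
registry.register("alice", "10.1.2.3", 5190)
print(registry.lookup("alice"))   # current address while alice is online
registry.unregister("alice")
print(registry.lookup("alice"))   # None: "offline", not "host not found"
```

The design point is that the hard part of DNS (global, delegated, slowly propagating records) is simply skipped: one service owns the whole namespace, so names are free, instant, and can track an address that changes with every connection.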
So DNS is no longer the only game in town, and never will be again.
If by network protocols you mean redoing IP, however, it'll never happen. For reasons I listed above, I have no professional opinion about whether such a thing "should" happen, but I know it never will happen. It'll be enough to get IPv6 rolled out over the next five years.