Ray Kurzweil on Multi Technology

Futurist Ray Kurzweil has suggested in an interview that we will be using a virtual reality network almost exactly like the one I proposed in “Infoquake” as soon as the late 2020s.

State of Technological Dissatisfaction

The human condition is this: we’re restless and dissatisfied, and that drives our constant technological innovation. Which explains why I’m so irritated I can’t sync my Firefox profiles between computers without hassle.

Broken Technology

The RSS feed for this blog seems to have broken when I posted the new design. When I go to my iGoogle page, the last article for this blog is still the entry from April 14, “Infoquake” Reviewed on Fast Forward. Which means there are certainly a number of readers who have no idea that I’ve redesigned the website, and who will just assume I’ve fallen into a crack in the Earth somewhere until they decide to come browsing this way again. This happened the last time I redesigned the site too.

[Image: broken computer monitor in the woods]

I’m unclear why this has happened. The URLs for the feeds should still be in the same place. All of the articles that were in the old feed are still in the new feed. I did mess around in the database and fix a number of GUIDs (Globally Unique Identifiers, for those non-geeks in the audience) that were pointing to a temporary address. But that should only have affected your feed reader’s ability to mark the entry as read or not read.
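
For the geeks in the audience, the GUID business works roughly like this. This is just a minimal sketch of how feed readers typically key entries by their `<guid>` element — it's not any particular aggregator's actual code, and the feed URLs here are made up for illustration:

```python
# Minimal sketch (not any real aggregator's code) of how feed readers
# typically key entries by <guid>: change the GUID and the entry looks
# brand new, but the subscription itself shouldn't break.
import xml.etree.ElementTree as ET

def entry_guids(rss_xml):
    """Return the set of <guid> values in an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return {item.findtext("guid") for item in root.iter("item")}

# Hypothetical before/after versions of the same feed entry.
OLD_FEED = """<rss version="2.0"><channel>
  <item><title>Post A</title><guid>http://temp.example.com/?p=1</guid></item>
</channel></rss>"""

NEW_FEED = """<rss version="2.0"><channel>
  <item><title>Post A</title><guid>http://www.example.com/?p=1</guid></item>
</channel></rss>"""

seen = entry_guids(OLD_FEED)
fresh = entry_guids(NEW_FEED) - seen
print(fresh)  # the repaired GUID shows up as an "unseen" entry
```

Which is to say: fixing those temporary-address GUIDs should, at worst, make old entries reappear as unread — it shouldn't make the feed stop updating entirely.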

At least you can delete and re-add the RSS feed to your feed reader. The syndication for my Amazon blog broke altogether several months ago, and my message to the Amazon technical support staff seems to have fallen into a crack in the Earth somewhere. Now I’m stuck adding new entries to my Amazon blog by hand.

Why is so much technology so goddamn fragile?

I joke about this all the time with my web programming customers. Chances are that if you see something drastically wrong with the website I’m managing — layout all fucked up, images floating all over the place, everything completely unreadable — it’s the fault of a single misplaced comma somewhere. Other industries don’t have this problem. I mean, if you’ve got a single board nailed crooked in your house, the whole thing doesn’t fall to pieces.

Read more

Will the Novel Die?

I can’t find any current piece of journalism to use as a springboard for asking whether the novel will die. But considering that the question gets asked every 14 seconds somewhere on the blogosphere, I’m not going to worry. Just follow the trail of rent garments and gnashed teeth and you’ll find someone blathering about it. The question’s on my mind this morning, so that’s good enough for me.

Will the novel die? I won’t keep you in suspense: Yes, the novel will die. It might not happen in your lifetime. But yes, I can say unequivocally that the novel will eventually breathe its last and lie down contentedly in the grave of dead art forms. I’ll be very conservative and estimate 50 years.

And you know what? It’s not that big a deal.

Ever since the advent of television, people have predicted the demise of the novel, and other people have smugly sat back and declared that since it hasn’t happened yet, it won’t happen at all. But I think a lot of these defenders of the novel have a fundamental misunderstanding of what a novel is, not to mention a fundamental misconception of its importance.

First off, we have to consider the question of what it means to be a dead medium. A dead medium is simply one which does not produce a significant number of new works of art. When a medium of expression dies, that doesn’t mean that the jackbooted Art Police storm into your house in the middle of the night to burn every instance of it they can find. Life ain’t Fahrenheit 451. If the last novel rolls off the printing press tomorrow at 9 a.m., we’ll still have hundreds of millions of novels lying around to enjoy until they crumble into dust. And unlike, say, the 8-track tape or the HD-DVD, there’s no specialized equipment necessary for reading novels.

Nor do the Art Police threaten anyone with imprisonment who dares to create art in a dead medium. Vinyl is a dead medium for music, and yet there are still people producing vinyl records. Polka is a dead art form, and yet you can still find people not named Weird Al Yankovic creating polka. Given the importance of the novel to Western civilization, I’m sure that printers will continue pumping the things out in special limited editions long after the masses have stopped buying them in mass quantities.

You might think that I’m mixing up the terms medium and form here. The medium of the novel is that 8″ x 12″ hunk of pulped wood, while the form of the novel is the 120,000 words of prose that get inked onto the surface. But the point I’m trying to make here (as Frank Lloyd Wright and Marshall McLuhan made long before me) is that those two things are inextricably tied together. The medium of the novel is its form.

We haven’t always had novels. No, in fact, while recorded human history has been going on for five thousand years now (depending on how you define it), the novel has been around for less than five hundred (depending on how you define it). Socrates, Plato, and Aristotle never read a single novel in their lives; I don’t think Shakespeare could have read more than a handful of them.

The fact of the matter is that the novel itself is an art form that evolved to take advantage of a certain new technology, namely the printing press. Why do books tend to be no larger than around 8″ x 12″? Because that’s about as large as you can make a book and still be able to hold it comfortably in your hands and transport it from place to place. Why does the print tend to be around a point size of 12? Because that’s about as small as you can make text and still have it be readable at arm’s length. Take those limitations and you’ll find that you can’t easily pack more than 200,000 words into a single novel.
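The back-of-the-envelope arithmetic goes something like this. Every number here is a rough assumption I'm making for illustration — not a publishing-industry standard — but the orders of magnitude hold up:

```python
# Rough back-of-the-envelope figures; every number is an assumption
# for illustration, not a typesetting or publishing standard.
words_per_line = 12    # ~12-point type across a hand-holdable text block
lines_per_page = 38    # single-spaced lines on a page of readable height
pages_per_book = 450   # about as thick as a book you can comfortably hold

words_per_page = words_per_line * lines_per_page   # ~450 words per page
max_words = words_per_page * pages_per_book        # ~200,000 words total

print(words_per_page, max_words)
```

Tweak the assumptions however you like; you still land in the same neighborhood of 200,000 words before the physical object stops being a book you can hold.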

So the novel is, in fact, a device that’s both created by and limited by certain factors of human physiology. These same limitations govern any art form. Ever wonder why most films are less than 180 minutes in length? There are certain issues surrounding the economics of movie theater chains and the technical specs of film projectors, but the real reason is even simpler. 180 minutes is about the amount of time that human beings can comfortably sit and pay attention to a film without having to either eat or hit the bathroom. Tack on an intermission or two and you can extend that timeframe for a while. But until we’ve got gastrointestinal and neurological programming that allows us to drastically extend the amount of time between bathroom breaks and naps, you’re never going to see, say, a 26-hour movie.

Read more

Building the Perfect User Interface (Part 3)

In part 1 of this article, I made a quick and handy definition of user interface: Given technology as a black box, user interface is how you tell the black box what you want it to do. In part 2, I listed some things wrong with the current state of user interface, using Google as a prime example.

So we clearly haven’t yet mastered the science of user interface here in the 21st century. But what is it we’re striving towards? What’s the perfect user interface? In, say, a thousand years, when we have unlimited computing power and unlimited energy (like the characters of my novels Infoquake and MultiReal), what kinds of user interface will we be using?

[Image: Apple iMac]

Let’s take the question one necessary step further: do we really need user interface at all? Or are we evolving toward the point where intelligent tools automatically understand what we’re trying to do? In a thousand years, will the concept of giving commands be obsolete?

Software developers are taking the first tentative steps in that direction now. Apple’s Steve Jobs has always taken that “benevolent dictator” approach: we’ll decide what you, the user, need to handle, and the machine will just automatically handle the rest. Take disk defragmentation, a software task that only the wonkiest of technowonks has any interest in controlling. There isn’t any standard disk defragmenter for Macs, but that’s not because Mac hard disks never need defragmenting. OS X simply does it for you behind the scenes, as this article on the Apple website makes clear.

Microsoft is moving in this direction too. One of the advantages that Windows users have historically held over Mac users is the fact that it’s generally easier to get under the hood and tweak the gears that make the system work. But that’s going away. Not only because OS X has brought command-line tweaking to the Mac, but because Vista is taking away a lot of tweakability from Windows. Disk defragmentation under Vista is a simple on-off proposition; flip it on, and the OS will handle it as needed. Likewise, throughout the operating system, interfaces that were once cluttered with hierarchical menus and interactive dialog boxes are giving way to much smaller lists of context-sensitive tasks. (For more of my thoughts on this, see old blog posts Don’t Worry, Vista Will Handle It and Look Ma… No Program Menus!)

It’s the same long-term trajectory of user interface we’ve seen in automobiles. Look at the user interface for the Model T (pictured, below; original photo, with explanations and more detail, here). Most modern automobiles have reduced this to a standard set of four controls — the gas, the brake, the steering wheel, and the gear shift. It’s not that the car doesn’t still need all those functions; it’s that the car now handles them itself, and they’re no longer exposed to the end user. If you believe the so-called experts, we’ll all be zipping around in self-driving robot cars within a generation.

[Image: Ford Model T controls]

Follow this trend several hundred years, and where does it lead? I talked previously about elevators that automatically know which floor you’re going to via RFID chips in your apartment keys. Why couldn’t that work elsewhere? Maybe you’ll pull into the Starbucks parking lot and find your usual soy milk decaf latte waiting when you get up to the counter. Maybe the refrigerator will automatically order more eggs from the store when you take the last two out. Maybe the polling station will know that you’re a member of the Christian Coalition and have a ballot all queued up with Mike Huckabee’s name checked when you get up to the voting booth.

There’s something very unsettling about these scenarios, and it’s not just the potential privacy hazards. We humans want to be in control of our environment; we instinctively resist environments that control us. Not only that, but we quickly grow bored with environments that coddle us. Humans are designed for dynamism, dissatisfaction, and change; despite the stereotype of modern man as couch potato, as a species we don’t handle stasis well.

So we like to be in control of our surroundings. But how much of this control is just feel-good illusion? When you order a hamburger at Burger King, sure, they’ll make it your way — as long as “your way” only involves their nine predefined toppings. And when you ask for lettuce, you can’t control how much, or whether they use shredded iceberg or delicately layered romaine, or whether it comes from West Virginia or Peru or Ecuador. Burger King’s real slogan should be “Have It Your Way, As Long As Your Way Falls Within the Narrow Parameters of Our Way.”

Read more

Building the Perfect User Interface (Part 2)

(Read Building the Perfect User Interface, Part 1.)

In my first ramble about user interface, I used the toaster as an example of something that is erroneously thought to have a perfect user interface. Perhaps a more apropos example for most techies is the Internet search engine.

Think of any piece of information you’d like to know. Who was the king of France in 1425? What’s the address and occupation of your best friend from junior high school? How many barrels of oil does Venezuela produce every day? Chances are, that piece of information is sitting on one of the trillions of web pages cached in Google’s databases, and it’s accessible from your web browser right this instant.

[Image: “Google Is a Giant Robot” illustration]

You just have to figure out how to get to it — and Google’s job is to bring it to you in as few steps as possible. It’s all a question of interface, and that’s why user interface has been Google’s main preoccupation since day one.

It might seem the model of simplicity to click in a box, type in a search term, and click a button to get your results. But the Google model of searching is still an imperfect process at best. You may not realize it, but there are still a number of Rube Goldbergian obstacles between you and the information you’re trying to get to. For instance:

  1. You need to have an actual machine that can access the Internet, whether it’s a computer or a cell phone or a DVR.
  2. That machine has to be powered and correctly configured, and it relies on hundreds of other machines — routers, satellites, firewalls, network hubs — to be powered and correctly configured too.
  3. You need to know how to log in to one of these machines, fire up a piece of software like a web browser, and find the Google website.
  4. The object of your search has to be easily expressed in words. You can’t put an image or a color or a bar of music into the search box.
  5. Those words have to be in a language that Google currently recognizes and catalogs (and your machine has to be capable of rendering words in that language).
  6. You have to know how to spell those words with some degree of accuracy — which isn’t a problem when searching for “the king of France in 1425,” but can be a real problem if you’re looking for “Kweisi Mfume’s curriculum vitae.”
  7. You need to be able to type at a reasonable speed, which puts you at a disadvantage if you’re one-handed or using imperfect dictation software.
  8. Google has to be able to interpret what category of subject you’re looking for, in order to discern whether you’re trying to find apples, Apple computers, Apple Records, or Fiona Apple.
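
That last obstacle — guessing the category — is harder than it looks. Here's a deliberately naive toy sketch of the problem, nothing remotely like Google's actual ranking machinery: a disambiguator that can only pick a category when the surrounding words give it away, and shrugs when you search for the bare word "apple":

```python
# Toy disambiguation sketch -- nothing like Google's real systems, just an
# illustration of why the engine must guess a category from context words.
# The categories and context words below are made up for the example.
CONTEXT = {
    "fruit":    {"pie", "orchard", "eat", "recipe"},
    "computer": {"mac", "ipod", "software", "laptop"},
    "records":  {"beatles", "label", "vinyl"},
}

def guess_category(query):
    """Pick the category whose context words overlap the query the most."""
    words = set(query.lower().split())
    scores = {cat: len(words & ctx) for cat, ctx in CONTEXT.items()}
    best = max(scores, key=scores.get)
    # With no overlapping context words, the query is anyone's guess.
    return best if scores[best] > 0 else "ambiguous"

print(guess_category("apple pie recipe"))      # fruit
print(guess_category("apple laptop software")) # computer
print(guess_category("apple"))                 # ambiguous
```

A one-word query gives the toy nothing to go on — which is exactly the position the search engine is in every time someone types a single ambiguous word into the box.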

Some of these barriers between you and your information might seem laughable. But it all seems so easy for you because you’re probably reading this from the ideal environment for Google, i.e. sitting indoors at a desk staring at a computer that you’ve already spent hours and hundreds, if not thousands, of dollars to set up. If you’re running down the street trying to figure out which bus route to take, the barriers to using Google become much steeper. Or if you’re driving in your car, or if you’re a Chinese peasant without access to 3G wireless, or if you’re lounging in the pool, and so on.

Even in the best-case scenario, after you jump through all those hoops, you usually have to scan through at least a page of results from the Google search engine to find the one that contains the information you’re looking for. Google does no interpretation, summarization, or analysis on the data it throws back to you. Some search engines do some preliminary classification of results, or they try to anyway, but it’s generally quite rudimentary. Chances are you’ll need to spend at least a few seconds to a few minutes combing through pages to find one that’s suitable, and then you’ll need to search through that suitable page to find the information you want.

I don’t mean to minimize the achievement of the Google search engine. The fact that I can determine within minutes that a) the king of France in 1425 was Charles VII, b) my best friend from junior high school is currently heading the division of a high-definition audio company in Latin America, and c) in 2004, Venezuela produced 2.4 million barrels of oil a day — this is all pretty frickin’ amazing. But that doesn’t mean we shouldn’t note the search engine’s shortcomings. That doesn’t mean we shouldn’t point out that there are still a zillion ways to improve it. There’s still a huge mountain to climb before we can call Google an example of perfect user interface.

But don’t worry, because Google’s on the case.

Read more

Building the Perfect User Interface (Part 1)

When I set out to create the world for my Jump 225 Trilogy, as I’ve written elsewhere, I started with a few technological principles:

  1. Imagine that we have virtually inexhaustible sources of energy.
  2. Imagine that we have virtually unlimited computing power.
  3. Imagine that enough time has passed to allow the scientists to adequately take advantage of these things.

I discovered that starting from these basic principles, there are almost unlimited possibilities. You can easily have a world that’s intermeshed with virtual reality. You can create vast computational systems that have billions and billions of self-directing software programs. You can have pliable architecture that automatically adjusts to fit the needs of the people using it. And so on. It’s actually fairly easy to figure out a technological solution to just about any problem if you don’t have those constraints.

[Image: science-fiction-machine.jpg]

The interesting questions in such a world, then, are questions of interface. You don’t bother to discuss if you can accomplish your goal anymore, because the answer is almost always “yes.” You just need to know how you’re going to accomplish it, and who’s going to pay for it, and what happens when your perfectly achievable goal clashes with someone else’s perfectly achievable goal.

In other words: you’re at point A. You’d like to be at point B. How do you go about getting there?

Note that when I’m talking about user interface, I’m not talking about how you actually get from point A to point B. The interesting thing about this whole new science of interface is that it doesn’t really matter. We can treat all kinds of science and engineering as a simple black box and just skip right over it. What I’m really concerned with at the moment is how human beings translate their desires into actions in the physical world. How do you tell the black box you want to go from point A to point B?

It seems like a ridiculously easy question, but it turns out it’s not. Let’s just take a very simple example of a black box that we all know: the toaster. You might think we already have the perfect user interface for toasting bread. You stick bread in a toaster. There’s one big lever that turns the sucker on, and a dial that tells you how dark you want the toast. How can you improve on that?

Well, wait just a second — the desire we’re trying to accomplish here is to take ordinary bread and turn it into toast. And if you think of user interface as the way you go about accomplishing this, the user interface for toasting bread is much more complicated than you might think.

You need to buy a machine to do the toasting, and you need to plug that machine into a power socket. (The right kind of socket for your part of the world.) And not only do you need a bulky machine that takes up counter space, but you need a dedicated machine that really does nothing else but toast bread and the very small number of specialty foods designed to fit in toaster slots. If you’re trying to toast bread in my house, you need to know that the toaster and the microwave are plugged into the same outlet, and using them at the same time will blow the fuse. You need to experiment with every new toaster you buy to find exactly the right setting — and yet, chances are that you burn toast at least once every couple of months. How inefficient is all that?

Read more

Mini-Essay on the Internet and Publishing on SF Signal

I’ve got a mini-essay (three paragraphs) up today in the new “Mind Meld” feature of SF Signal. The question was about how the Internet has impacted publishing and the author’s ability to sell more books. Quick excerpt: But even more important, the Internet has allowed me to keep in touch with readers during the (too long) break between novels. Before the prevalence of websites and blogs, the only way for newer SF authors to keep … Read more