The End of Science Fiction

I’ve seen various theories put forward as to when the first science fiction stories were written. Depending on your definition of science fiction — and that exact definition can be quite contentious, especially on this blog — the first proper science fiction tale might be Mary Shelley’s Frankenstein (1818) or William Shakespeare’s The Tempest (c. 1610) or maybe Lucian of Samosata’s A True Story (c. 2nd century AD). Personally, I’d argue that you need to have the scientific method before … Read more

Sony VAIO Bloatware

Pursuant to my earlier post about all the unnecessary crapware pre-loaded on my new Sony VAIO laptop… I found a way to get a list of all the stuff that Sony loaded this thing up with, above and beyond Windows. You just open the Sony Recovery Center, and click the option to reinstall some of your programs and drivers from the recovery disks. Here’s the list. The entries with asterisks (*) are trialware. The bolded … Read more

My New Sony VAIO Laptop

So after flirting with the idea of buying a MacBook Pro for months, I went with Windows.

But I went with Windows in style.

A few days ago, I purchased a brand new Sony VGN-FZ140E notebook computer from the local Circuit City. (Here’s the laptop homepage on Sony’s website.) Circuit City had a deal which was pretty hard to pass up. For the incurably geeky, here are the specs on my new computool:

  • Intel Core 2 Duo T7100 processor running at 1.8 GHz
  • 15.4-inch widescreen WXGA LCD with reflective coating
  • Intel Graphics Media Accelerator X3100
  • 200 GB hard drive (only runs at 4200 RPM, unfortunately)
  • 2 GB of memory
  • Built-in wireless connectivity to 802.11a/b/g, and even n
  • Built-in webcam and microphone
  • DVD±RW drive, which I think has that cool LightScribe labeling thing
  • Slots ‘n jacks ‘n ports up the wazoo
  • Only 5.75 pounds, including battery
  • Windows Vista Home Premium

So why no MacBook Pro? It’s simple: the display for the regular ol’ MacBook is too frickin’ small, and the base model for the MacBook Pro is $2,000 before sales tax and shipping. What did I pay for my Sony? A nice, light $1,200 including sales tax.

And I have to say that this Sony almost matches that Apple cool factor. It’s extremely thin and light, and has this graphite coating that just begs to be caressed. The display is absolutely gorgeous, the brightest and clearest I’ve ever seen. So far, the machine’s been as quiet as a church mouse, it doesn’t heat up unnecessarily during normal use, and the Vista Aero graphics are pretty snappy. I’m not quite used to the keyboard layout yet, but the action is phenomenal — the keys are almost flat, like the MacBook’s, and they don’t clatter loud enough to wake the neighbors.

All in all, this should be powerful enough to do what I intend to do on this laptop. Which is plunk my ass down in a series of Starbucks and write Geosynchron, the third book in the Jump 225 Trilogy. There will be the occasional bit of web contract work on here, but again, I mostly reserve that for my desktop.

I’d gotten used to all kinds of inconveniences with my 2003-vintage Toshiba notebook. The lid doesn’t open and close properly, hibernation doesn’t work, there’s no built-in WiFi, and the thing vents out the bottom, so if you stick it on a cushioned surface it overheats and shuts down. Almost any new laptop I buy would solve those problems, but the Sony VAIO solved problems I didn’t realize I had. Like the fact that all of the ports are exactly where I want them to be, and the power jack includes an L-shaped connector that makes the cord take up less space.

So what are the immediate downsides I see to this machine?

  • The trackpad is a bit smaller than usual, and it’s almost completely flush with the rest of the casing. Seriously, it’s only recessed about a millimeter. This means that half the time I have to slide my finger around for a second or two to actually find the trackpad. It doesn’t help that the trackpad is black with black buttons, so it’s almost completely camouflaged. In low-light situations, you can barely even tell it’s there.
  • The sound is a lot tinnier than I expected. I probably should have gone for the model with the fancy-schmancy Harman-Kardon speakers, but I suppose it’s not really that big of a deal. I listen to most of my music on the desktop anyway, and if I’m going to watch DVDs I’ll be using headphones.
  • No Bluetooth. Which isn’t a tragedy for me, considering that I don’t really have any Bluetooth gadgets. But I was really hoping to start Bluetoothing my office so I can get rid of some of those wires. Guess I can always go buy an expansion card.
  • The integrated video isn’t powerful enough to let me run advanced games, which probably won’t be much of an issue, considering that what little gaming I do happens on the desktop PC.

Read more

The Science of Infoquake

Norman Spinrad recently wrote a review in Asimov’s of my novel Infoquake wherein he discussed the scientific accuracy of the book. Mr. Spinrad had this to say:

[W]hether or not such a novel could be considered “hard science fiction”… might be moot if Edelman himself were just blowing rubber science smoke and mirrors. Instead, he is actually trying to make bio/logics and MultiReal seem scientifically credible in the manner of a hard science fiction writer and doing a pretty good job of it, at least when it comes to bio/logics.

Edelman seems to have convincing and convincingly detailed knowledge of the physiology and biochemistry of the human nervous system down to the molecular level. And cares about making his fictional combination of molecular biology and nanotech credible to the point where the hard science credibility of the former makes the questionable nature of the latter seem more credible even to a nanotech skeptic like me.

A week or so later, SF Diplomat took a potshot at the scientific credibility of the book in his smackdown of Spinrad’s piece, saying that though the book is enjoyable enough, “Infoquake is practically fantasy.”

This has led me to give some thought to the scientific credibility of Infoquake, and to the scientific credibility of science fiction in general. Should the reader care whether my book — or any SF book — has good science?

For the record, my knowledge of science is fairly rotten. I don’t have the foggiest idea what the spleen does, I can’t really tell you anything about Planck’s constant, and I had to put down A Brief History of Time about 40 pages in because I was overwhelmed. As you might imagine, I’m very pleased that Spinrad thought I have “convincingly detailed knowledge of the physiology and biochemistry of the human nervous system down to the molecular level.” Greg Egan and Arthur C. Clarke are probably climbing into graves right now specifically for the purpose of rolling over in them.

But when I started the process of writing Infoquake, my intention was to write a novel about high-tech sales and marketing. It was only supposed to be accurate insofar as it wasn’t supposed to make people with real scientific knowledge snicker. So I set the book at some undefined time in the future, about a thousand years from now, and I stuck an apocalyptic AI revolt in the interregnum to really wipe the slate clean. Then I made three suppositions:

  1. Give the scientists (virtually) unlimited computing power.
  2. Give the scientists (practically) inexhaustible energy reserves.
  3. Give the scientists a few hundred years to tinker, without all the regulatory, governmental, religious, and socioeconomic chokeholds in place today.

Supposing all that… What kind of world would we end up with?

I started doing my initial research through your typical high-level Encarta searches and the like (Wikipedia wasn’t around then). And I discovered that we’re really, really close on so many “science fictional” technologies already. Teleportation? We’ve got teleportation, believe it or not. (Okay, so it’s only on a quantum level at this point, but why quibble?) Orbital colonies, medical nanobots, virtual reality, and neural manipulation? All possible, based on the evidence we have now.

Read more

Google’s Instant Translation

In case they weren’t working on enough already, Reuters reports that Google is working on the ability to instantly translate documents. BabelFish on steroids, if you will.

How would you build such a thing? You might think that Google’s programmers would try to break down languages into sophisticated formulae depicting sentence structure and grammatical rules and that kind of thing. Wrong. If I’m reading the article correctly, Google’s essentially trying to do the whole thing through pattern recognition.

This means that when you feed this sentence into the Google translator —

Nel mezzo del cammin di nostra vita
mi ritrovai per una selva oscura
ché la diritta via era smarrita.

— the program doesn’t particularly care that you’re trying to translate the opening lines of Dante’s Inferno. It doesn’t care about the Renaissance or Biblical allusions or Italian grammar. All it cares about is the fact that 76.4% of all Italian documents with this sentence translate it into English the way Henry Wadsworth Longfellow did:

Midway upon the journey of our life
I found myself within a forest dark,
For the straightforward pathway had been lost.

In other words, Google is treating language translation as one big black box. Let the computer figure out the complex algorithms that magically turn Italian into English; all Google cares about is the result. (Of course, you understand that I’m drastically simplifying things here.)
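
To make that concrete, here’s a toy sketch in Ruby of the frequency-counting idea. This is my own illustration of the general principle, not anything resembling Google’s actual system, and the corpus is obviously made up:

# A toy "black box" translator: given a corpus of aligned Italian/English
# sentence pairs, translate a sentence by picking whichever English
# rendering shows up most often for it.
def translate(source, corpus)
  candidates = corpus.select { |ital, _| ital == source }.map { |_, engl| engl }
  return nil if candidates.empty?
  tally = Hash.new(0)
  candidates.each { |engl| tally[engl] += 1 }
  tally.max_by { |_, count| count }.first  # the most frequent rendering wins
end

corpus = [
  ["nel mezzo del cammin di nostra vita", "midway upon the journey of our life"],
  ["nel mezzo del cammin di nostra vita", "midway upon the journey of our life"],
  ["nel mezzo del cammin di nostra vita", "in the middle of the road of our life"],
]

puts translate("nel mezzo del cammin di nostra vita", corpus)
# prints "midway upon the journey of our life" (two votes to one)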

This is more or less the same way that the Google search engine works, and it’s proven to be a major breakthrough. When you type “Britney Spears” into the Google search box, the computer logic behind the scenes has no idea what or who a “Britney Spears” is. All it cares about is that people who are looking for that particular term are very likely looking for the website of the pop star and not some obscure brand of English toothpick. And the more searching and clicking web surfers do, the more aggregate data Google has to fine-tune its searching algorithms. The end result? Well, Google works. It’s a zillion times more effective than any other search engine, and in an astoundingly large percentage of searches you find what you’re looking for.

So how accurate is the Google translator? The Reuters article tactfully states that “the quality is not perfect” and “it is an improvement on previous efforts at machine translation,” which is a nice way of saying it kind of sucks at the moment.

But the good thing about pattern recognition is that it improves dramatically the more patterns you feed it. And, hey hey, wouldn’t ya know it — Google’s right in the middle of scanning the complete collections of a number of libraries around the world. Certainly there must be hundreds of thousands of source documents and their miscellaneous translations in the Google databases now that are just ripe for analysis.

But not only is such a system likely to improve with time — it could theoretically adapt to changes in the language too. For instance, the Google algorithm might notice that words which were once translated as “colored people” and “Negroes” are now being translated as “blacks” and “African-Americans” instead.

Read more

Dave on Ruby on Rails

Imagine you’ve never played the game of football (the American version) before. You’ve never even seen a football game, and you have no idea what the rules are. But somebody tells you it’s way hella cool, and you’ve got the build for it, why don’t you come on down and join the team.

So you suit up and get on the field, but you still don’t have the foggiest idea what’s going on. Sometimes people are running with the ball, sometimes they’re throwing the ball, sometimes they’re kicking it or just pushing other players around and jumping on them for seemingly no reason. You try to ask the other players what’s going on, and they’re perfectly willing to help you — but all you can catch is a few seconds of their time between plays when they’re out of breath.

That’s kind of how I feel trying to learn Ruby on Rails.

What the hell is Ruby on Rails? For all you non-technical people out there, it’s a programming environment that’s supposed to make development super, mega easy.

Those with a more technical bent have probably already heard about Ruby on Rails. But for those who haven’t, it’s an open-source web framework where you can use the popular Ruby language to build robust applications using the Model-View-Controller pattern in astonishingly few lines of code.

How easy is it? Well, once you’ve got it installed properly, you literally type “rails book” and then “ruby script/generate scaffold chapter.” In the space of seconds, RoR generates all of the files you need for a project called “book” composed of multiple “chapters.”
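
To spell out that bootstrapping step, here’s a sketch using the same two commands quoted above, with comments for what each one does:

rails book                             # generate the skeleton for a new app called "book"
cd book
ruby script/generate scaffold chapter  # generate the model, controller, and views for chapters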

From there on out, it’s amazingly simple too. You can describe the data model with two basic statements:

class Book < ActiveRecord::Base
  has_many :chapters
end

class Chapter < ActiveRecord::Base
  belongs_to :book
end
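
Those two declarations are all ActiveRecord needs to wire the tables together. Here’s a rough sketch of what they buy you, assuming the conventional books and chapters tables with a book_id foreign key:

book = Book.find(1)             # load a book by primary key
book.chapters                   # every Chapter row whose book_id points at this book
chapter = book.chapters.first
chapter.book                    # and the back-reference to the owning Book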

RoR takes care of generating, on the fly, all of the HTML files needed to make it work. Within five minutes, you can have an application that lets you seamlessly add, edit, and delete a book’s chapters. No more mucking around with granular SQL statements and spending hours debugging.

The problem is, you’ve got to get it installed properly.

And getting Ruby on Rails installed properly is a bitch. It’s taken me days, and I’m still not sure I’ve got it done right. Luckily, you don’t need a separate web server to serve up the application, because RoR comes with a built-in lightweight one called Webrick. Oh, but wait, Webrick isn’t powerful enough for a production server, so we need Apache. With the FastCGI module installed and configured for Ruby files. Oh, but wait, nobody uses FastCGI to do this anymore, everyone’s using something called Mongrel these days…
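
For the curious, here’s roughly what those moving parts look like from the command line. This is a sketch of the 2007-era setup, not a full install guide:

gem install rails     # RubyGems pulls in Rails and its dependencies
ruby script/server    # the built-in Webrick dev server, port 3000 by default
gem install mongrel   # the newer production-grade option...
mongrel_rails start   # ...which serves the app in place of Webrick or FastCGI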

Read more

Don’t Worry, Vista Will Handle It

Call me a masochist, but I installed Windows Vista on my home machine this past weekend. I wasn’t about to spend much money to get my rapidly aging Shuttle XPC Vista-ready, so I simply opted to buy an $85 ATI Radeon video card that would let me run the Aero interface, however creakily.

The list of apps with Vista compatibility problems is truly mind-boggling. We’re talking about stuff I use every day. Dreamweaver, ColdFusion, Eclipse, iTunes, Irfanview. Add to that the fact that my Photoshop disc is on the fritz and you’ve got a major productivity roadblock. But perhaps the app that I miss the most is one that works in the background: Diskeeper.

Diskeeper is (or was) probably the best defragmenter available for Windows. It’s got a feature called “Set It and Forget It” which allows you to configure the program to defrag your hard drive in the background whenever it sees the need, and then, as advertised, forget all about the damn thing. But the bastards at the Diskeeper Corporation want me to pay $30 to upgrade to their new Vista version, even though I already bought an upgrade less than six months ago. So I decided to look at alternatives. (Update 3/8/07: Never let it be said this blogging thing is a waste of time. I just received an e-mail from a nice fellow at Diskeeper Corp. apologizing for the upgrade confusion and offering to make it up with a coupla extra licenses. Thanks, Diskeeper!)

I opened up the built-in Windows Vista Disk Defragmenter, and I was astounded to see this:

Windows Vista Disk Defragmenter

In case you’re looking at this image and wondering what’s so astounding, the only thing you can configure here is the schedule. No setting priorities, no setting unmovable files, no program menus, no help file, no nothing. I wasn’t expecting a robust interface like Diskeeper’s that gives you granular control over which files get positioned where on the hard drive, but I wasn’t quite expecting this either.

Windows Vista is full of these kinds of user interface decisions. Places where the operating system presents you with a limited set of options and tells you, “don’t worry, Windows Vista will handle it.” We’ll defragment your disk for you, we’ll switch color schemes when necessary, we’ll block you from handling the nasty files, we’ll decide when the computer should sleep and when it should wake.

Remind you of anything? It reminds me of a Mac.

Read more

You Are What You Read

Never let it be said that I’m not a sentimental idiot. I had a fun little idea earlier tonight about taking my catalog of books on LibraryThing and doing a photo mosaic out of it. You know, to prove that “you are what you read,” or something jejune like that. So I took a couple of photos of myself and one of the Infoquake book cover, and I done did it.

Here are two of me. Once again, these are mosaics made completely of the book covers from my collection in LibraryThing. Click for the full-sized images (2.52 MB and 2.05 MB, respectively). No, really, click on them, resize them, scroll around, it’s worth it. And don’t forget to expand the image out if your browser does that automatic image resizing thing.

David Louis Edelman photo mosaics (two images)

And here’s the cover of Infoquake, also composed strictly of my LibraryThing book covers. Again, click for full image (1.44 MB):

Infoquake book cover mosaic

So how did I do it? Surprisingly, it’s not very difficult at all.
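
The full explanation is after the jump, but the core trick behind any photo mosaic is easy to sketch: chop the target photo into a grid of cells, then fill each cell with the book cover whose average color comes closest. Here’s a minimal Ruby illustration of that matching step. It’s a generic sketch of the technique with made-up color values, not the actual tool or code I used:

# Pick the best tile for one mosaic cell: the cover whose precomputed
# average color is nearest in RGB space (squared Euclidean distance).
def closest_cover(cell_rgb, covers)
  covers.min_by do |_title, (r, g, b)|
    cr, cg, cb = cell_rgb
    (r - cr)**2 + (g - cg)**2 + (b - cb)**2
  end.first
end

covers = {
  "Infoquake"               => [180, 40, 30],  # hypothetical average colors
  "A Brief History of Time" => [20, 20, 60],
}

puts closest_cover([200, 50, 40], covers)  # prints "Infoquake"

Repeat that cell by cell across the whole grid, and the mosaic assembles itself.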

Read more