Recently I was looking for an alternative to JavaScript, and a friend of mine recommended Dart, a new programming language announced last year by Google. At first I was keen to give it a try, until I saw a post on Stack Overflow showing the output of the Dart-to-JavaScript compiler: it produced 730 lines of JavaScript code out of a 3-line Dart "Hello world" source.
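For reference, a 3-line Dart "Hello world" of that era looked like this (I'm reconstructing it from memory; the exact snippet in the Stack Overflow post may have differed slightly):

```dart
main() {
  print('Hello world!');
}
```

Those three lines are the entire input that the compiler turned into 730 lines of JavaScript.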
To me this is pure madness. Writing code in a scripting language, which then gets translated into 243 times more code in another scripting language, which then gets run by a JavaScript engine embedded in a web browser running on top of an operating system... Dart's official page strongly emphasizes that it increases programming productivity. I honestly can't figure out how adding more and more layers of software on top of existing ones can increase productivity. And I don't mean logical layers within one application or framework, but stacking one technology on top of another.
In fact, it's just the opposite. Tracking down non-trivial bugs becomes more problematic, because their source can hide deep inside one of the underlying layers. Piling up software layers also makes overall performance drop dramatically. I remember laughing out loud back in 1997 when I read in PC Magazine that a 200 MHz Intel Pentium with 32 MB of RAM was recommended for comfortable web surfing, while I was doing just fine on a Pentium 133 with 16 MB of RAM. Today such a computer probably wouldn't even boot a modern operating system, and I can barely open more than a few tabs in Firefox 13 on a machine with a dual-core 1.6 GHz CPU and 4 GB of RAM.

The insane rush to release software more often and more quickly makes it demand ever faster hardware. In the long run, a company may pay its developers less (because they produce new applications faster), but it will pay more for hardware infrastructure. The recent power outages in India, one of the world's largest IT hubs, clearly show that this "productivity" comes at a cost, and that the cost can be huge. To make things worse, hardware performance cannot be increased indefinitely: CPU clock speeds have been more or less flat since 2005 (further increases became impossible because of the heat produced), so processing power now grows by adding more and more cores. However, many algorithms cannot be parallelized, and they will run equally fast (or slow) on a single-core as on a multi-core CPU. Modern processors provide mechanisms designed to help programs run faster, such as the L1 and L2 caches, but what benefit do you get from a few megabytes of cache if your application requires a virtual machine that does not even fit into the cache itself, not to mention that the cache has to be shared among all running applications?
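The limit on what extra cores can buy is usually stated as Amdahl's law: if only a fraction p of a program's running time can be parallelized, then n cores give a speedup of 1 / ((1 - p) + p / n), which approaches 1 / (1 - p) no matter how many cores you add. Here is a minimal sketch in Dart (the function and the example numbers are mine, purely for illustration):

```dart
// Amdahl's law: overall speedup on n cores when a fraction p
// of a program's running time is parallelizable.
double speedup(double p, int n) => 1 / ((1 - p) + p / n);

main() {
  // Even a program that is 90% parallel tops out below 10x,
  // regardless of how many cores you throw at it.
  for (final n in [1, 2, 4, 8, 16, 64]) {
    print('$n cores: ${speedup(0.9, n).toStringAsFixed(2)}x');
  }
}
```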
Many of my fellow programmers say that "it has to be like that". No, it doesn't. The computer that helped Apollo 11 land on the Moon was a 2 MHz 16-bit unit, something modern programmers can hardly imagine. I remember when Kevlin Henney showed a complete chess program written in 672 bytes of 8-bit assembly code at GeeCON, and someone in the audience screamed: "But that's impossible!". And yet, you can find a web server running on a bare Commodore 64 with a 1 MHz CPU and 64 KB of RAM - hell, there is even a Unix clone dedicated to it! Another example is a quadcopter controlled by a 16 MHz 8-bit ATmega CPU with 32 KB of flash and 2 KB of SRAM (it is based on a modified Harvard architecture). If it had been programmed by an average software developer, it would probably require the latest Intel Pentium hardware and would of course be much too heavy to fly, even if it ran the single-floppy MenuetOS...
1 comment:
I love the story described in the Steve Jobs biography by Walter Isaacson:
One of the things that bothered Steve Jobs the most was the time that it took to boot when the Mac was first powered on. It could take a couple of minutes, or even more, to test memory, initialize the operating system, and load the Finder. One afternoon, Steve came up with an original way to motivate us to make it faster. Larry Kenyon was the engineer working on the disk driver and file system. Steve came into his cubicle and started to exhort him. “The Macintosh boots too slowly. You’ve got to make it faster!”
Larry started to explain about some of the places where he thought that he could improve things, but Steve wasn’t interested. He continued, “You know, I’ve been thinking about it. How many people are going to be using the Macintosh? A million? No, more than that. In a few years, I bet five million people will be booting up their Macintoshes at least once a day.”
“Well, let’s say you can shave 10 seconds off of the boot time. Multiply that by five million users and that’s 50 million seconds, every single day. Over a year, that’s probably dozens of lifetimes. So if you make it boot ten seconds faster, you’ve saved a dozen lives. That’s really worth it, don’t you think?”
We were pretty motivated to make the software go as fast as we could anyway, so I’m not sure if this pitch had much effect, but we thought it was pretty humorous, and we did manage to shave more than ten seconds off the boot time over the next couple of months.