In fact, it's just the opposite. Tracking down non-trivial bugs becomes more problematic, because their source can be buried deep in one of the existing layers. Piling up software layers also makes overall performance drop dramatically. I remember laughing out loud back in 1997 when I read in PC Magazine that a 200 MHz Intel Pentium CPU with 32 MB of RAM was recommended to surf the web comfortably, while I could do just fine on an Intel Pentium 133 with 16 MB of RAM. Today such a computer probably wouldn't even boot a modern operating system, not to mention that I can barely open more than a few tabs in Firefox 13 on a computer with a 2-core 1.6 GHz CPU and 4 GB of RAM.

The insane rush to release software more often and more quickly makes it require ever more powerful hardware. In the long run, the company may pay less to the developers (because they will produce new applications faster), but it will pay more for hardware infrastructure. The recent power outages in India, one of the world's largest IT centers, clearly show that "productivity" comes at a cost, and that this cost can be huge. To make things worse, hardware performance cannot be increased indefinitely: CPU clock speeds have been more or less stalled since 2005 (further increases in clock frequency became impossible due to the amount of heat produced), and processing power is now being increased by adding more and more cores. However, many algorithms cannot be parallelized, so they will run equally fast (or slow) on a single-core as on a multi-core CPU. Modern processors provide many mechanisms designed to help programs run faster, like L1 and L2 caches, but what benefit do you get from a few megabytes of cache if your application requires a virtual machine which does not even fit into the cache itself, not to mention that the cache needs to be shared among all running applications?
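The point about non-parallelizable algorithms can be illustrated with a minimal, hypothetical sketch: iterating the logistic map. Each step needs the result of the previous one, so the work forms a serial dependency chain that no number of extra cores can shorten.

```python
# Hypothetical illustration of an inherently sequential computation:
# iterating the logistic map x -> r * x * (1 - x). Every iteration
# depends on the previous value, so the loop cannot be split across
# CPU cores -- a 64-core machine runs it no faster than a single core.

def logistic_map(x0, r, n):
    """Iterate x -> r * x * (1 - x) n times, starting from x0."""
    x = x0
    for _ in range(n):
        # The next value needs the current one: a serial
        # dependency chain, not a parallelizable workload.
        x = r * x * (1 - x)
    return x

print(logistic_map(0.5, 2.0, 100))  # 0.5 is the fixed point for r = 2.0
```

Contrast this with, say, summing a large array, where the data can be chunked and each core can work independently; the difference between the two shapes of computation is exactly why "just add more cores" is not a universal answer.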
Many of my fellow programmers say that "it has to be like that". No, it doesn't. The computer which helped Apollo 11 land on the Moon was a 2 MHz 16-bit unit, something that modern programmers cannot even imagine. I remember that when Kevlin Henney showed a complete chess program written in 672 bytes of 8-bit assembly code at GeeCON, someone from the audience screamed: "But that's impossible!". And yet, you can find a web server running on a bare Commodore 64 with a 1 MHz CPU and 64 KB of RAM - hell, there is even a Unix clone dedicated to it! Another example is a quadcopter controlled by a 16 MHz 8-bit ATmega CPU with 32 KB of Flash and 2 KB of SRAM (it's based on a modified Harvard architecture). If it were programmed by an average software developer, it would probably require the latest Intel Pentium hardware and would of course be much too heavy to fly, even if it ran a single-floppy MenuetOS...