Computer evolution: And we continue to wait.

By Renegade on Thursday 1 October 2009 11:29 - Comments (14)
Categories: General, Performance, Views: 3.618

It's all about waiting when it comes to computers.

Let's take a short trip down technology lane and look back at "what was".

For me, it all started with a C64. My dad worked for a shipping and forwarding company that was making the change from punched-card (IBM card) systems to a new form of computing. The company offered its employees the chance to purchase a Commodore 64 and use it at home, and so he brought home the first computer I ever worked with. :)

One of the most notable things was that once you got it to display all of the files on one of the floppies, you still needed a certain amount of patience before the program would actually start. There was no such thing as switching on your computer and just starting to work. Something like the following was seen more than once in the life of a C64 user:


code:
LOAD "$",8
LIST
LOAD "TOM AND JERRY",8,1
<get coffee>
<drink coffee>
<flip over the floppy>
<wait some more>
RUN

And then you would be ready to start playing, or perhaps even start working. This is just one example that I am quite familiar with. The waiting is something Alan Turing probably never anticipated when he described his Turing machine, yet it somehow started with the ENIAC, survived the invention of the transistor and its integration into integrated circuits, and has followed us from the Intel 4004 past more modern systems like the Intel 80486, which integrated the FPU, right up to today's AMD Athlon and Intel Xeon processors.

Now, one would say that we have an immense amount of computational power these days, and I can't do anything but agree. If you take a closer look at Moore's law and plot it all out (see http://tweakers.net/ext/f/MTgHDxe6keEescIrA5QWCiUm/full.jpg), you can see that our computational power has made a great leap. And that's absolutely great!
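
To put a rough number on that leap: assuming the commonly quoted figure of about 2,300 transistors for the Intel 4004 and a doubling every two years (my own back-of-the-envelope assumptions, not exact history), a few lines of Python show where that exponential ends up:

code:
# Rough Moore's law projection: transistor counts doubling every two years,
# starting from the Intel 4004 (1971, roughly 2,300 transistors).
START_YEAR = 1971
START_TRANSISTORS = 2300
DOUBLING_PERIOD_YEARS = 2.0

def projected_transistors(year):
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return START_TRANSISTORS * 2 ** doublings

for year in (1971, 1989, 2009):
    print(year, round(projected_transistors(year)))
# 1971 -> 2,300; 1989 -> ~1.2 million; 2009 -> ~1.2 billion

Roughly 1.2 billion transistors for 2009 is in the same ballpark as the biggest chips actually shipping today, so the plot really does show that leap.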

So by now we have so much computational power that we don't have to wait anymore when we want to do something, right? That must have changed since the days of the C64?

That's what you would probably expect, right? It's true, we don't have to insert floppies anymore; instead we use different media with much higher data density. And we can run programs on our machines that are far bigger and more complex than we ever imagined in the days of the Intel 4004.

But we continue to wait. How many people do you know at the office who switch on their computer and then go get a cup of coffee while they wait for the operating system to load, or for the programs they work with to start? I know loads of people. Take the company I work for: what you install on your servers is a huge piece of software, and we still wait quite a long time when we try to work with the program, and it's usually not the system that is waiting for some user input.

Fact is, we continue to find new things that we can actually compute. The more computational power we have, the more we try to compute. New technologies like nanocomputers won't change anything there: we will have a short-lived "wow, this is really fast!" feeling, and then compute more and lose the (feeling of) speed again.

In short: the computer continues to evolve, and we continue to wait.

Magnetic disks are dead, long live magnetic disks?

By Renegade on Monday 22 June 2009 16:23 - Comments (5)
Categories: HDD's, Performance, Views: 2.818

UK-based startup DataSlide caught my eye today. I got a tip from a colleague to check them out, which is what I did.

They are working on something called the "HRD", or Hard Rectangular Drive.

As far as I can tell this is still a product that exists only on paper, but it shows quite some potential as an alternative to existing storage media such as regular hard drives and solid state drives.

Basically, it is a sandwich construction: a double-sided medium with, on either side, a parallel 2D array of embedded magnetic heads, of which up to 64 can read or write at a time. A piezoelectric actuator moves the medium relative to the head arrays, which are separated from it by a diamond coating that acts as a 'lubricant'.

Take a look at the following image to give you a better idea:

How the HRD works

The big advantage is that each head can read or write an entire sector at once, which makes the density of the heads the decisive factor for throughput. The current theoretical limits are set at around 160,000 IO/s and a throughput of around 500 MB/s, at a power consumption of about 4 W.
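
Just as a sanity check on those numbers (my own back-of-the-envelope math, not DataSlide's, and the assumption that one IO moves one sector is mine as well): if both theoretical limits were hit at the same time, each IO would carry a few kilobytes, and each of the 64 parallel heads would only have to handle a modest share of the IO rate.

code:
# Back-of-the-envelope check of the quoted HRD figures.
IOS_PER_SECOND = 160000         # quoted theoretical limit
THROUGHPUT_BYTES = 500 * 10**6  # quoted ~500 MB/s
ACTIVE_HEADS = 64               # heads reading/writing in parallel

bytes_per_io = THROUGHPUT_BYTES / IOS_PER_SECOND
ios_per_head = IOS_PER_SECOND / ACTIVE_HEADS

print(round(bytes_per_io), "bytes per IO")   # ~3125 bytes, a few KB per sector
print(round(ios_per_head), "IO/s per head")  # 2500 IO/s for each active head

In other words, the aggregate numbers come almost entirely from the parallelism, not from any single head being spectacularly fast.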

This looks impressive, and I can imagine it means that our 'beloved' magnetic disks may not be dead quite yet. But until we can actually hold one of these drives in our hands, this is all still theory, albeit an interesting theory. Let's see how it works out.

[update]
As RobIII kindly pointed out in the comments section of this post, I was a little bit too late with this one: there is already a Dutch post about it here on Tweakers.net. Sorry for the double post. :)

Are Solid State Disks the holy grail of performance?

By Renegade on Wednesday 3 June 2009 14:30 - Comments (16)
Category: Performance, Views: 4.068

Steve Duplessie blogged that we are missing the point with SSDs. A short quote:
... It is simply not practical or reasonable to systemically apply higher performing sub-components at all levels in today's world and expect the ultimate issue to be solved. Even if you could afford to do so, the incremental operational burden of manually optimizing the environment would take an army of PhDs in a STATIC environment – and I don't think it's possible in the real world dynamic, always changing, never ending data growth world in which we live.

Therefore, what we need is a way to automatically/dynamically optimize the connection performance/availability of the user/data relationships on what we have today, tomorrow, and the next day - no matter what changes or when. The only way I can think that can happen is to CENTRALIZE the control (for availability and routing - and application prioritize/policy) and the performance. We need to take the intelligence from all the elements and create uber control and cache systems. Central Flash married with real application/infrastructure intelligence - sort of how a contained system operates.

That got me thinking about the SSD market, sometimes also called the EFD market. I agree with Storagezilla that there is no way around it: SSD is hot, and all sorts of vendors will offer you new systems based on the performance gain of swapping normal disks for SSDs.

Don't get me wrong, I think this is an excellent development, but we are not there yet. Depending on the parameters and optimizations you feed the disk/controller, you can see pretty big differences in performance even with the same disk, as stated here by the storage anarchist. One example is an optimization that tries to keep you from running into the SSD performance degradation problem that is quite well known from the SSDs used in the prosumer market.
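
For those who haven't run into that degradation problem: the short version is that flash is written in small pages but can only be erased in much larger blocks, so in the worst case a tiny update forces the controller to relocate a whole block's worth of data. A toy calculation (with textbook page and block sizes, not those of any specific drive or controller) shows how bad that worst case is:

code:
# Toy illustration of SSD write amplification. The sizes are typical textbook
# values, not those of any particular drive or controller.
PAGE_SIZE = 4 * 1024      # smallest unit the flash can write (4 KB)
BLOCK_SIZE = 256 * 1024   # smallest unit the flash can erase (256 KB)

pages_per_block = BLOCK_SIZE // PAGE_SIZE
# Worst case: updating one page means relocating all the other valid pages,
# erasing the block, and writing everything back.
print(pages_per_block, "pages per block ->",
      "worst-case write amplification of", pages_per_block, "x")

The optimizations the vendors talk about are largely about keeping the real-world factor far below that worst case.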

But as Steve already said, that's just one part of the equation. I mean, where do you find the real performance? At the company where I work, we literally spend hours trying to find performance bottlenecks in large production systems, and more often than not it is not the hardware that creates the bottleneck.

The problem is usually much further up the ladder, and a system that gets everything out of its environment is hard to find. One would say that especially in the enterprise market you need raw performance, and that is partially true; we order new systems almost every day.

But... at this point most of the companies that I know and have had the pleasure of doing business with are not able to perform extensive performance troubleshooting, or even performance optimization.

Why should they? We are creating more optimized links in the chain everywhere: faster CPUs get released, there are new memory developments, and SSDs were introduced. And that's just part of what is new.

These are usually evolutionary developments, not revolutionary ones. But what about the application? Developers get to work on the more "modern" systems, if you will. They need performance to develop their solution, but that solution is scoped and molded for those same newer machines. Because of the speed of this evolution, the art of tweaking your total solution to get the maximum out of all its components is not really there anymore.

Or, in shorter form: we want to deliver a solution that works, and it should work smoothly. But when something goes wrong performance-wise, it's usually cheaper to replace the parts with faster ones than to figure out what we could do to speed things up.

The end goal is quite clear: it's about the optimal usage of your resources, and SSDs can support you there. You will find plenty of cases where you can make good use of the performance that SSDs provide, but I'm afraid we will see the same thing here as with every other development. Instead of squeezing them to the max, someday the performance of SSDs will simply be a prerequisite, and we will install faster versions of the disks when we need more performance.