Magnetic disks are dead, long live magnetic disks?

By Renegade on Monday 22 June 2009 16:23 - Comments (5)
Categories: HDD's, Performance, Views: 2.801

UK-based startup DataSlide caught my eye today. I got a tip from a colleague to check them out, so that is what I did.

They produce something called "HRD" or Hard Rectangular Drives.

As far as I can tell this is still a product that exists only on paper, but it shows quite a bit of potential as an alternative to existing storage media such as regular hard drives and solid state drives.

Basically, it is a sandwich construction: a dual-sided medium with a parallel 2D array of magnetic heads on either side, of which up to 64 embedded heads can read or write at the same time. A piezoelectric actuator moves the medium and the head surfaces relative to one another, and the layers are separated by a diamond coating that acts as a 'lubricant'.

Take a look at the following image to give you a better idea:

[Image: How the HRD works]

The big advantage is that each head can read or write an entire sector at once, so the density of the heads becomes the decisive factor for throughput. The current theoretical limits are around 160,000 IO/s and a throughput of around 500 MB/s at a power consumption of about 4W.
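
Just to put those figures in perspective: if the IO/s and MB/s numbers describe the same workload (the announcement doesn't say, so that is my assumption), the implied transfer size per operation works out to a few kilobytes, which fits nicely with the "one sector per head" idea.

    # Back-of-the-envelope check of the published HRD figures.
    # Assumption: the 160,000 IO/s and 500 MB/s refer to the same workload.
    iops = 160_000        # claimed I/O operations per second
    throughput = 500e6    # claimed throughput, bytes per second (500 MB/s)
    power_watts = 4.0     # claimed power consumption

    bytes_per_io = throughput / iops
    print(f"Implied transfer size per IO: {bytes_per_io:.0f} bytes (~{bytes_per_io / 1024:.1f} KiB)")
    print(f"IO/s per watt: {iops / power_watts:,.0f}")

That comes out at roughly 3 KiB per operation and around 40,000 IO/s per watt, which is exactly why these numbers look so attractive on paper.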

This looks impressive, and I can imagine that it means our 'beloved' magnetic disks may not be dead quite yet. But until we can actually hold one of these drives in our hands this is all still theory, albeit an interesting one. Let's see how this works out.

[update]
As RobIII kindly noted in the comments section of this post, I was a little late to the party: there is already a Dutch post about it here on Tweakers.net. Sorry for the double post. :)

"Want support on Symmetrix? Reboot 500 Windows servers."

By Renegade on Thursday 18 June 2009 10:57 - Comments (13)
Categories: SAN, Storage, Symmetrix, Windows, Views: 6.807

I dare say that we have one of the bigger SAN environments here at our company. We have well over 1000 hosts connected to our SAN and use storage from different vendors.

Now, you have your everyday problems when your environment is big. Some problems are smaller, some are bigger; it comes with the territory. But sometimes you just run into things that make you think "uhuh, you didn't just say that 8)7 ".

This was the case with a service request that I opened with EMC. I was reading the EMC forums when someone briefly mentioned that the required flags for the Symmetrix front-end ports had changed. I decided to do some checks myself and found the following document in EMC's Powerlink:
Powerlink ID: emc200609 / "What Symmetrix director flags / bits are required for Microsoft Windows Server 2008?"
Nothing out of the ordinary so far. New settings for a new OS are fine. So just to make sure I also checked for Windows 2003. And I did find something:
Powerlink ID emc201305 / "PowerPath showing loss of connectivity to server down all paths.":
...
The SPC2, SC3, and OS2007 flags are required flags on all Windows 2003 and 2008 servers connected to Symmetrix arrays.
Now that's something new to me; these flags were not mandatory before. So I opened a case with EMC and asked them to verify this and confirm that we need these settings to get any form of support from EMC. I received a longer mail back, but the third sentence of the mail stated the following:
Your observation is correct.
If you check the current ESM document, you will find that these settings are mandatory. To top it off, the mail stated that:
Please note, that setting the flags changes the inquiry page and hence the PnP id. You will need to reboot.
This means that we can make the change on the FE port, or we could set the flags on a per-initiator basis, but either way we would need to reboot roughly 500 hosts connected to the various Symmetrixes.
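
For an environment this size, the planning is half the work. Purely as an illustration (the host names and the cluster grouping below are made up, and this is not how we actually schedule it), here is a quick sketch of how you could spread those reboots over maintenance waves without ever taking two nodes of the same cluster down in the same wave:

    # Hypothetical sketch: spread ~500 host reboots over maintenance waves so
    # that no two hosts of the same cluster end up in the same wave.
    def plan_reboot_waves(hosts, wave_size):
        """hosts: list of (hostname, cluster) tuples; cluster may be None."""
        waves = []              # each wave is a list of hostnames
        clusters_per_wave = []  # clusters already represented in each wave

        for hostname, cluster in hosts:
            for wave, clusters in zip(waves, clusters_per_wave):
                if len(wave) < wave_size and (cluster is None or cluster not in clusters):
                    wave.append(hostname)
                    if cluster is not None:
                        clusters.add(cluster)
                    break
            else:  # no suitable wave found, start a new one
                waves.append([hostname])
                clusters_per_wave.append(set() if cluster is None else {cluster})
        return waves

    # Made-up example: 100 clustered hosts (two nodes per cluster) plus 400 standalone ones.
    hosts = [(f"win{i:03d}", f"cluster{i // 2:02d}" if i < 100 else None) for i in range(500)]
    for number, wave in enumerate(plan_reboot_waves(hosts, wave_size=25), start=1):
        print(f"Wave {number}: {len(wave)} hosts")

Even with a simple greedy plan like that you end up with twenty maintenance windows, which gives you an idea of the operational impact of a "you will need to reboot" remark.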

We are still talking to EMC to find another solution, but this issue is not an easy one. In all fairness it should be said that this decision is not EMC's fault: Microsoft changed the requirements for Windows 2008 hosts and wants vendors to use the same flags for Windows 2008 and Windows 2003. The result is the issue described above.

I'll update this post or write a new post as soon as we hear anything more, but this is turning out to be an interesting change. Let's see what happens.

Finding the right people for the right size, and how to utilize?

By Renegade on Tuesday 16 June 2009 14:28 - Comments (1)
Categories: Cloud, Storage, Views: 3.288

Storage is big, and it's getting even bigger.

I was talking to a colleague of mine yesterday, and we landed on the topic of storage and its current importance. We both agreed that, given past developments and the topics that are growing right now, storage and storage management is probably one of the areas that will see significant growth in the near future.

An example could be the recent bidding war between EMC and NetApp over Data Domain. Data Domain, listed on the Nasdaq as DDUP, is a big name in the area of data deduplication. Other hot topics when it comes to storage are "Cloud" and thin or virtual provisioning. And then you have the usual displays of who has the bigger machines, and of monolithic, scale-up and scale-out solutions.

The problem is that it is hard to find people who have an overview of the various solutions. Most vendors will offer a variation on generally accepted technologies, or try to tell you that you don't need them. Neutral people who can translate requirements or requests between the customer and the vendors or techies seem to be a rare commodity, and finding them is probably one of the everyday challenges that most customers face.

The material is highly complex as soon as you go past the surface, and neutral info is not that easy to find. You might read some blogs and know some people who are able to give you some information, but asking the right questions is an art of its own.

What it all boils down to, in my opinion, is optimal utilization. When you take a step back, you can see that people have always tried to get the most out of the available resources, be it storage, network, CPU, RAM or even technologies such as DSL.

The basic idea is probably that we have a product in place but notice that we are not utilizing it to its fullest extent. Take the example of DSL: why use only part of the available spectrum? Why not try to find a use for the free resources?

The same can be said for the server environment. The introduction of blades showed us a future where we could plug more blades into the environment when we needed them. Or perhaps simply pull out the blades at night and use them elsewhere? This didn't quite take off the way most hardware vendors expected, but I think it set a direction for products like Scalent, technologies like thin provisioning and, most recently, cloud.
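
To make the thin provisioning part a bit more tangible: the idea is simply that you promise capacity up front but only allocate the physical blocks the moment something is actually written to them. A simplified sketch (not how any particular vendor implements it):

    # Simplified illustration of thin provisioning: the volume advertises its
    # full capacity, but physical blocks are only allocated on the first write.
    class ThinVolume:
        BLOCK_SIZE = 512  # bytes per block in this toy example

        def __init__(self, advertised_blocks):
            self.advertised_blocks = advertised_blocks
            self.blocks = {}  # block number -> data, allocated lazily

        def write(self, block_no, data):
            if not 0 <= block_no < self.advertised_blocks:
                raise IndexError("block outside advertised capacity")
            self.blocks[block_no] = data  # physical allocation happens here

        def read(self, block_no):
            # Blocks that were never written read back as zeroes and cost nothing.
            return self.blocks.get(block_no, b"\x00" * self.BLOCK_SIZE)

        @property
        def allocated_blocks(self):
            return len(self.blocks)

    vol = ThinVolume(advertised_blocks=2_000_000)  # roughly 1 GB promised
    vol.write(0, b"some data".ljust(ThinVolume.BLOCK_SIZE, b"\x00"))
    print(vol.allocated_blocks, "of", vol.advertised_blocks, "blocks physically in use")

The applications see the full promised capacity, while the physical space is only consumed as data actually arrives, which is exactly the utilization trick this whole post is about.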

Cloud seems to be the answer no matter who you ask. But is it? It can be a way to get better utilization. When implemented correctly I think it is one of the true SaaS solutions. If you combine the cloud trend with storage when you need it, computing power when you need it, and even the application you want when you need it, you are talking about something that is truly "on demand". The true power will be determined by the implementation.

And since it all needs to be stored somewhere you are once again at the start of my post. It's going to be big. Quite big I think. Time to get on board? You decide, but I would say yes it is.

Are Solid State Disks the holy grail of performance?

By Renegade on Wednesday 3 June 2009 14:30 - Comments (16)
Category: Performance, Views: 4.053

Steve Duplessie blogged that we are missing the point with SSDs. A short quote:
... It is simply not practical or reasonable to systemically apply higher performing sub-components at all levels in today's world and expect the ultimate issue to be solved. Even if you could afford to do so, the incremental operational burden of manually optimizing the environment would take an army of PhDs in a STATIC environment – and I don't think it's possible in the real world dynamic, always changing, never ending data growth world in which we live.

Therefore, what we need is a way to automatically/dynamically optimize the connection performance/availability of the user/data relationships on what we have today, tomorrow, and the next day - no matter what changes or when. The only way I can think that can happen is to CENTRALIZE the control (for availability and routing - and application prioritize/policy) and the performance. We need to take the intelligence from all the elements and create uber control and cache systems. Central Flash married with real application/infrastructure intelligence - sort of how a contained system operates.
That got me thinking about the SSD market, sometimes also called the EFD (Enterprise Flash Drive) market. I agree with Storagezilla that there is no way around it: SSD is hot, and you will find all sorts of vendors offering you new systems based on the performance gain you get from swapping normal disks for SSDs.

Don't get me wrong, I think this is an excellent development, and we are not there yet. Depending on the parameters and optimizations you feed the disk/controller, you can see pretty big differences in performance, even with the same disk, as stated here by the storage anarchist. One example is optimizations that try to keep you from hitting the SSD performance degradation problem that is quite well known from the SSDs used in the prosumer market.
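
For what it's worth, that prosumer degradation largely comes down to write amplification: once the drive runs out of clean blocks, a small host write can force the controller to erase and rewrite a much larger flash block. A rough worst-case sketch; the page size, block size and raw write speed below are typical values I am assuming, not specs of any particular drive:

    # Rough worst-case illustration of SSD write amplification on a "full" drive.
    # All figures are assumed, typical-for-the-era values, not from any datasheet.
    page_size = 4 * 1024           # bytes the host writes per small random write
    flash_block_size = 512 * 1024  # bytes the controller must erase/rewrite at once

    # Worst case: no clean blocks left, so every page write triggers a full
    # read-erase-rewrite cycle of one flash block.
    write_amplification = flash_block_size / page_size
    print(f"Worst-case write amplification: {write_amplification:.0f}x")

    raw_write_speed = 70e6  # assumed internal sequential write speed, bytes/s
    effective = raw_write_speed / write_amplification
    print(f"Effective random-write speed seen by the host: {effective / 1e6:.2f} MB/s")

With those assumed numbers a drive that can push 70 MB/s internally drops to well under 1 MB/s of effective random writes, which is why controller-side optimizations against this effect matter so much.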

But as Steve already said, that's just one part of the equation. I mean, where do you find the real performance? At the company where I work we literally spend hours trying to find performance bottlenecks in large production systems, and more often than not we see that it is not so much the hardware that creates the bottleneck.

The problem usually sits much further up the stack, and a system that gets everything out of its environment is hard to find. One would say that especially in the enterprise market you need raw performance, and that is partially true. We order new systems almost every day.

But... at this point most of the companies that I know and have had the pleasure of doing business with are not able to perform extensive performance troubleshooting, or even performance optimization.

Why should they? We are creating more optimized links in the chain everywhere. Faster CPUs get released, new memory developments arrive, and SSDs were introduced. And that's just a part of the newer things coming out.

These are usually evolutionary developments, not revolutionary ones. But what about the application? Developers get to work on the more "modern" systems, if you will. They need performance to develop their solution, but that solution is scoped and molded for those same newer machines. Because of the pace of this evolution, the art of tweaking your total solution to get the maximum out of all components has largely been lost.

Or, in shorter form: we want to deliver a solution that works, and it should work smoothly. But when something goes wrong performance-wise, it's usually cheaper to replace the parts with faster ones than to try and find out what you could do to speed things up.
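
Just to show that the "find out" part does not have to be a huge investment: a few minutes with a profiler often tells you whether the time is really spent waiting on the hardware or somewhere up in the application. A trivial, made-up illustration with Python's built-in cProfile; the two functions are obviously fake stand-ins:

    # Made-up illustration: the "storage" part is fast, the application logic is
    # the real bottleneck, and the profiler makes that visible in seconds.
    import cProfile
    import pstats
    import time

    def fetch_rows():
        time.sleep(0.01)           # stand-in for an I/O wait on the storage layer
        return list(range(10_000))

    def build_report(rows):
        # Stand-in for application logic: accidentally quadratic.
        return [r for r in rows if rows.count(r) == 1]

    def run():
        build_report(fetch_rows())

    profiler = cProfile.Profile()
    profiler.runcall(run)
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)

In a case like this the profile shows almost all the time going to the report logic rather than the "storage" call, and no amount of faster disks would have fixed that.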

The end goal is quite clear. It's about the optimal usage of your resources, and SSDs can support you there. You will find a lot of cases where you can make good use of the performance that SSDs provide, but I am afraid that we will be seeing the same thing here as with all other developments. Instead of squeezing them to the max, someday the performance of SSDs will simply be a prerequisite, and we will install faster versions of the disks whenever we need more performance.

Do we really need to know?

By Renegade on Tuesday 2 June 2009 14:31 - Comments (4)
Category: General, Views: 2.761

Wikipedia says:
Psychological dependency is a dependency of the mind, and leads to psychological withdrawal symptoms (such as cravings, irritability, insomnia, depression, anorexia, etc).
The idea for this post came to me last week when my BlackBerry refused to charge after the USB Mini-B connector decided to fall out of the side of the phone. When does something like that happen? Well, when you try to charge the damn thing; that's Murphy's law for ya. So at that point I was sort of cut off from the e-mails that I receive on it (both business and private), Twitter and several RSS feeds.

Now, it's not that I didn't have a normal computer that I could have used for any of that, but it sort of became clear to me how often I actually use that thing, and I couldn't help but feel a little cut off from what was going on.

That made me wonder. I wasn't really cut off from anything, but it gave me a sense of what I see with a lot of my colleagues. We check our e-mail and try to stay informed by reading blogs and news, by chatting, and a lot more. We get our information the minute we need it, but usually only to an extent that gives us a general overview. We look at what interests us, and we want to know right here and now when something happens that might affect us.

But do we really need to know? And do we need to know right away?

You can value the fact that we can filter the information we find truly interesting, and that we get the newest information directly from all sorts of sources in this day and age. But do we really need it? Have we developed a psychological dependency? Do we feel deprived, or perhaps even have a craving, when we are offline when we did not intend to be? I would say this is true for a lot of people in IT, and with things like Android, the iPhone and the BlackBerry it will probably also become the case for more and more non-IT people.