Storage provisioning: Do you really really really need that much?

By Renegade on Wednesday 9 September 2009 09:46 - Comments (6)
Categories: EMC, General, Storage, Views: 3.083

I received a link to an article containing an interview with Symantec's Mathew Lodge on their view of data deduplication. I couldn't help but notice the following quote:
According to a recent survey by Applied Research, more than half of all organizations expect to spend more on storage in 2009 than they did in 2008. But at the same time, the latest Symantec State of the Data Center Report indicates that storage utilisation hovers at just 50%.
Now, that got me thinking about a couple of things. First off, I tried to look up this survey. Unfortunately, the results from Applied Research-West seem to be beyond my Google skills. On the other hand, they do appear to be the standard company Symantec uses for surveys, and those surveys somehow keep producing results that align with Symantec's product portfolio. Talk about a coincidence!

Anyway, as they said, "more than half of all organizations expect to spend more on storage in 2009 than they did in 2008", and I was pondering how this could be. We are seeing technologies like the deduplication mentioned in the article, and almost all vendors offer something similar. The same can be said about thin or virtual provisioning. Heck, thanks to the effort in the blogosphere and feedback from partners and customers, EMC even decided to change its policy and make virtual provisioning free for the V-Max, DMX4 and DMX3.
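
To make the virtual provisioning part concrete: a thin volume advertises its full size to the host, but the array only allocates physical blocks the first time they are written. Here's a minimal toy sketch of that idea in Python; the ThinVolume class, its block map and the 4 KiB block size are made up for illustration and don't reflect EMC's (or anyone else's) actual implementation.

    BLOCK_SIZE = 4096  # bytes per backing block (example choice)

    class ThinVolume:
        def __init__(self, provisioned_bytes):
            self.provisioned_bytes = provisioned_bytes  # what the host sees
            self.blocks = {}  # block index -> bytearray; only written blocks exist

        def write(self, offset, data):
            """Write bytes at offset, allocating backing blocks on demand."""
            if offset + len(data) > self.provisioned_bytes:
                raise ValueError("write beyond provisioned size")
            pos, remaining = offset, memoryview(data)
            while remaining:
                idx, within = divmod(pos, BLOCK_SIZE)
                # setdefault allocates the physical block only on first touch
                block = self.blocks.setdefault(idx, bytearray(BLOCK_SIZE))
                n = min(BLOCK_SIZE - within, len(remaining))
                block[within:within + n] = remaining[:n]
                pos, remaining = pos + n, remaining[n:]

        @property
        def consumed_bytes(self):
            return len(self.blocks) * BLOCK_SIZE  # what the array actually spends

    vol = ThinVolume(provisioned_bytes=2 * 1024**4)  # the user's "two tera"
    vol.write(0, b"db" * (5 * 1024**2))              # ...of which 10 MiB gets written
    print(f"host sees {vol.provisioned_bytes / 1024**4:.0f} TiB, "
          f"array spends {vol.consumed_bytes / 1024**2:.0f} MiB")

The gap between what the host sees and what the array actually spends is roughly where that 50% utilization figure lives: capacity we asked for but never wrote to.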

Seems a bit odd that almost all storage vendors are delivering methods to reduce the disk space footprint in their SAN and NAS offerings, yet we still see an increase in expenditure. Sure enough, the licensing costs for such new features have to be factored in, and perhaps you even need to buy new hardware to fully utilize them. But all of the big vendors are quick enough to tell us about the return on investment when we purchase new stuff. So that can't be it, right?

And you know what? They are right!

Simple enough: we don't know how much disk space our users need! Hell, most of the time the users themselves don't even know! And then there's the fact that it's far too easy to get new storage.

We provision like there's no tomorrow. Not just disk space, but also computational power. You need to test something? Here, have a VM and go right ahead. What? You're on Solaris? No problem, here's a sparkling brand-new zone, just for you. How much disk space do you need? Two tera? No wonder they called it a terabyte; "tera" is Greek for monster, and those are monstrous amounts of disk space.

I know the dilemma: when you ask your users if they really need all of that, you usually get a blank look, outrage ("How dare you ask me that, isn't it obvious?"), or perhaps an educated guess. Some will even give you forecasts... if you are lucky.

Things will get better with technologies like TP and dedupe (sketched below), and they will get worse when we go for new technologies like the cloud. But the fact of the matter is that we have made provisioning too easy, and we've somehow lost the art of asking whether they really, really, really need it. More often than not, the honest answer would be a simple "no".
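
Since dedupe is the other half of that sentence, here's the matching toy sketch: store blocks under a hash of their content, so identical blocks cost physical space only once, no matter how many volumes reference them. Again, DedupStore, the fixed 4 KiB chunking and SHA-256 are illustrative assumptions on my part, not any vendor's actual design.

    import hashlib
    import os

    BLOCK_SIZE = 4096  # fixed-size chunking keeps the example simple

    class DedupStore:
        def __init__(self):
            self.chunks = {}   # sha256 digest -> block data, stored exactly once
            self.volumes = {}  # volume name -> ordered list of digests (a "recipe")

        def ingest(self, name, data):
            """Split data into blocks and keep only the unique ones."""
            recipe = []
            for i in range(0, len(data), BLOCK_SIZE):
                block = data[i:i + BLOCK_SIZE]
                digest = hashlib.sha256(block).digest()
                self.chunks.setdefault(digest, block)  # duplicate blocks collapse here
                recipe.append(digest)
            self.volumes[name] = recipe

        def read(self, name):
            """Reassemble a volume from its recipe."""
            return b"".join(self.chunks[d] for d in self.volumes[name])

        @property
        def ratio(self):
            logical = sum(len(r) for r in self.volumes.values()) * BLOCK_SIZE
            physical = sum(len(b) for b in self.chunks.values())
            return logical / physical  # assumes full blocks; fine for a sketch

    store = DedupStore()
    base = os.urandom(4 * 1024**2)  # a 4 MiB "full backup" of random data
    store.ingest("monday", base)    # Tuesday's backup differs in one block only
    store.ingest("tuesday", base[:-BLOCK_SIZE] + os.urandom(BLOCK_SIZE))
    assert store.read("monday") == base
    print(f"dedup ratio: {store.ratio:.1f}:1")  # ~2:1 for two near-identical backups

Two nearly identical full backups cost about one copy's worth of physical blocks, which is exactly the saving the vendors are pitching. It still doesn't answer the question of whether the data needed to exist in the first place, though.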


Comments


By Tweakers user himlims_, Wednesday 9 September 2009 10:24

Doesn't matter how much disk space is available, it always ends up full :Y)

By Tweakers user Renegade, Wednesday 9 September 2009 10:30

It does? Why? Do you create that much new content? Do you need that much temporary space? At home I can see something like that happening, but there we're usually way better at handling it. We actually archive stuff, or if we need more space, we delete a couple of movies or other things we don't use/read/watch anyway. Why are we not able to do that in a corporate environment? :)




By Tweakers user Pantagruel, Wednesday 9 September 2009 10:39

Sorry for the bogus comment; I tried to add something, but it simply doesn't show (fails to submit, no error message, just a blank page :( )

By Tweakers user PaddoSwam, Wednesday 9 September 2009 12:14

All I can say is 640k...

But really, these resources are so cheap these days that everybody wastes them. Where's the optimization?

* PaddoSwam is thinking of 130 MB printer driver suites which end up installing 1 KB of .inf files
* PaddoSwam is thinking of OSes that don't do that much more than they did 10 years ago, yet take up ten times the disk space
* PaddoSwam stops thinking; it starts to hurt my brain this early in the day..

Edit: OK, not really cheap, but not expensive enough to make anyone start thinking about optimization where they should have.

Also, this is mostly my experience with web development and Windows application programming; it's not a general statement.



By Tweakers user Renegade, Wednesday 9 September 2009 12:19

I disagree. Resources are not cheap; they never are in a corporate environment. At best they offer value for money, but they are seldom cheap. It's true that you can find computational power and storage for a fairly good entry price, but that doesn't gain you anything when you combine it with the point mentioned above. As long as there is no optimization or better utilization of available resources, and we don't have a person slamming on the brakes and asking if people really need it, we will continue to throw money out of the window and abuse our ever "greening" infrastructure.
