Submitted by MCA Admin 1 on 13 April, 2012 - 23:31
I have put together a couple of blog entries reviewing some cost analysis I did 2-3 years ago around Hadoop and Azure storage/server architectures, specifically how we worked with customers to reduce the costs of these environments (in part) with enterprise-class storage. It goes without saying, but I will anyway: the focus of these economic models and case studies was on the deployment and costs of the storage infrastructure. Some of these new cloud/big data environments avoid RAID overhead, instead distributing data across hundreds of nodes and disk clusters to perform the work. In doing this work, we took a myopic view of just the storage hardware aspect of these environments. I guess you would expect that from an HDS employee.
Earlier this week I had an interesting call with Ramon Chen of Rainstor, during which we compared notes on how their product offering reduces database costs, and therefore storage costs. After our conversation, it was clear to me that big data cost reductions can happen on at least two levels: