From NAS Virtualization to NAS Feature
Per my previous post, I wanted to provide more concrete examples from the storage world related to the sedimentary hypothesis.
Here goes example number one: NAS virtualization.
You may recall past companies and products in this space. Those that come immediately to mind include Rainfinity, Acopia, and StorageX, with only Acopia's ARX still existing at F5 as a standalone NAS virtualization product. The others have either been acquired or gone out of business (at least as far as I know). The fact that these are no longer offered as standalone applications or appliances begs the question: is NAS virtualization still a viable technology?
You bet, and you can see it in action within two Hitachi products, though not as separate appliances: notably, you'll find NAS virtualization in the Hitachi Data Ingestor (HDI) and the Hitachi NAS Platform (HNAS).
Our first incarnation was done in 2007/2008 by applying engineering talent from HDS to the then standalone BlueArc. (Here's a shout out to Simon, Paul, and Phil…welcome back!) It showed up as a feature called eXternal Volume Link (XVL), controlled through a basic interface on the native element manager or with full content indexing via Hitachi Data Discovery Suite (HDDS). XVL can talk to any NFSv3 server, and it can also use REST over HTTP to talk to Hitachi Content Platform (HCP). In short, four years ago we built NAS virtualization into the storage infrastructure as a feature.
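To make the external-link idea concrete, here is a minimal sketch of how a virtualized namespace can resolve a link to data living on another system. All class and function names here are illustrative assumptions, not Hitachi's actual XVL implementation, and the backend "fetchers" are in-memory stand-ins for real NFSv3 or REST-over-HTTP calls.

```python
# Illustrative sketch only -- not Hitachi's XVL code.
from dataclasses import dataclass
from typing import Callable, Dict, Union

@dataclass
class ExternalLink:
    """Stub left in the local namespace, pointing at data on another system."""
    backend: str   # e.g. "nfs" or "hcp-rest" (assumed labels)
    target: str    # path or object URL on the external system

class VirtualNamespace:
    """Presents one file tree; reads follow external links transparently."""
    def __init__(self, fetchers: Dict[str, Callable[[str], bytes]]):
        self.fetchers = fetchers            # one fetcher per backend protocol
        self.tree: Dict[str, Union[bytes, ExternalLink]] = {}

    def read(self, path: str) -> bytes:
        entry = self.tree[path]
        if isinstance(entry, ExternalLink):
            # Resolve the link in-band; the client never sees the redirect.
            return self.fetchers[entry.backend](entry.target)
        return entry

# Stand-in for an NFSv3 server; a real implementation would speak the protocol.
nfs_store = {"/export/report.pdf": b"report data"}
ns = VirtualNamespace({"nfs": nfs_store.__getitem__})
ns.tree["/finance/report.pdf"] = ExternalLink("nfs", "/export/report.pdf")
print(ns.read("/finance/report.pdf"))  # client sees a single namespace
```

The point of the sketch is the shape of the feature: the link lives inside the storage system's own namespace, which is why no separate appliance is needed.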
The second incarnation is within HDI and was first implemented as a connection to HCP using REST over HTTP. It is and was designed as a cloud on-ramp for remote locations to connect to stellar Hitachi private cloud/object storage infrastructure. Most recently, with the updated version of HDI, we can also virtualize via the CIFS protocol to consolidate existing NAS and Windows filers into a Hitachi private cloud infrastructure. For this purpose HDI, just like XVL, is set up as an inline file system virtualizer that can take over shares from the target filers or file servers and let users smartly drain these older systems into the cloud.
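The "drain" step above can be sketched as a simple migration loop: copy each file from the legacy share to the cloud target, then replace it locally with a link stub so user paths keep working. This is a toy model under assumed names; the dictionaries stand in for a CIFS share and for HCP, where the copy would really be a REST PUT over HTTP.

```python
# Illustrative sketch only -- not HDI's actual migration logic.
from typing import Callable, Dict, Tuple

def drain_share(share: Dict[str, object], cloud: Dict[str, bytes],
                make_link: Callable[[str], object]) -> None:
    """Migrate every file from `share` to `cloud`, leaving link stubs behind."""
    for path, data in list(share.items()):
        obj_id = f"obj-{abs(hash(path))}"   # assumed object-naming scheme
        cloud[obj_id] = data                # stand-in for a REST PUT to HCP
        share[path] = make_link(obj_id)     # the user-visible path still works

# Stand-in for a legacy filer share and an empty cloud namespace.
legacy: Dict[str, object] = {"/share/a.txt": b"aaa", "/share/b.txt": b"bbb"}
cloud: Dict[str, bytes] = {}
drain_share(legacy, cloud, make_link=lambda oid: ("LINK", oid))
```

Because the virtualizer sits inline, the drain can run gradually in the background while users keep reading through the old paths, which is what makes retiring the legacy filer non-disruptive.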
In both instances you can see that in-band/inline NAS or file system virtualization is no longer a standalone product like F5 ARX or any of the other legacy technologies. In fact, NAS virtualization has transformed from a standalone application or appliance into a feature of the storage infrastructure. Digging a little deeper, two more key questions are: why did we do this, and why in this way?
Well, to answer the first one: our customers asked us to. Here is a customer quote from 2006/2007. (Now, I will add that at the time this customer was the "poster child" for Acopia, and since there is no statute of limitations on protecting customer names, I've removed the customer name from the quote.)
“Acopia was our only choice at the time, but if it was incorporated into a NAS product we’d throw out their [ARX] product in a second.”
Wow! This is still, and was back then, a very clear driver to do what we did. As to why we implemented XVL and HDI file system and NAS virtualization the way we did, that is pretty simple. When we looked at our existing portfolio, we already had what was becoming a blockbuster success: in-band block storage virtualization in the original USP. That system had the data movement engine within the storage controller, sporting a basic control point on the native element manager and an advanced control mechanism in an out-of-band controller called, at the time, Tiered Storage Manager. As a result, we determined that to help customers who wanted to add NAS to their portfolio, we'd follow a similar approach in the hope of making adoption easier.
If this isn't a data point screaming that the sedimentary hypothesis of technology is true, then I don't know what is. However, this is only one data point and more are needed; for those you'll have to wait until the next post.