Today is the launch (directed availability now - general availability in Q1) of the ScaleIO Node - the ScaleIO software, bundled with a range of server hardware, and if needed a ToR switch - delivered as an appliance with a single, clear support model: EMC supports it, period.
- So - is this thing a hyper-converged appliance? NO.
- So - does this compete with VSPEX Blue? NO.
- So - does this compete with VxRack? NO.
What do I mean? Why have we created the ScaleIO Node - and what’s it used for? Read on!
First of all - the ScaleIO Node is all about the ScaleIO SDS software, so if you want to stop reading right now and just go try the software itself, I would encourage it:
- Go to http://www.emc.com/getscaleio and download the bits. Install them by hand if you just want to do a few nodes, but....
- … If you REALLY want to see what it’s capable of, go to http://emccode.github.io - there you’ll find Vagrant, Ansible, Puppet, and other tools to help automate deployment of ScaleIO at scale.
People have taken the freely available and frictionless bits (I’ll say it again: no time bomb + no feature limits + no capacity limits + we don’t even ask for your email address :-) and the infrastructure-as-code tools and created simple automation packages to deploy into AWS, Azure, vCloud Air and more.
They have played with it at huge scale (hundreds/thousands of nodes) and massive performance levels - for a few hours, for a few dollars. I’d encourage anyone to download, play, learn and share.
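To make that concrete, here’s a rough sketch of what a Vagrant-based lab workflow looks like. The repository name below is a placeholder (a hypothetical example, not a guaranteed path) - check http://emccode.github.io for the actual projects:

```shell
# Hypothetical lab workflow - the repository name is a placeholder;
# see http://emccode.github.io for the real automation projects.
deploy_scaleio_lab() {
  git clone https://github.com/emccode/vagrant-scaleio.git &&
  cd vagrant-scaleio &&
  vagrant up &&      # provisions a small multi-node ScaleIO cluster in local VMs
  vagrant status     # confirm the nodes came up
}
```

The same basic idea extends to the Ansible playbooks and Puppet manifests, which target AWS, Azure, or vCloud Air instances instead of local VMs.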
What makes ScaleIO great is:
- It’s simple.
- It works.
- Its performance (latency, bandwidth, system-wide IOps) is great - it’s a function of the hardware you use, of course - but it’s great.
- It’s transactional. Object stores are great - but their use cases are newer. Transactional use cases cover most of what people use storage for today.
- It’s disruptive. It can be used in a ton of cases where people use EMC stuff (and non-EMC stuff) today.
- It’s available in a simple, free and frictionless way.
- It’s super-flexible, and open to a ton of use cases. You can deploy and use it in a million ways.
While you’re at it, for your vSphere 6.x environment, try downloading VSAN here: http://www.vmware.com/go/vsan. If you’re focused uniquely on vSphere, VSAN needs to be on your evaluation list. The VSAN 6.x bits are a huge leap forward from the 1.x bits - and the VSAN roadmap is strong. Expect more to come on VSAN and ScaleIO - my two cents: customers should evaluate and come to their own conclusions.
I did a blog a little while back that’s worth checking out here: Is the dress white and gold – or blue and black? SDS + Server, or Node? It captures the essence of what today’s announcement is about - this crazy, illogical “Software + Hardware” vs. “Software only” circle.
Interestingly, as we do more and more with pure software-only stacks, I’m finding I’m navigating this circle with more customers. They think they want a pure “software only” solution (starting at the 1 o’clock position), and then the dialog goes in a strange circle that ends with them choosing a software + hardware combo. I’ve found that as much as I want to, I can’t “short circuit” the dialog - because then they think I care whether it’s a software + hardware combo (if you want more, read the blog post above). I **don’t** care.
Customers that fancy themselves hyper-scale (hint: odds are good that you aren’t) take longer to go around the circle than those who don’t. It’s a core operational and economic question. Operationally: do you have (or do you want to have) a “bare-metal as a service” function? Economically: can you actually save money by procuring the servers yourself (which by definition are cheaper at first glance - but not as dense, or as built for purpose), particularly when you take on management/sparing/fault handling of said hardware?
- For some customers - the answer is “yes”.
- For many, the answer is “no”.
- For many, the answer is “I don’t care - the options software-only gives me are worth a trade-off in support/density/….”
What we’ve discovered (as VMware has with “VSAN Ready Nodes”) is that supported/qualified hardware accelerates adoption of SDS stacks.
So - what does a ScaleIO node include? 1) ScaleIO software (specifically v1.32 as of this writing); 2) industry-standard servers; 3) optionally, the top-of-rack switch that we’ve tested with and support.
What does the server look like? Well - the answer is that there’s a broad set. Here’s one.
This is actually a performance-oriented node (low storage, high CPU/memory). So far - SINCE THIS IS A STORAGE THING - the vast majority of the demand is for the capacity-oriented nodes. There’s a broad range of configs, detailed below.
The premise here is simple.
- Start with the software-only. That means you can use it in a ton of flexible ways.
- Figure out whether it’s something you dig (easy - since the bits are right there for you, no need to listen to ANYONE - just download and go for it).
- Decide whether you prefer to build your own, or want the ScaleIO Node (a bundle with the hardware).
Now - why do I keep reinforcing this as a storage thing? After all, can you run compute on one of the nodes? Yes. Should you? Probably not.
For those of you following closely, for a while we have demoed Isilon clusters that run compute workloads (even VMAX3 running general purpose workloads). We’ve discovered that just because you CAN, doesn’t mean you SHOULD.
Since the ScaleIO Node is completely missing the management and orchestration stack to manage compute, update it, and otherwise make it a hyper-converged thing (and is missing the support model for that) - it is a storage thing, not a hyper-converged compute thing.
BTW - get used to this idea. Expect a OneFS software-only variation (choose your model). No surprise there - that’s the exact same model as ECS (our Object/HDFS SDS stack). Each is offered in “software only” and “software + hardware” models - and the software + hardware models will have nothing that stops you from running compute, but will lack the M&O stacks and engineering that make them a hyper-converged thing vs. a storage thing. I suspect others will offer (if they aren’t already) this same choice in packaging.
BTW - if what you need is a hyper-converged compute thing… If that’s what you want - it’s VxRack or VSPEX Blue depending on scale.
Here’s the continuum - from ScaleIO = software only (use however you want) -> ScaleIO Node = software + hardware node (just like an Isilon node - which is software packaged with an industry-standard server) -> VxRack = hyper-converged rack-scale infrastructure.
What’s going on with VSPEX Blue? Building momentum and commitment.
My personal view is that you cannot simultaneously design for “start small” and “scale big”.
- When it comes to turnkey hyper-converged appliances, VSPEX Blue and its roadmap are our answer. It’s simple, it’s turnkey, and it’s performant and feature-rich. Its total focus on vSphere and VSAN is unparalleled when you are focused on simplicity…. And VMware and EMC won’t stop here - we will keep pushing this Hyper-Converged Infrastructure Appliance (HCIA) market forward, faster and faster.
- If you want a rack-scale model that can scale to thousands of nodes and you’re an Enterprise Datacenter (which is typically pretty heterogeneous), VxRack (including, but not limited to, the EVO SDDC Suite persona - and the higher-level curated workflows and ecosystem in the Federation Enterprise Hybrid Cloud stack) is the answer.
- If you want a rack-scale model that can scale to thousands of nodes and it’s for a pure Cloud Native App use case, VxRack with the Photon Platform and Pivotal Cloud Foundry is the answer.