ScaleIO Node – what’s the scoop, and what’s up?

Today is the launch (directed availability - general availability in Q1) of the ScaleIO Node - the ScaleIO software, bundled with a range of server hardware and, if needed, a ToR switch - delivered as an appliance with a single clear support model: EMC supports it, period.

Hmmm:

  • So - is this thing a hyper-converged appliance?  NO.
  • So - does this compete with VSPEX Blue?  NO.
  • So - does this compete with VxRack?  NO.
What is it?  It’s a storage thing.   Scratch that - it’s a GREAT storage thing :-)   Its closest relative isn’t a VxRack, but rather an Isilon node.   Another, less close relative is a VSAN Ready Node.

What do I mean?  Why have we created the ScaleIO Node - and what’s it used for?  Read on!

First of all - the ScaleIO Node is all about the ScaleIO SDS software, so if you want to stop reading right now and just go try it, I would encourage that:

  1. Go to http://www.emc.com/getscaleio and download the bits.   Install it if you just want to do a few nodes, but....
  2. … If you REALLY want to see what it’s capable of, go to http://emccode.github.io  There you’ll find vagrant, ansible, puppet, and other tools to help automate at scale deployment of ScaleIO.
ScaleIO is at its core quite simple.  Its goal is to be an extremely scalable, extremely simple SDS stack (the simplicity is to some degree the key to the performance and scaling) that can be deployed for a broad, heterogeneous set of transactional use cases.    Randy Bias did a post comparing (with a ton of performance data) ScaleIO to Ceph (commonly deployed as a transactional SDS stack with OpenStack) here.   The point is that ScaleIO is a laser, and does what it does really well.   But remember - DON’T LISTEN TO ME.  DON’T LISTEN TO RANDY.   Download it and give it a whirl.

People have taken the freely available and frictionless (I’ll say it again: no time bomb + no feature limits + no capacity limits + we don’t even ask for your email address :-) bits and the infra as code tools and created simple automation packages to deploy into AWS, Azure, vCloud Air and more.  

They have played with it at huge scale (hundreds/thousands of nodes) and massive performance levels for a few hours for a few dollars.   I’d encourage anyone to download, play, learn and share.  

What makes ScaleIO great is:

  • It’s simple.
  • It works.
  • Its performance (latency, bandwidth, system-wide IOps) is great - it’s a function of the hardware you use, of course - but it’s great.
  • It’s transactional.  Object stores are great - but their use cases are more “new”.  Transactional use cases are everything most people use storage for today.
  • It’s disruptive.  It can be used in a ton of cases where people use EMC stuff (and non-EMC stuff) today.
  • It’s available in a simple, free and frictionless way.
  • It’s super-flexible, and open to a ton of use cases.  You can deploy and use it in a million ways. 
The ScaleIO Node, at its core, is simple: all of those strengths, and for customers that want them delivered with hardware, it’s a great, simple answer.

While you’re at it, for your vSphere 6.x environment, try downloading VSAN here: http://www.vmware.com/go/vsan.   If you’re focused uniquely on vSphere, VSAN needs to be on your evaluation list.   The VSAN 6.x bits are a huge leap forward from the 1.x bits - and the VSAN roadmap is strong.  Expect more to come on VSAN and ScaleIO - my two cents: customers should evaluate and come to their own conclusions.

I did a blog a little while back that’s worth checking out here: Is the dress white and gold – or blue and black? SDS + Server, or Node?   It captures the essence of what today’s announcement is about - this “Software+Hardware” vs. “Software only” crazy illogical circle.

 

[Image: the “Software+Hardware” vs. “Software only” circle]

Interestingly, as we do more and more with pure software-only stacks, I’m finding I’m navigating this circle with more customers.   They think they want a pure “software only” solution (starting at the 1 o’clock position), and then the dialog goes in a strange circle that ends with them choosing a software + hardware combo.   I’ve found that as much as I want to, I can’t “short circuit” the dialog - because then they think I care whether it’s a software + hardware combo (if you want more, read the blog post above).  I **don’t** care.

Customers that fancy themselves hyper-scale (hint: odds are good that you aren’t) take longer to go around the circle than those who don’t.   It’s a core operational and economic question.  Operationally: do you have (or do you want to have) a “bare-metal as a service” function?   Economically: can you actually save money by procuring the servers yourself (which at first glance are by definition cheaper - but not as dense, or as built for purpose), particularly when you take on management/sparing/fault management of said hardware?

  • For some customers - the answer is “yes”.  
  • For many, the answer is “no”.  
  • For many, the answer is “I don’t care - the options software-only gives me are worth the trade-off in support/density/….”

We’ve discovered (as VMware has with “VSAN Ready Nodes”) that supported/qualified hardware accelerates adoption of SDS stacks.

So - what does a ScaleIO node include?   1) ScaleIO software (specifically v1.32 as of this writing); 2) industry-standard servers; 3) optionally, the top-of-rack switch that we’ve tested with and support.

What does the server look like?   Well - the answer is that there’s a broad set.   Here’s one.

 

[Image: a ScaleIO Node server]

This is actually a performance-oriented node (low storage, high CPU/memory).   So far - SINCE THIS IS A STORAGE THING - the vast majority of the demand is for the capacity-oriented nodes.   There’s a broad range of configs, which are detailed below.

[Images: ScaleIO Node configuration details]

The premise here is simple.  

  1. Start with the software-only.   That means you can use it in a ton of flexible ways.
  2. Figure out whether it’s something you dig (easy - since the bits are right there for you, no need to listen to ANYONE - just download and go for it). 
  3. Decide whether you prefer to build your own, or want the ScaleIO Node (a bundle with the hardware).

Now - why do I keep reinforcing this as a storage thing?   After all - can you run compute on one of the nodes?   Can you?  Yes.   Should you?  Probably not.

For those of you following closely, for a while we have demoed Isilon clusters that run compute workloads (even VMAX3 running general purpose workloads).   We’ve discovered that just because you CAN, doesn’t mean you SHOULD.

Since the ScaleIO Node is completely missing the management and orchestration stack to manage that, update it, and otherwise make it a Hyper-Converged compute thing (including the support model) - it is a storage thing, not a hyper-converged compute thing.  

BTW - get used to this idea.   Expect a OneFS software-only variation (choose your model).  No surprise there - that’s the exact same model as ECS (our Object/HDFS SDS stack).   Each is offered in “software only” and “software + hardware” models - and the software + hardware models will have nothing that stops you running compute, but will not have the M&O stacks and engineering to make them a hyper-converged thing vs. a storage thing.   I suspect that others will (if they aren’t already) offer this choice in packaging.

BTW - if what you need is a hyper-converged compute thing, it’s VxRack or VSPEX Blue, depending on scale.

Here’s the continuum: ScaleIO = software only (use however you want) -> ScaleIO Node = software + hardware node (just like an Isilon node - which is software packaged with an industry-standard server) -> VxRack = hyper-converged rack-scale infrastructure.

[Image: the ScaleIO -> ScaleIO Node -> VxRack continuum]

What’s going on with VSPEX Blue?   Building momentum and commitment.  

[Image: VSPEX Blue momentum]

My personal view is that you cannot simultaneously design for “start small” and “scale big”.  

  • When it comes to turnkey Hyper-Converged appliances, VSPEX Blue and its roadmap is our answer.  It’s simple, it’s turnkey, and it’s performant and feature-rich.  Its total focus on vSphere and VSAN is unparalleled when you are focused on simplicity…. And VMware and EMC won’t stop here - we will keep pushing this Hyper-Converged Infrastructure Appliance (HCIA) market forward, faster, and faster, and faster.
  • If you want a rack-scale model that can scale to thousands of nodes and you’re an Enterprise Datacenter (which is typically pretty heterogeneous), VxRack (including, but not limited to, the EVO SDDC Suite persona - and the higher-level curated workflows and ecosystem in the Federation Enterprise Hybrid Cloud stack) is the answer.
  • If you want a rack-scale model that can scale to thousands of nodes and it’s for a pure Cloud Native App use case, VxRack with the Photon Platform and Pivotal Cloud Foundry is the answer.
So - today is another example of our model: the SDS data planes are real, and we will give you the choice of package that fits you best….
 
...I’m insanely curious:
 
a) have you downloaded the ScaleIO and VSAN bits?  What do you think?
b) where are YOU on the “circle of illogical choice”?  Do YOU think it’s illogical?
 
 

 

Horizon View 6.2 - What’s New

Horizon View 6.2 was released one day ago, and I definitely didn’t expect to see an update like this at this time of the year. This is the third update of VMware’s VDI solution this year, after Horizon 6.1 and Horizon 6.1.1. As a VDI engineer responsible for designing and engineering the Horizon View VDI of one of the biggest pharmaceutical companies in the world, I am impressed that the platform is updated so often, adding so many great and valuable features and capabilities.

Let’s go through some of the main new features that impressed me.

Full Windows 10 Support

I expected this support, but honestly I didn’t believe that we would have it within the next couple of months. Windows 10 was released a month (or so) ago and is already added as a supported OS in Horizon View. Well done, VMware EUC! Windows 10 support means that you can install the Horizon View Client on it, you can use it as a source for your desktop pools, and the user profile migration tool provided by VMware in previous versions is now officially supported for Win10 profiles.

Hosted Apps and Terminal Sessions over PCoIP or Blast

Now you can use View Composer to automate the creation of RDS servers, whether they will be used for application or session provisioning. Nice, isn’t it? Support for 3D vDGA and GRID vGPU was also added, similar to the support for virtual desktops.

Cloud Pod Architecture Enhancements

If you are not aware of the Cloud Pod Architecture, you can check my previous post on this topic. With Horizon 6.2 we can provide hosted applications in the same way we could provide virtual desktops in the previous release. HTML5 access to applications and virtual desktops can also be used with CPA, which wasn’t possible in Horizon 6.1.1.

Virtual SAN 6.1

One of the best features of VMware VSAN 6.1 is the stretched cluster support. This new enhancement could introduce some changes in your design and provide a great level of availability even if an entire site is down.

Access Point Integration

Access Point is an alternative to View Security Servers that allows access to corporate virtual desktops from the internet without a VPN. In my experience, I have had cases where putting View Security Servers in the DMZ was a serious concern for the ITSEC teams. Now we have an alternative: a hardened Linux appliance.

View Administrator Enhancements

This is something I will check and review in a separate article. In my opinion, View Administrator could be much more powerful than it is now. This release is a step forward, adding some new information and actions - pool cloning, for example.

Horizon 6 for Linux Desktop Enhancements

As I wrote a month ago, you can now provide Linux-based virtual desktops. Now the list of supported OSes is extended, and you can also use graphics-intensive applications thanks to the support for NVIDIA GRID vGPU and vSGA.

As with any software release, it provides not only new features but also a bunch of improvements and bug fixes. Here is the complete list.

 

The post Horizon View 6.2 - What’s New appeared first on The Virtualist.

My 5 reasons to choose Altaro Backup

Altaro, a small company offering backup solutions targeted towards SMBs and focused primarily on Microsoft Hyper-V Server, has just made a step forward and added VMware support.

I had an opportunity to test the beta version of the new VMware backup solution, and here are my 5 reasons why I like the Altaro solution and business model. I have long experience with VMware Data Protection, so I picked the items that I see as real benefits for small installations. I will mention not only technical aspects but also some licensing, for those of you who are responsible for keeping the budget.

This post is a tutorial-like flow of the information I find most relevant to the environment I manage, captured while playing around with the new version for VMware virtualized environments.

1. Reverse Delta backup technology

Reverse Delta is Altaro’s proprietary backup deduplication technology. With Reverse Delta, the latest version of a file is always stored in its entirety, not as a delta file. This means that if you need the latest version of a file, you can access it directly from your backup drive without having to rebuild it from delta files. The delta files are only used if you want to rebuild a previous version of the file, applying one delta file at a time for each version as you travel back in time (see Fig. 1).

Figure 1 – Altaro Reverse Delta

Older versions are restored by first restoring the latest one, then applying the previous delta over it to rebuild the previous version, and so on, going one version further back in time with each delta.
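That restore walk is easy to model. Below is a minimal Python sketch of the idea (an illustration only, not Altaro’s actual on-disk format or code): the newest version is kept whole, and each older version is reached by replaying reverse deltas one at a time.

```python
# Simplified model of reverse-delta storage: the newest version of a file is
# stored whole, and each backup records a patch that turns version N back
# into version N-1.  Illustrative sketch only - not Altaro's implementation.

def make_delta(new: bytes, old: bytes) -> dict:
    """Record the byte positions where `old` differs from `new`."""
    # A real implementation would diff variable-size blocks, not single bytes.
    length = max(len(new), len(old))
    changes = {i: old[i:i + 1] for i in range(length)
               if new[i:i + 1] != old[i:i + 1]}
    return {"len": len(old), "changes": changes}

def apply_delta(current: bytes, delta: dict) -> bytes:
    """Rebuild the previous version of a file from the current one."""
    buf = bytearray(current[:delta["len"]].ljust(delta["len"], b"\x00"))
    for pos, byte in delta["changes"].items():
        buf[pos:pos + 1] = byte   # positions past the end are no-ops
    return bytes(buf)

class ReverseDeltaStore:
    def __init__(self, first_version: bytes):
        self.latest = first_version
        self.deltas = []          # deltas[-1] turns `latest` into latest-1

    def backup(self, new_version: bytes):
        self.deltas.append(make_delta(new_version, self.latest))
        self.latest = new_version  # newest copy is always stored whole

    def restore(self, versions_back: int) -> bytes:
        if versions_back > len(self.deltas):
            raise ValueError("not enough history")
        data = self.latest         # the latest version needs no delta replay
        for delta in reversed(self.deltas[len(self.deltas) - versions_back:]):
            data = apply_delta(data, delta)
        return data

# Usage: three backups, then walk back through the versions.
store = ReverseDeltaStore(b"version one")
store.backup(b"version two")
store.backup(b"version three")
```

Note that restoring the latest version touches no deltas at all - which is the whole point of the reverse-delta design.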

2. Flexible architecture

Nowadays, appliances are used more and more to simplify the delivery and operation of applications. But this comes with a cost: dedicated resources (IP, server name, etc.) are needed, which small and medium-sized businesses often cannot easily afford.

Altaro’s architecture solves exactly this situation: VM Backup uses installable components:

  • The main application,
  • Remote management tools,
  • Hyper-V host agent (for Hyper-V),
  • Offsite copy utility (Altaro Backup Server).

The main application contains the code that manages the backup and restore tasks. The install also includes the Altaro Remote Management Console. If you want to manage Altaro VM Backup from a different machine, you must install the Altaro Management Tools. If you want to make use of the off-site backup feature in Altaro Hyper-V Backup, you must install the Altaro Backup Server on a remote machine.

Hence, you can install all components on the same Windows machine (all versions since 2008 R2 are supported) or, depending on your environment’s complexity, you can separate the components. If vCenter Server is running on Windows, you can install Altaro VM Backup on the vCenter server.

Hardware requirements are very low: 128MB RAM and 1GB of hard disk space. You also need to consider an additional 75MB for each concurrent backup/restore.
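Those numbers make capacity planning trivial; here is a quick sketch of the RAM budget (figures taken from the requirements above):

```python
BASE_RAM_MB = 128   # base requirement for the Altaro application
PER_JOB_MB = 75     # extra RAM per concurrent backup/restore job

def ram_needed_mb(concurrent_jobs: int) -> int:
    """Minimum RAM, in MB, for a host running N concurrent backup/restore jobs."""
    return BASE_RAM_MB + PER_JOB_MB * concurrent_jobs

print(ram_needed_mb(4))  # e.g. four concurrent backups -> 428 MB
```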

3. Server-based licensing

Altaro Backup comes in three editions: Standard, Unlimited, and a Free edition. The only difference between the Standard and the Unlimited version is the number of virtual machines you can back up: the Standard edition is limited to 5 virtual machines, whereas the Unlimited edition has no limit.

The pricing is not calculated per CPU – instead it’s calculated by the number of hosts. If you have 2 servers, each with 2 sockets, you have to buy 2 licenses. With products licensed per CPU socket, you would have to buy 4 licenses.

The free version allows you to back up 2 VMs per host, with some limitations: restoring to a different host is not possible, and there is no sandbox restore and no file-level restore. But you can back up 10 VMs for free, hosted on 5 hosts, 2 VMs on each.
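The difference between per-host and per-socket licensing is easy to see in a couple of lines (a sketch of the license counting only; actual prices vary by vendor):

```python
def licenses_needed(hosts: int, sockets_per_host: int, per_host: bool) -> int:
    """License count under per-host licensing (Altaro's model) vs. per-socket licensing."""
    return hosts if per_host else hosts * sockets_per_host

# The example from the text: 2 servers, each with 2 CPU sockets.
print(licenses_needed(2, 2, per_host=True))   # Altaro: 2 licenses
print(licenses_needed(2, 2, per_host=False))  # per-socket product: 4 licenses
```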

4. It works for both Hyper-V and VMware

Yes, you can manage VM backups across all your Hyper-V and/or VMware hosts from a single interface, with the paid version (see Figure 2).

Figure 2 – Altaro VM Backup Dashboard

And yes, you will utilize the same backup/restore features:

  • Agentless (support for SQL and Exchange requires an agent though)
  • Reverse Delta technology
  • Restore on File System Level (for Windows VMs only)
  • Offsite replication using WAN acceleration to another Altaro backup server (no additional license required)
  • “Sandbox” Restore which allows you to either schedule, or manually run test drills to verify the integrity of your backup data
  • E-Mail based alerting

With Altaro VM Backup you can save backups to a local drive or a UNC share. You can save to a single location, to multiple locations (swapped), or to an offsite (WAN) Altaro Backup Server. Backup destinations can be:

  • USB External Drives
  • eSATA External Drives
  • USB Flash Drives
  • Fileserver Network Shares using UNC Paths
  • NAS devices (Network Attached Storage) using UNC Paths
  • PC Internal Hard Drives (recommended only for evaluation purposes)
  • RDX Cartridges
  • Offsite

5. Easy to install and use

The installation kit (185MB) can be downloaded after a simple registration process (name and valid e-mail address only). It takes a couple of minutes to receive the mail with the download link.

The installation wizard is very straightforward: run, next, accept license agreement terms, next, next, next, finish.

The initial setup is also very easy if you use a quick setup option (see Figure 3):

Figure 3 – Quick Setup

In Step 1 you add the source: Hyper-V host(s), ESXi host(s) or vCenter (see Figure 4):

Figure 4 – Add host

Then enter the credentials and test connection (see figure 5):

Figure 5 – Add credentials and test connection

In step 2 you add the backup destination (see Figure 6):

Figure 6 – Add backup location

And finally, in step 3 you add VMs to backup location with drag and drop and perform the initial backup (see Figure 7):

Figure 7 – Perform initial backup

What’s next? Get and enter the license key, create the backup schedule, and check the advanced settings.

And when a disaster happens and you need to restore, the backup is there. It is definitely a technical solution to consider if you ask me.

Well, this is it – the highlights that matter to me, from a short review I’ve conducted. I’m happy to hear your comments and opinions on backup solutions for ESXi.

The post My 5 reasons to choose Altaro Backup appeared first on The Virtualist.

ownCloud Desktop Client 2.0 is out with Multi Account Support and More

Selecting folders to sync


Version 2.0.0 of the ownCloud Desktop Sync Client has been released today. It introduces multi-account support, large-folder sync confirmation, and more. Your client will update automatically, and packages for your Linux distribution are available as well. Read on for some highlights of this release!

Major new features

The brand new sync client offers a redesigned user interface to allow for the biggest new feature: multi-account support. This allows users to add more than one ownCloud server to their client, each of them assigned separate folders. If you have a private and a work ownCloud server, this is immensely helpful. As with previous releases, for each account you can select multiple remote folders to synchronize with specific local folders, and within each, choose which subfolders you want to sync locally.

Another important feature allows users to determine the behavior when large new directories appear in a server-side sync folder. While selective sync allows a user to choose not to sync a folder, when a folder is shared it is first synced before the user can make this decision. This could result in a full drive if the folder is big. We have fixed this problem with a new feature: sync threshold sizes.

Configuring max folder size

With version 2 of the sync client the user can set a threshold size on any desktop above which a new folder will not be synced automatically to the desktop – and this is for any folder, not just shared folders. If a user wants to have a new folder above the threshold synced, they just use the selective sync check box and the files and folders will be synced. For example, say the user has set the size to 750MB. If a new folder is added that is larger than 750MB, a notification will pop up. The user will then have to specifically checkmark the folder to have it synced. This threshold size is editable by the user, can be set to 0 or greater, and can be set to different thresholds on each desktop client.
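The behavior described above boils down to a simple per-folder check. Here is a Python sketch of the decision logic (names are illustrative, not the actual client code; I am treating a threshold of 0 as “never auto-sync”):

```python
def should_auto_sync(folder_size_bytes: int, threshold_mb: int,
                     user_approved: bool) -> bool:
    """Decide whether a newly appeared remote folder is synced automatically.

    A folder above the threshold is held back until the user explicitly
    ticks it in the selective-sync list (user_approved=True).
    """
    if user_approved:                  # explicit checkmark always wins
        return True
    return folder_size_bytes <= threshold_mb * 1024 * 1024

# The example from the text: threshold set to 750 MB.
print(should_auto_sync(500 * 1024**2, 750, user_approved=False))  # True: under threshold
print(should_auto_sync(900 * 1024**2, 750, user_approved=False))  # False: notify the user
```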

And more

There have been a number of other changes. Some are platform-specific like support for longer path names on Windows and native Finder integration for OS X 10.10 Yosemite. Both platforms now also support not syncing hidden files. The improved progress reporting during sync will benefit users on all platforms, as will the automatic limit setting for download bandwidth throttling. Last but not least, there have been many smaller performance and stability improvements in the client.

You can grab the latest version from owncloud.org/install.

HP introduces new 3PAR StoreServ 8000 series arrays for the midrange

After almost a 3-year run, HP is replacing the 3PAR StoreServ 7000 series with the all-new 3PAR StoreServ 8000 series arrays.  This news comes while HP is celebrating how well its mid-range 3PAR arrays have been selling versus competitors. The new arrays feature upgraded hardware, including 16-gigabit Fibre Channel and 12-gigabit SAS connectivity for the drives, and will feature the same fifth-generation ASIC that was introduced in the 20000 series arrays earlier this year.  The 8000 series also increases the density of storage possible across the board in the 3PAR arrays, reducing the footprint and increasing the top-end capacities.

In terms of portfolio, HP touts a single architecture, single OS and single management across a wide range of solutions with the HP 3PAR.  With the 8000 series introduction, the difference between 3PAR models comes down to the number of controller nodes and associated ports, the types of drives in the array and the number of ASICs in the controllers.  The 8000 series features a single ASIC per controller node and the 20000 series features 2 ASICs per controller node along with more CPU capacity and more RAM for caching.

Both the 8000 and 20000 series arrays feature the 3PAR Gen5 ASIC, the latest generation introduced earlier in 2015.  If history repeats, additional capabilities of the Gen5 ASIC will be unlocked by future software upgrades on these two new series of arrays, but out of the gate the new platforms are already touting density and performance gains.  HP says it has increased density by 4x, increased performance by 30 to 40 percent, and decreased latency by 40 percent between the 7000 and 8000 series arrays.  HP says the 8000 series can provide up to 1 million IOPS at 0.387 ms latency.

HP also announced a new 20450 all-flash starter kit.  This model scales to a maximum of 4 controller nodes as opposed to 8 controller nodes in the 20800 and 20850 models. The 20000 series are the high-end storage arrays HP introduced earlier this year to replace the 10000 series arrays, and are typically targeted at large enterprise and service providers.

That rounds out the HP 3PAR portfolio with the following models:

  • HP 3PAR StoreServ 8200 is the low-end dual-controller model that scales up to 750TB of raw capacity
  • HP 3PAR StoreServ 8400 scales up to 4 controller nodes and is capable of scaling out to 2.4PB of raw capacity
  • HP 3PAR StoreServ 8440 is the converged flash array that provides similar high performance to an 8450 array, but with the ability to also have spinning disks.  It scales up to 4 controller nodes and includes an increased amount of cache on the controller pairs, comparable to the per-node cache of an all-flash array.
  • HP 3PAR StoreServ 8450 is the all-flash storage array; it scales up to 4 controller nodes and up to 1.8PB of raw capacity, with a usable capacity of over 5.5PB.  This is the model HP talks about when it says 1 million IOPS at under 1 ms of latency.
  • HP 3PAR StoreServ 20450, a quad-controller, all-flash configuration with larger scale than the 3PAR 8450
  • HP 3PAR StoreServ 20800, the workhorse array with up to 8 controller nodes and a mix of hard disk and solid state drives.
  • HP 3PAR StoreServ 20850, the all-flash configuration of the 20000 series.

[Image: HP 3PAR StoreServ 8000 series]

HP announced that the new 8450 all-flash array is available in a 2U starter kit priced at just $19,000 for 6TB of usable storage.  When HP talks about usable storage on the all-flash array, it assumes a 4-to-1 compaction using its thin provisioning and thin deduplication – both native, real-time capabilities powered by the ASIC.  The same array can also be configured with up to 280TB of usable capacity in just 2U of space.
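It is worth being clear about what “usable” means here. The quoted figures apply HP’s assumed 4:1 compaction ratio to the raw flash, so the arithmetic is simply:

```python
def usable_tb(raw_tb: float, compaction_ratio: float = 4.0) -> float:
    """Usable capacity as HP quotes it: raw flash times the assumed compaction ratio."""
    return raw_tb * compaction_ratio

# The $19,000 starter kit: 6TB usable implies roughly 1.5TB of raw flash at 4:1
# (the raw figure is my back-calculation, not an HP-published number).
print(usable_tb(1.5))  # -> 6.0
```

Whether your workload actually compacts at 4:1 is, of course, the assumption to test before trusting the “usable” number.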

All this news comes just in time for VMworld, where HP is going to be showing the new arrays publicly for the first time.  I look forward to checking them out on the show floor and talking with some HP folks to find out more.

 

VMworld 2015 Session: INF5211 – Automating Everything VMware with PowerCLI – Deep Dive

PowerCLI is the number 1 automation product from VMware, in previous years you have seen how to get the most out of your infrastructure using PowerCLI. This year we will take it further than ever including new product automation never seen before, automating your public cloud instances and showing migration and reporting for your cloud workloads. This session will demonstrate how to bring DevOps working practices into your vSphere automation with PowerCLI and Desired State Configuration (DSC).