Tintri VMstore – VM only storage appliance

Last evening I had a WebEx session with Tintri in which they told me about their “VM only” storage appliance VMstore, and I must admit that I’m impressed with what they have to offer. I have not yet had the opportunity to test this appliance; all info in this blog post is from the WebEx session and documentation provided by Tintri.


What is Tintri VMstore?

It’s an easy-to-install storage box that comes in only one configuration: 8.5TB of usable capacity. Inside the box is a mix of SATA disks and flash disks. The storage is offered to your VMware environment as one big NFS datastore. By moving data back and forth between SATA and flash, VMstore eliminates storage performance bottlenecks.


What’s under the hood?

The idea of the VMstore is that you no longer carve your storage into different volumes, LUNs, RAID configs, etc. You have just one big volume that is presented as one single datastore to your VMware infrastructure. Having just one datastore and no LUNs with different performance characteristics eliminates a lot of storage configuration and management.

What VMstore actually does is move your data from slow rotating disks to super-fast flash storage. Moving ALL of your data to flash would be very costly, so they use the flash storage as a cache, albeit a rather big one. Contrary to other vendors, VMstore uses the flash for both reads and writes, not just reads.

To make optimal use of the flash cache, all data that is moved into cache is compressed and deduplicated. Where other storage vendors move 64K blocks of data into cache, VMstore uses only 8K blocks, making it possible to address the data that should be cached much more precisely. Tintri says they’re hitting cache for 97% of all IOPS in production environments.
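The benefit of the smaller cache granularity is easy to see with a back-of-the-envelope sketch (the hot-spot count below is invented for illustration): when hot data is scattered in pieces smaller than the cache block, every cached piece drags a whole cache block into flash with it.

```python
def cache_footprint_kib(n_hot_spots, cache_block_kib):
    """Flash consumed when each scattered hot spot occupies one cache block."""
    return n_hot_spots * cache_block_kib

# 1,000 scattered hot spots, each smaller than 8 KiB:
print(cache_footprint_kib(1000, 8))   # 8000 KiB of flash with 8K cache blocks
print(cache_footprint_kib(1000, 64))  # 64000 KiB with 64K cache blocks
```

The same hot data consumes eight times the flash at 64K granularity, which is why the finer 8K blocks let the cache hold more of the working set.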

Of course, the flash and the 16 SATA disks are protected by RAID, in this case RAID 6; since there is just one pool, there are no different RAID levels to choose between for your workloads.



Another technique they are using, which will be announced soon, is auto-alignment. Yes, that is correct: VMstore will automatically align all those VMDKs that you place on it. This is a feature I would welcome very much, not even so much for the performance gains it would bring to VMstore, but for all those VMs still on my to-do list that need re-alignment. Maybe I can ‘test’ a VMstore appliance for a week and Storage VMotion all my VMs back and forth between my current storage and the VMstore.


Silver, Gold, Platinum

Since there is just one big volume, there is no option to differentiate between Silver, Gold or Platinum performance levels. The only influence you have on the performance of a VM (or a single VMDK of a VM) is to pin it to the flash cache. Say a VM with a database inside has been running for a few days and the most-used parts of its VMDK have been moved into flash; you can now pin that VMDK to the flash storage. From then on, the blocks of this VMDK that were in flash will remain in flash, even if in normal use VMstore would have moved those blocks back to the SATA disks. Any extra blocks of this VMDK that are moved from SATA to flash will also be kept in flash for as long as the VMDK is pinned.
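Pinning can be pictured as a cache whose eviction policy simply skips pinned entries. The toy Python below is a generic LRU sketch, not Tintri’s implementation; the class and method names are invented for illustration.

```python
from collections import OrderedDict

class FlashCache:
    """Toy LRU flash cache with per-block pinning (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_id -> pinned flag, in LRU order

    def access(self, block_id, pinned=False):
        if block_id in self.blocks:
            pinned = pinned or self.blocks[block_id]  # pinning is sticky
            self.blocks.move_to_end(block_id)         # mark most recently used
            self.blocks[block_id] = pinned
            return "hit"
        if len(self.blocks) >= self.capacity:
            self._evict_one()
        self.blocks[block_id] = pinned
        return "miss"

    def _evict_one(self):
        # Evict the least recently used *unpinned* block; pinned blocks stay.
        for bid, pinned in self.blocks.items():
            if not pinned:
                del self.blocks[bid]
                return
        raise RuntimeError("cache is full of pinned blocks")
```

With a capacity of two, accessing blocks "a" (pinned), "b", then "c" evicts "b" rather than the older but pinned "a", which is exactly the behavior described above.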


Managing VMstore

The goal was to create storage that needs hardly any management, and indeed, all the management you do on the VMstore is deciding whether or not to pin a VM into your flash cache, and maybe some day replacing a disk.

VMstore has a very intuitive web interface in which you can quickly see how your storage is performing. Again, performance is key here, so the view that shows how much capacity is left reports it as “Performance reserves”.


Seeing latency at VM level

A very powerful tool is seeing latency at the VM or VMDK level. In just a few clicks you can see how your VM is performing. Normally you first had to check at the storage level which LUN was showing high latency, then find out which VMs were running on it and try to figure out which one was the culprit. No more need for that; just open the VMstore web interface.



VMstore is aimed at enterprise customers, since you need a certain workload on your storage before you run into performance bottlenecks caused by storage configuration. A small environment with just a few IOPS that is looking for a lot of room to store data is not the customer that will benefit from a VMstore.

To give you an idea of what Tintri is aiming for: they claim a VMstore can outperform an EMC CLARiiON with 250 spindles. Right now Tintri is testing the VMstore with 65/35 read/write workloads and claims to be able to hit 50,000 IOPS.

A VMstore with 8.5TB storage should sell for around $65,000 – $68,000 list price.

Any drawbacks?

After listening to the presentation and discussing some topics, I think there remain some points that should be improved.

  • There is just one controller (with dual NICs, though) in the current box. You can choose between an RJ45 connection or a 10Gbit connection, but it is still just one controller that connects the VMstore to your VMware infrastructure. This seems a big point for enterprise-ready storage. The 2nd-generation VMstore, which will be presented at VMworld, will contain two controllers.
  • Another enterprise feature that is missing right now, and will probably be available in the next release, is replication. Right now there is no replication at all; Tintri plans to add async replication in the next release.
  • In the current release there is no support for VMware VAAI yet. VAAI offloads storage workloads from the hypervisor to the storage, which would gain you some extra performance. However, you won’t use VAAI that often during normal operation, and the performance benefit isn’t that big. In vSphere 5, VAAI for NFS will be introduced, and Tintri is planning to include it in their next release.
  • I’m not sure yet about the concept of just one model: 8.5TB. If you run out of space, you need to buy a second 8.5TB box. I think data growth within a company has to be really huge to justify buying 8.5TB at once.
  • And then there is, of course, the point of real-world performance. How will the VMstore handle a lot of random reads and writes? When will workloads start generating cache misses, and how will the SATA disks perform in that scenario? We’ll have to wait until we get more real-life data from customers.

Overall I very much liked what I saw. Of course I can’t comment on performance at all, but the presentation convinced me that VMstore will lower the cost of implementing and managing your storage, if VM storage is the only storage you need.

The view on latency at VM and VMDK level and the auto-alignment are fantastic. The complete absence of difficult storage management is a big, big plus for the VMstore. I think that with the coming new version, the VMstore will be a real enterprise-ready device.

See full post at: Tintri VMstore – VM only storage appliance

Simplifying IT support and deployments with converged systems

All IT solutions will experience problems at some point in their life. Supporting IT solutions is difficult, time-consuming and costly, but it’s also a fact of life – a fact that, as a systems administrator, I am thankful for. It means I have a job. Problem-solving skills are absolutely necessary, but all administrators need the expert help of vendors’ support departments when we run into something we just don’t know.

Unfortunately, when multiple vendors’ products are coupled together as a solution, support can become nasty as vendors point back and forth at each other while trying to get to a resolution. The more complex the solution (a SAN, for instance), the more difficult it is to troubleshoot through the multiple layers of software, firmware and hardware, and the multiple vendors in the solution. And, I believe, the hassle has made customers seek a better way.

Finding a better way

In my employer’s case, they chose to standardize on a single vendor long before I joined the staff. We have stuck with servers and storage hardware from that single vendor, including their certified part upgrades (no third-party upgrade components). We chose to do this to simplify our support and avoid finger-pointing.

The vendor we standardized on was HP, and the reason was that they offered an entire line of products under their umbrella to meet our needs. By the time I joined the staff in 2006, we were already HP-heavy, except where a specific Unix was required by another vendor. What we wanted as a customer was the quickest and easiest route to a resolution, with the least resistance and finger-pointing, when a problem came up. Even beyond the hardware solutions, HP has handled our software support for Microsoft, Red Hat and VMware for many years. We wanted this because the software companies could not finger-point at the hardware or vice versa – HP was doing it all. Sure, it might happen between teams within HP occasionally, but we could easily escalate our case and have a manager bring it to a resolution. It has worked well for our needs.

Having all this expertise in-house is an advantage that HP is now branding under the name “Converged Systems” or the “Instant-On Enterprise”. Earlier this week, I attended a webinar for the Blogger Reality Contest where HP unpacked more of its converged solutions strategy. HP is bringing together all of the pieces spread throughout its portfolio into specialized solutions. It’s not a new concept, in my opinion, but one that some customers have already been using for years on their own. HP has improved on it by tweaking configurations to squeeze out performance and by adding software to ease installation and management of the solutions.

Building Upwards – HP VirtualSystem

HP introduced VirtualSystem in June as a modular, easy and quick way to implement virtualization in customer datacenters.  The VirtualSystem solution is a full package of storage and compute resources plus the software tools to quickly and easily deploy a virtual stack in an environment.

For HP VirtualSystem, the key benefits are:

  • Quick built out timeframe
  • Automation through Insight Control suite components
  • Monitoring through the Insight Dynamics suite components
  • Improved virtual machine performance, cost and scale due to purpose built hardware
  • Ability to upgrade to CloudSystem for fully automated IT
  • Single point of contact for support – HP for compute, storage and software, including hypervisor

HP VirtualSystem comes in three levels (shown below). The VS1 is built using rack-mount ProLiant hardware for both the server hosts and the storage, and features a P4000-series iSCSI storage array. It is rated to handle up to 750 virtual machines and can scale up to 8 physical hosts. The VS2 is built on HP BladeSystem with a P4800 iSCSI storage array (covered in depth last week). It is rated for up to 2,500 virtual machines and can scale up to 24 physical hosts. The third offering is the VS3, which is built on HP BladeSystem and the 3PAR Utility Storage platform to provide ultimate scale and performance. VS3 introduces Fibre Channel storage capability and scales up to 6,000 virtual machines with up to 64 hosts.

In terms of choice, VirtualSystem supports all three major hypervisors from VMware, Microsoft and Citrix.  Using my company as an example again, the multi-hypervisor datacenter already exists.  We are utilizing VMware vSphere heavily and then some Citrix XenServer.  When it came to planning upgrades for our aging MetaFrame/XenApp farm, we looked at virtualization.  As we evaluated XenServer, we found it to be “good enough” for running Citrix XenApp on top of it.  XenApp has its own failover and redundancy built into the application layer, so many of the VMware advanced features did not matter.

For VirtualSystem, HP is also handling all support for both the hardware and software in these solutions. Having experience with HP’s software support teams, I can report that they do a good job. I would not say they are always perfect, but in general they have solved our issues and advised us well, so in reality this is a big benefit. For those who want more than break/fix support, HP offers Proactive 24 Services for an additional level of preventative support.

Building to the cloud – HP CloudSystem

As I learned at HP Discover, just because you have a large virtualization pool in your datacenter does not mean you have a private “cloud.”  The critical difference between a virtual infrastructure and a cloud is orchestration and automation.  Built on top of HP VirtualSystem, HP CloudSystem is a solution that offers all of the necessary orchestration, service catalog and workflows to turn virtual infrastructure into a true cloud.  There is a clear and clean upgrade path from VirtualSystem into CloudSystem.  And for those starting fresh or who want to evaluate the HP solution, there is even an HP CloudStart service which will deliver a rack with CloudSystem into their datacenter and have it fully operational in 30 days or less.

CloudSystem is offered in three levels: CloudSystem Matrix, CloudSystem Enterprise and CloudSystem Service Provider. CloudSystem Matrix is targeted towards those looking to automate the private cloud: customers who want to add automation and orchestration to their existing virtual systems. It provides infrastructure as a service (IaaS) and basic application provisioning in minutes. CloudSystem Enterprise extends upon Matrix and allows for private and hybrid cloud, enabling the bursting of workloads to public cloud. It is a platform for hosting not only IaaS, but Platform as a Service (PaaS) and Software as a Service (SaaS). CloudSystem Enterprise provides application and infrastructure lifecycle management and allows for management of traditional IT resources in addition to virtualized resources. The CloudSystem Service Provider edition extends upon the Enterprise edition and allows for multiple tenants on a single infrastructure, securely, without exposing one customer’s data to another. It is intended to host public and hosted private clouds for customers. Compared to VirtualSystem, the editions in CloudSystem are more about capabilities and less about limits.

Since automation and orchestration are the key to CloudSystem, that is where I wanted to focus. The base of CloudSystem is the Matrix Operating System, which is the same combination of HP software found in the HP VirtualSystem solution. On top of the Matrix Operating System, the CloudSystem Matrix solution includes Cloud Service Automation for Matrix. This software includes Server Automation, which provides lifecycle management for physical and virtual assets via a single portal and set of processes, and HP SiteScope, an agentless monitoring solution for performance and availability.

The enterprise and service provider editions include a beefed-up version called, simply, Cloud Service Automation. It includes the entire orchestration, database and middleware automation pieces of the pie, plus cloud controller software. These additional pieces allow not only the automatic and streamlined provisioning of physical and virtual servers, but also the provisioning of the required glue that sits between the apps and the servers. The diagram below from HP shows all the moving parts of Cloud Service Automation better than I can explain in words. And because Cloud Service Automation is total lifecycle management, it includes the pieces for monitoring and performance management that would be needed. In addition, the centralized portals serve as the point for both end users and IT professionals to manage the cloud.

Cloud Maps are another feature of CloudSystem and these are predefined automation workflows for deploying software and platforms easily.  These are the piece of the puzzle that allows for improved deployment times and also allow for drag and drop creation of new workflows and processes in the cloud.  HP has worked with its software partners to create these maps of requirements and automate the process of deploying their solutions.

Beyond all of these capabilities, HP is working hard to make this an open solution by making it compatible with bursting workloads into third-party clouds, whether it’s Amazon’s EC2 or a vCloud service provider. This was a point stressed during the announcements at HP Discover and during the call on Tuesday.

This is post number two for Thomas Jones’ Blogger Reality Show sponsored by HP and Ivy Worldwide. I ask that readers be as engaged and responsive as possible during this contest. I would like to see the comments and conversations that these entries spark, tweets and retweets if it interests you, and I also request that you vote for this entry using the thumbs up/thumbs down at the top of this page. As I said earlier, our readers play a large part in scoring, so participate in my blog and all the others!

This isn’t the first time I’ve written about CloudSystem.  In June,  I posted about my take on CloudSystem Service Provider from a potential service provider’s perspective.  I encourage you to take a look at that post, too, after you take a minute to comment and/or vote on this post.

SAN booting alternatives for data storage managers

What you will learn in this tip: Storage-area network (SAN) booting and server virtualization are fueling the trend toward diskless servers. Learn about the move from internal to external storage and other booting alternatives available for your organization.

Enterprise storage has always implied the use of hard disk drives, but times are changing. The spread of server virtualization and the increase of SAN booting has created a new trend toward diskless servers. Server manufacturers have recognized this shift and are offering USB flash drives and SD card slots. This trend away from disk might seem threatening, but it actually increases the impact of enterprise data storage technologies.

Internal to external and networked storage

Most enterprise application data resides on external storage, whether it’s SAS, a direct-attached storage (DAS) array, an iSCSI or Fibre Channel (FC) SAN, or using network-attached storage (NAS). Networked shared storage (SAN and NAS) has become increasingly valuable in the modern data center because it provides performance, efficiency and flexibility unmatched by DAS or internal hard disk drives.

One holdout in this shift to external and networked storage is the initial loading of the operating system (OS). A PC’s BIOS expects to boot from an internal hard disk drive, so most servers include one or two small drives for boot even if all their data is on a SAN LUN or NAS filer. Although booting from SAN has long been possible in both Fibre Channel and iSCSI environments, it never gained acceptance from server administrators. They simply felt more comfortable booting from an internal hard disk drive.

Server virtualization has changed this attitude. Because virtual machine (VM) hypervisors require high-bandwidth I/O, most use SAN or NAS for the majority of their storage already. Guest virtual machines running inside use this shared storage as their boot drive exclusively. This means that highly virtualized environments already boot the majority of their machines from SAN or NAS.

Server virtualization and blade servers

As server administrators have gained confidence in both networked storage and heavily utilized server hardware, they have begun to question their use of internal hard disk drives for booting. Using USB flash drives to load the VMware ESX hypervisor has become a major trend among server virtualization specialists, and they’re rewarded with reduced part counts, increased machine density and greater flexibility to move workloads from machine to machine.

Nowhere is the density of computing more pronounced than in the world of blade servers. Although most vendors still specify on-board hard disk drives for their blades, they’ve rapidly moved to more compact 2.5” mechanisms. The blade server market is trending toward “no-personality” approaches, where the physical blade isn’t tied to a specific running OS instance. This makes on-board disks even more of a liability, spurring adoption of SD cards and booting from SAN.

The same trend is happening in the high-performance computing (HPC) space, where booting from SAN is commonplace. In all cases, server architects who prize flexibility and dynamic operations favor eliminating booting from an internal hard disk drive. As they gain comfort with booting from flash media (USB or SD) and SAN, it’s likely this trend will impact standalone servers as well.

Impacting SAN design

Whether booting from SAN or flash, the availability and performance of the storage network becomes critical. With all server I/O traveling over the network, IT architects must redouble their efforts to provide quick and reliable connections from the host bus adapter (HBA) to the array.

Attaining this kind of bulletproof reliability isn’t new to enterprise storage. SAN administrators are committed to the use of high-availability technologies like multipathing software, and Fibre Channel SANs are particularly efficient at delivering high-performance I/O. In most cases, the same equipment and techniques are applicable to server virtualization and blade server environments as to conventional enterprise applications.

But booting from iSCSI and NAS is something new, and the practice can impact SAN design. iSCSI SANs are often engineered for performance, but they’re not always designed for reliability. In my experience, the use of multiple networks and high-end switches is rare in iSCSI SANs, as is the use of advanced security features like mutual CHAP. NFS typically runs over the “regular” LAN, with competing workloads and iffy reliability. These networks must be improved if they’re to support OS booting.

Is SAN booting right for you?

SAN booting has everything to do with server deployment strategies and very little to do with the needs or desires of data storage pros. It’s likely this trend will grow and spread as next-generation server architectures are deployed, so SAN designs and storage product selection must take it into account. As always, architect high-performance, low-latency, high-availability storage networks regardless of the protocol used. And don’t be surprised if you start seeing USB drives and SD cards showing up as alternative boot drives.

There’s another related storm on the horizon: Virtual desktop infrastructure (VDI), which by its very definition, is an alternative boot architecture. Most VDI implementations function similarly to the server virtualization scenarios outlined above, relying on a server to supply a virtual disk with both operating system and application data. But VDI tends to be more demanding of storage resources, with “boot storms” looming as employees arrive at work. Preparing for alternative server boot solutions will help make you ready for VDI.

BIO: Stephen Foskett is an independent consultant and author specializing in enterprise storage and cloud computing. He is responsible for Gestalt IT, a community of independent IT thought leaders, and organizes their Tech Field Day events. He can be found online at GestaltIT.com, FoskettS.net and on Twitter at @SFoskett.

via SAN booting alternatives for data storage managers.

MLC Flash Versus SLC Flash

EMC’s recent announcement at EMC World of Project Lightning documents a program to increase the use of flash devices in enterprise storage. The project includes increased use of flash storage in EMC arrays, all-flash storage configurations, and support for Multi-Level Cell (MLC) flash. This last subject–MLC flash and how it differs from SLC flash–piqued my curiosity.

Many years ago I studied electrical engineering. I was awful at it. Analog was never my thing; I much prefer ones and zeroes. But I challenge myself to think about electronics once in a blue moon, so I decided to delve into SLC and MLC flash technologies to understand how they differ and why we should care. The content below summarizes my online research and the little bit I remember from school. If you can add to, correct, or update this article, I would be happy to have your comments.

What is the Difference Between MLC and SLC Flash?

MLC flash uses many discrete voltage levels to store multiple values, or bits, per cell [1]. Single-Level Cell (SLC) technology uses fewer voltage levels to program a single bit of information into the cell. MLC technology obviously provides greater density, which means it stores more data more cheaply. But the higher density comes at a cost: MLC produces storage that is more sensitive to temperature changes, slightly slower, and more likely to fail than SLC flash.
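The density relationship is simple arithmetic: a cell storing n bits must distinguish 2^n voltage levels, so each extra bit per cell doubles the number of levels the electronics must tell apart.

```python
def voltage_levels(bits_per_cell: int) -> int:
    """Discrete voltage levels a flash cell needs to store n bits."""
    return 2 ** bits_per_cell

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    print(f"{name}: {bits} bit(s)/cell -> {voltage_levels(bits)} voltage levels")
# SLC: 1 bit(s)/cell -> 2 voltage levels
# MLC: 2 bit(s)/cell -> 4 voltage levels
# TLC: 3 bit(s)/cell -> 8 voltage levels
```

Narrower gaps between more levels are also why MLC is more sensitive to noise and temperature drift than SLC.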

SLC flash has ten times the endurance for write/erase operations [2]. At an average of 10,000 write/erase cycles, an MLC flash cell will die. SLC flash cells can sustain an average of 100,000 write/erase cycles. But why do MLC flash cells fail more than SLC?

Why Do MLC Cells Fail More Than SLC Cells?

I was unable to find an answer to this anywhere on the web. If you see one, I would love to read it. But from the dark, dusty corners of my memory I remember enough about electronics to hazard a guess. As I see it, there are two reasons why MLC flash should fail more than SLC: one is statistical and the other is electronic.

The statistical argument is that MLC cells are being written to 50% more than SLC cells. They will simply wear out sooner. An SLC cell stores the value zero or one. When an application writes to the data held by that cell, there is a 50% chance that the cell’s value has changed and it requires reprogramming. Because an MLC cell stores two bits, there is a 75% chance that the new two-bit value differs from the existing one. This means an MLC cell is reprogrammed 0.75 times for every 0.5 times an SLC cell is. That’s a 50% increase.
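Those 50% and 75% figures are easy to check with a quick simulation of random overwrites. This is a sketch under the same assumption as the argument above: each new value is drawn uniformly at random, independent of the old one.

```python
import random

def rewrite_probability(bits_per_cell, trials=100_000, seed=42):
    """Fraction of random overwrites that actually change a cell's value."""
    rng = random.Random(seed)
    levels = 2 ** bits_per_cell
    changed = 0
    old = rng.randrange(levels)
    for _ in range(trials):
        new = rng.randrange(levels)
        if new != old:
            changed += 1   # value differs, so the cell must be reprogrammed
        old = new
    return changed / trials

print(rewrite_probability(1))  # ~0.50 for SLC (1 bit, 2 levels)
print(rewrite_probability(2))  # ~0.75 for MLC (2 bits, 4 levels)
```

In general the chance of change is (levels − 1) / levels, which is 1/2 for SLC and 3/4 for MLC, matching the ratio in the text.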

The electronic argument is based on MLC flash programming requiring a wider range of voltages [3]. Higher voltages produce greater amperage. This exacerbates electromigration. And the higher voltage on the transistor’s gate increases erosion of the polysilicon that separates the gate and the channel. Both of these result in circuit failure.

How Is MLC Being Made More Reliable?

Because MLC is so much more cost effective than SLC, industry innovation is improving MLC reliability.  Here are a few techniques I found online [4]:

  • Hardware can level writes, distributing them throughout the device to balance cell use and avoid hotspots. This means an entire flash drive will tend to fail at once after a long time, as opposed to a small number of overworked hotspots failing quickly.
  • Hardware can include a DRAM cache used to coalesce writes, which decreases cell write counts.
  • Flash devices can be over-provisioned for error detection, correction, and dynamic bad cell replacement.
  • There are also a variety of proprietary techniques from flash manufacturers.
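The effect of the first technique, wear leveling, can be sketched with a toy model (the write counts below are invented; real controllers remap blocks in far more sophisticated ways):

```python
def max_cell_wear(writes, n_cells, wear_level=False):
    """Max per-cell write count for a stream of logical-block writes."""
    wear = [0] * n_cells
    if wear_level:
        # Idealized leveling: each write is remapped to the least-worn cell.
        for _ in writes:
            target = wear.index(min(wear))
            wear[target] += 1
    else:
        # Naive mapping: a logical block always lands on the same cell.
        for block in writes:
            wear[block % n_cells] += 1
    return max(wear)

hot = [7] * 10_000  # 10,000 writes hammering one hot logical block
print(max_cell_wear(hot, 100))                   # 10000: one cell takes it all
print(max_cell_wear(hot, 100, wear_level=True))  # 100: spread over 100 cells
```

Without leveling, a single hot cell burns through its 10,000-cycle MLC budget immediately; with leveling, the same workload costs each cell only 100 cycles.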

One challenge with flash today is the lack of consistent and objective endurance measurements. It is difficult for storage vendors to publish availability guarantees when the reliability of the underlying media is uncertain. This means that to support flash devices in its VMAX arrays–which are rated at six nines (99.9999%) availability–EMC has to do a tremendous amount of qualification of the devices. This qualification process means that flash support in enterprise storage will consistently lag its support in consumer devices, where availability requirements are much lower.
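For a sense of what six nines means in practice, the permitted downtime is a one-line calculation:

```python
def downtime_per_year(availability):
    """Allowed downtime in minutes per year for a given availability level."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return (1 - availability) * minutes_per_year

print(f"{downtime_per_year(0.999999) * 60:.1f} seconds/year")  # about 31.5
print(f"{downtime_per_year(0.999):.0f} minutes/year")          # about 526
```

Six nines allows roughly half a minute of downtime per year, versus nearly nine hours for three nines, which is why the media qualification bar is so high.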


No one denies that SSD storage is becoming more common in the enterprise. EMC’s support of MLC devices is only one of the items introduced by Project Lightning that will increase flash’s presence, producing better-performing and more efficient storage. If you are interested in learning more on the subject, follow the links below to the sources for this article. Also consider Googling “tlc flash” to read about the higher-density, less reliable Triple-Level Cell (TLC) flash that will certainly find its way into the enterprise after more innovation.


My information came from documents I found as a result of Google searches. Here are my recommendations for further reading.

  1. http://www.smxrtos.com/articles/mlcslc.htm
  2. http://www2.electronicproducts.com/Choosing_flash_memory-article-toshiba-apr2004-html.aspx
  3. http://www.supertalent.com/datasheets/SLC_vs_MLC%20whitepaper.pdf
  4. http://www.infostor.com/index/articles/display/1169849064/articles/infostor/disk-arrays/disk-drives/2010/july-2010/mlc-vs__slc_flash.html

Calculate IOPS in a storage array | TechRepublic


February 12, 2010, 2:36 PM PST

Takeaway: What drives storage performance? Is it the iSCSI/Fiber Channel choice? The answer might surprise you. Scott Lowe provides insight into IOPS.



When it comes to measuring a storage system’s overall performance, Input/Output Operations Per Second (IOPS) is still the most common metric in use. There are a number of factors that go into calculating the IOPS capability of an individual storage system.

In this article, I provide introductory information that goes into calculations that will help you figure out what your system can do. Specifically, I explain how individual storage components affect overall IOPS capability. I do not go into seriously convoluted mathematical formulas, but I do provide you with practical guidance and some formulas that might help you in your planning. Here are three notes to keep in mind when reading the article:

  • Published IOPS calculations aren’t the end-all be-all of storage characteristics. Vendors often measure IOPS under only the best conditions, so it’s up to you to verify the information and make sure the solution meets the needs of your environment.
  • IOPS calculations vary wildly based on the kind of workload being handled. In general, there are three performance categories related to IOPS: random performance, sequential performance, and a combination of the two, which is measured when you assess random and sequential performance at the same time.
  • The information presented here is intended to be very general and focuses primarily on random workloads.

IOPS calculations

Every disk in your storage system has a maximum theoretical IOPS value that is based on a formula. Disk performance — and IOPS — is based on three key factors:

  • Rotational speed (aka spindle speed). Measured in revolutions per minute (RPM), most disks you’ll consider for enterprise storage rotate at speeds of 7,200, 10,000 or 15,000 RPM, with the latter two being the most common. A higher rotational speed is associated with a higher-performing disk. This value is not used directly in the calculation, but it is highly important: the other two values depend heavily on the rotational speed, so I’ve included it for completeness.
  • Average latency. The time it takes for the sector of the disk being accessed to rotate into position under a read/write head.
  • Average seek time. The time (in ms) it takes for the hard drive’s read/write head to position itself over the track being read or written. There are both read and write seek times; take the average of the two values.

To calculate a disk’s theoretical IOPS, use this formula: Average IOPS = 1 / (average latency in seconds + average seek time in seconds). Convert the millisecond figures to seconds before plugging them in.

Sample drive:

  • Model: Western Digital VelociRaptor 2.5″ SATA hard drive
  • Rotational speed: 10,000 RPM
  • Average latency: 3 ms (0.003 seconds)
  • Average seek time: (4.2 ms read + 4.7 ms write) / 2 = 4.45 ms (0.00445 seconds)
  • Calculated IOPS for this disk: 1 / (0.003 + 0.00445) ≈ 134 IOPS

So, this sample drive can support roughly 134 IOPS. Compare this to the values below, and you’ll see that it falls within the observed real-world performance exhibited by 10K RPM drives.
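The formula can be expressed in a few lines of Python as a quick sanity check — a sketch using the sample drive’s figures, with the unrounded 4.45 ms average seek time:

```python
def disk_iops(avg_latency_ms: float, avg_seek_ms: float) -> float:
    """Theoretical maximum IOPS of a single rotating disk:
    1 / (average latency + average seek time), converted to seconds."""
    return 1.0 / ((avg_latency_ms + avg_seek_ms) / 1000.0)

# Sample drive from above: 3 ms average latency,
# (4.2 + 4.7) / 2 = 4.45 ms average seek time.
print(round(disk_iops(3.0, 4.45)))  # -> 134
```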

However, rather than working through a formula for your individual disks, there are a number of resources available that outline average observed IOPS values for a variety of different kinds of disks. For ease of calculation, use these values unless you think your own disks will vary greatly for some reason.

Below are some rough planning values I’ve seen and used in my own environment. As you can see, the values for each kind of drive don’t radically change from source to source:

  • 7,200 RPM SATA: roughly 75–100 IOPS
  • 10,000 RPM SAS/SATA: roughly 125–150 IOPS
  • 15,000 RPM SAS: roughly 175–210 IOPS

Note: The drive type doesn’t enter into the equation at all. Sure, SAS disks will perform better than most SATA disks, but that’s only because SAS disks are generally used for enterprise applications due to their often higher reliability, as reflected in their mean time between failures (MTBF) values. If a vendor decided to release a 15K RPM SATA disk with low latency and seek time values, it would have a high IOPS value, too.


Multidisk arrays

Enterprises don’t install a single disk at a time, so the above calculations are pretty meaningless unless they can be translated to multidisk sets. Fortunately, it’s easy to translate raw IOPS values from single disk to multiple disk implementations; it’s a simple multiplication operation. For example, if you have ten 15K RPM disks, each with 175 IOPS capability, your disk system has 1,750 IOPS worth of performance capacity. But this is only if you opted for a RAID-0 or just a bunch of disks (JBOD) implementation. In the real world, RAID 0 is rarely used because the loss of a single disk in the array would result in the loss of all data in the array.
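That multiplication, sketched in Python with the ten-disk example from above:

```python
def array_raw_iops(disk_count: int, per_disk_iops: float) -> float:
    """Raw IOPS of a RAID 0 / JBOD set: the disks simply add up."""
    return disk_count * per_disk_iops

# Ten 15K RPM disks at 175 IOPS each:
print(array_raw_iops(10, 175))  # -> 1750.0
```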

Let’s explore what happens when you start looking at other RAID levels.


The IOPS RAID penalty

Perhaps the most important IOPS calculation component to understand lies in the realm of the write penalty associated with a number of RAID configurations. With the exception of RAID 0, which is simply an array of disks strung together to create a larger storage pool, RAID configurations rely on the fact that write operations actually result in multiple writes to the array. This characteristic is why different RAID configurations are suitable for different tasks.

For example, each random write request under RAID 5 results in multiple disk operations, which has a significant impact on raw IOPS calculations. For general purposes, accept that RAID 5 writes require 4 IOPS per write operation. RAID 6, whose double-fault tolerance provides even stronger protection, is worse in this regard, imposing an IO penalty of 6; in other words, plan on 6 IOPS for each random write operation. For read operations under RAID 5 and RAID 6, an IOPS is an IOPS; there is no negative performance or IOPS impact with read operations. Also, be aware that RAID 1 imposes a write penalty of 2, since each write goes to both mirrored disks.

To summarize the most common RAID levels: the write penalty is 1 (none) for RAID 0, 2 for RAID 1, 4 for RAID 5 and 6 for RAID 6; the read penalty is 1 in every case.

Parity-based RAID systems also introduce additional processing overhead that results from the need to calculate parity information. The more parity protection you add to a system, the more processing overhead you incur. As you might expect, the overall imposed penalty depends heavily on the balance between read and write workloads.

A good starting point formula is below. This formula does not use the array IOPS value; it uses a workload IOPS value that you would derive on your own or by using some kind of calculation tool, such as the Exchange Server calculator.

(Total Workload IOPS * Percentage of workload that is read operations) + (Total Workload IOPS * Percentage of workload that is write operations * RAID IO Penalty)

Source: http://www.yellow-bricks.com/2009/12/23/iops/

As an example, let’s assume the following:

  • Total IOPS need: 250 IOPS
  • Read workload: 50%
  • Write workload: 50%
  • RAID level: 6 (IO penalty of 6)

Result: You would need an array that could support 875 IOPS to support a 250 IOPS RAID 6-based workload that is 50% writes.

This could be an unpleasant surprise for some organizations, as it indicates that the number of disks might be more important than the size (i.e., you’d need twelve 7,200 RPM, seven 10K RPM, or five 15K RPM disks to support this IOPS need).
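The formula and the resulting disk counts can be sketched in Python. Note the per-disk planning figures (75, 125 and 175 IOPS) are assumed rule-of-thumb values consistent with the disk counts above, not measurements:

```python
import math

# Write penalties as described above (reads carry no penalty).
RAID_WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 6: 6}

def required_raw_iops(workload_iops: float, read_pct: float, raid_level: int) -> float:
    """Raw array IOPS needed to carry a workload behind a RAID write penalty:
    (workload * read%) + (workload * write% * penalty)."""
    penalty = RAID_WRITE_PENALTY[raid_level]
    return workload_iops * read_pct + workload_iops * (1.0 - read_pct) * penalty

raw = required_raw_iops(250, 0.5, 6)
print(raw)  # -> 875.0

# Disks needed at assumed per-disk planning values (not measurements):
for label, per_disk in [("7,200 RPM", 75), ("10K RPM", 125), ("15K RPM", 175)]:
    print(label, math.ceil(raw / per_disk))  # -> 12, 7 and 5 disks
```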


The transport choice

It’s also important to understand what is not included in the raw numbers: the transport choice — iSCSI or Fibre Channel. While the transport choice is an important consideration for many organizations, it doesn’t directly impact the IOPS calculations. (None of the formulas consider the transport being used.)

If you want more proof that the iSCSI/Fibre Channel choice doesn’t necessarily directly impact your IOPS calculations, read this article on NetApp’s site.

The transport choice is an important one, but it’s not the primary choice that many would make it out to be. For larger organizations that have significant transport needs (i.e., between the servers and the storage), Fibre Channel is a good choice, but this choice does not drive the IOPS wagon.



To thoroughly understand your IOPS needs, you need to know a great deal, including specific disk characteristics, your workload’s breakdown of read vs. write operations, and the RAID level you intend to use. Once you implement your solution, you can use tools tailor-made for IOPS analysis, such as Iometer, to get specific, real-time performance values. This assumes that you have a solution in place that you can measure.

If you’re still in the planning stages or a deep level of analysis simply isn’t necessary for your needs, the generalities presented in this article will help you figure out your needs.

via Calculate IOPS in a storage array | TechRepublic.

10 tips for managing storage for virtual servers and virtual desktops


Server and desktop virtualization have provided relatively easy ways to consolidate and conserve, allowing a reduction in physical systems. But these technologies have also introduced problems for data storage managers who need to effectively configure their storage resources to meet the needs of a consolidated infrastructure.

Server virtualization typically concentrates the workloads of many servers onto a few shared storage devices, often creating bottlenecks as many virtual machines (VMs) compete for storage resources. With desktop virtualization this concentration becomes even denser as many more desktops are typically running on a single host. As a result, managing storage in a virtual environment is an ongoing challenge that usually requires the combined efforts of desktop, server, virtualization and storage administrators to ensure that virtualized servers and desktops perform well. Here are 10 tips to help you better manage your storage in virtual environments.

Read the full article at searchstorage.com…