VMware vCloud Blog: vCloud Basics for IT Admins: Key Features

By: David Davis

In my experience speaking at conferences and creating video software training courses, one of the primary obstacles I must overcome when discussing VMware's cloud computing solutions is helping VMware admins understand what "vCloud" is.

Some people think vCloud is a third-party public cloud solution. Some think it creates private clouds. Some think vSphere is vCloud. Because this confusion is so common, I thought I would write a short post explaining what vCloud is and which key features VMware admins should be aware of when getting started with it.

First, I want to point out that I don’t work for VMware so these views are my own, and not VMware’s official marketing message. VMware’s official product pages for anything related to vCloud are located here. 

In this blog post I’ll discuss what vCloud is, what the solutions are that make it up, and how it can help you.

vCloud Overview

VMware says that vCloud looks like this:

[vCloud overview diagram]

However, let me explain it in my own words. From my perspective, it breaks down into just two basic things that you really must understand:

  • vCloud Director
  • vCloud Public Clouds

Before I explain these two, let me first cover some common misunderstandings about vCloud…

  • vCloud is not vSphere. vSphere is the underlying hypervisor that runs on physical hardware and makes vCloud infrastructure clouds possible, but they aren’t the same. Just because you use vSphere doesn’t mean that you have a vCloud or are using cloud computing.
  • vCloud isn’t a product. You don’t buy or sell “vCloud” by itself. There are a number of VMware products and services, as well as other companies’ services that start with the word vCloud. You may “use vCloud Director to implement a private infrastructure cloud” but you don’t just use “vCloud”.

vCloud is a family of products offered both by VMware and third-party service providers, with the primary two products being vCloud Director and vCloud datacenter services (such as vCloud Powered Services).

vCloud Director

Sold only by VMware and VMware partners, vCloud Director is used to create private clouds (by private companies) or public clouds (by service providers). It sits on top of vSphere and works with vCenter to provide:

  • Virtual datacenters
  • Fast provisioning with linked clones
  • Multi-tenant organizations
  • vShield security for secure multi-tenancy
  • Infrastructure service catalogs

One of the most powerful things about vCloud Director is the vCloud API it offers, which gives you the ability to create custom applications or front-ends that interface with it.


vCloud Director is available with a 60-day evaluation (which includes vSphere and vCenter).

vCloud Datacenter Services

When service providers use vCloud Director they can become vCloud Datacenter partners or providers of vCloud Powered services. These providers offer public infrastructure clouds, powered by vSphere and running vCloud Director.

There are eight VMware vCloud Datacenter partners worldwide, and over a hundred providers of vCloud Powered services. The directory of all vCloud providers (with the option to evaluate some of them for free) is found at vCloud.VMware.com.


To sum up "vCloud": today, it's vCloud Director, used either within your company to create a private cloud or by a service provider to create a public cloud.

So if you’re a VMware Admin or new to vCloud, I hope that this post has given you a better understanding of vCloud’s key features and how your organization can benefit from them. Learn more about VMware vCloud here.

David Davis is a VMware Evangelist and vSphere Video Training Author for Train Signal. He has achieved CCIE, VCP, CISSP, and vExpert level status over his 15+ years in the IT industry. David has authored hundreds of articles on the Internet and nine different video training courses for TrainSignal.com, including the popular vSphere 5 and vCloud Director video training courses. Learn more about David at his blog or on Twitter, and check out a sample of his VMware vSphere video training course from TrainSignal.com.


Google Fiber: Fiber-optic connections with 1 Gbit/s for 70 US dollars per month

In February 2010, Google announced 1-Gbit/s Internet access for consumers; the service, called Google Fiber, can now be ordered. Google charges 70 US dollars per month for it in the US.

Google Fiber is launching in two US cities: Kansas City, Kansas and Kansas City, Missouri. In parts of these cities, interested users can register now for 10 US dollars. If enough interested users are found in a neighborhood, called a fiberhood, Google will install the fiber connections. Around 10 percent of households have to take part.

Residents of the selected districts in the two cities have 45 days to sign up for Google Fiber, but after just a few hours some fiberhoods already have enough interested users.

Google's offer is attractive: a fiber connection with 1 Gbit/s both up- and downstream and no data cap costs 70 US dollars per month. The contract runs for at least one year, and there is no installation fee. Google Drive with 1 TByte of storage and a connection box are also included in the price.

For 120 US dollars per month on a two-year contract, there is an additional package with numerous TV channels in HD, including a set-top box, Google's Nexus 7 tablet, and a NAS.

The third plan option is unusual: Internet access with 5 Mbit/s downstream and 1 Mbit/s upstream for a one-time fee of 300 US dollars, with no data cap and a guarantee that the service will remain available for at least seven years. The 300 US dollars can alternatively be paid in twelve monthly installments of 25 US dollars.

Google plans to connect the first households shortly after the sign-up period ends in 45 days. Households in all neighborhoods that qualify through sufficient registrations are to receive a connection by the end of 2013 at the latest.

SuperMUC at the LRZ: Europe's fastest computer from the inside

Golem.de paid a visit to the hot-water-cooled supercomputer at the LRZ in Munich. Shortly before, the machine had officially gone into operation in a ceremony with politicians and representatives of IBM and Intel.

"We forgot one," jokes Federal Research Minister Annette Schavan on the early morning of July 20, 2012, as she presses the red button for the third time at the photographers' request. The switch has no actual function, as its free-standing pedestal without any cables shows, but the photo matters: there is hardly any other way to convey to the broader public that the supercomputer is now really working and is meant to serve scientists as a tool.

The symbolic act takes place in the middle of the computer itself, called SuperMUC, which occupies the top floor of one of the two computer buildings on the research campus in Garching. Long-time employees of TU München and the other institutes housed there mockingly call the remote Munich suburb "Novogarchinsk," alluding to Novosibirsk, because instead of working at the centrally located research facilities in the Bavarian capital they now often have to ride the subway 30 minutes out of town. Hardly any researcher or student cycles there anymore, as was once the classic way to get to the university.

The Leibniz Rechenzentrum (LRZ) has been located in Garching since 2006, and only the move made a project like SuperMUC possible in the first place. Big computers need a lot of space.

On this morning, however, numerous politicians, university presidents, former employees, and indeed Annette Schavan have made their way to Garching. After all, the facility, built by the federal and state governments for 83 million euros, reached fourth place in the Top500 supercomputer list just two weeks ago. With a computing performance of 3 petaflops, that is, 3 quadrillion operations per second, it is the fastest supercomputer in Europe. Or, as Schavan puts it: "a super source of knowledge."


VMware vFabric Blog: Virtualization for Oracle DBs – Some cut TCO by 50%

One of the biggest IT megatrends of the new millennium has been virtualization.  As of 2011:

While that sounds (and is) impressive, it will draw little more than a yawn from many of my DBA and ETL friends. "Virtualization is great and all," they say, "but we do real work. You know, we work with Oracle and with data."

And, therein lies the rub.  

Database Virtualization Needs

If there is any part of the enterprise that needs to increase flexibility and reduce cost, it is the data side of IT. For example, database operations suffer from:

  1. Expensive licenses (anyone want to shell out for Oracle EE for a 1 GB database?)
  2. Underused servers (the average Oracle server is less than 10% utilized)
  3. Overwhelmed staff (the average DBA can manage about 40 databases)
  4. Security concerns (do you really want your data living in an anonymous cloud?)

Reducing hardware, software, and staffing costs is a problem that virtualization has been solving for more than a decade. Security is certainly part of our DNA. So why hasn't database virtualization become all the rage? Even recently, 72% of respondents to Information Week's 2011 State of Database Technology Survey did not virtualize their primary database.

The primary concern from the database community is around performance.  When database virtualization was first put on the table a decade ago, this might have been a valid concern.  However, ESX 4.0 (released in 2009) largely resolved these issues, and vSphere 5 (released in 2011) can do over 1 million IOPS, or 800x more than the average Oracle database requires.
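As a quick sanity check on that claim, the implied "average Oracle database" workload is easy to work out:

```python
# Back-of-the-envelope check of the claim above: if vSphere 5 can drive
# about 1 million IOPS and that is 800x what the average Oracle database
# requires, the implied average requirement is quite modest.
vsphere5_iops = 1_000_000
headroom_factor = 800
avg_oracle_iops = vsphere5_iops / headroom_factor
print(avg_oracle_iops)  # 1250.0
```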

The only thing standing in the way of the database virtualization tsunami is tooling to make it happen quickly and easily. This is why VMware's vFabric Data Director exists.

What is vFabric Data Director (vFDD)?

vFabric Data Director (vFDD) extends vSphere to allow database-aware virtualization for Oracle (and PostgreSQL).  It leverages all the advantages of vSphere to benefit database administrators, developers, analysts, and data scientists.  vFDD is for anyone who creates and uses data in the enterprise.

Data Director helps organizations significantly reduce cost and increase flexibility for Oracle without having to make radical changes to existing systems, processes, or skill sets.  In fact, an Oracle administrator or user shouldn’t ever know or care if a database is virtualized or not.  With vFDD, an organization will use the same Oracle database, operating system, monitoring, and backup/recovery tools they use today.  However, they will be able to remove significant cost and leverage all the advantages of the VMware vSphere platform.

How vFabric Data Director Works

The Data Director architecture is based around the idea of database templates.  A database template is a virtual machine based on a specific layout for the operating system, database, data, and logs. 

Each database conforms to a basic layout:

[Diagram: Data Director database virtual machine template]

A DBA is able to create a parent template, installing specific versions of Oracle, an operating system, and monitoring agents.  A central management server will build child databases from parent templates with inherited binaries and configurations.  Child databases can even be populated directly from Oracle RMAN.

Child databases live in their own virtual machines, and the management server configures the network, disk, and CPU resources provided through vSphere, all safely behind your firewall. These databases are a snap to patch or update: all you need to do is patch the parent template, and the changes can be pushed down to any children automatically. Best of all, database operations are done with just a few clicks through a web UI or via JSON through a REST-based API.

This model makes it very easy to control and automate database operations, including:

  • Oracle database creation, execution, and retirement
  • Physical-to-virtual (P2V) Oracle conversion
  • Oracle and OS patching and upgrading
  • Oracle migration across platforms and environments
  • Backup and recovery
  • Database cloning
  • High availability
  • Resource allocation

In fact, each of these has been automated to the point where administrators can simply set policies and permissions, then grant access to creators (such as developers, analysts, and data scientists) to provision their own databases through a self-service model.
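As a sketch of what a self-service provisioning call through that REST API might carry, the snippet below builds a plausible JSON request body. Note that the field names and template name here are invented for illustration; the real resource schema lives in the vFabric Data Director API documentation.

```python
import json

# Hypothetical request body for a self-service "create database" call.
# All field names below are illustrative assumptions, not the documented
# vFabric Data Director schema.
def create_database_request(name, parent_template, cpu_count, memory_mb):
    """Build the JSON body such a provisioning call might send."""
    return json.dumps({
        "name": name,
        "parentTemplate": parent_template,
        "resources": {"cpuCount": cpu_count, "memoryMB": memory_mb},
    })

body = create_database_request("dev-ora-01", "oracle11g-base", 2, 4096)
print(body)
# The actual operation would be an HTTP POST of this body to the
# management server's REST endpoint.
```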

Lower Costs and Better Value

vFabric Data Director 2.0 allows IT to significantly reduce the cost of running Oracle databases without having to change people, processes, or systems. 

Data Director extends vSphere to understand and optimize Oracle databases.  In addition, every database gains the benefits of automatic virtualization. By default, each Oracle instance can be:

  • Resource constrained
  • Elastically scalable
  • Consolidated and clustered
  • Secured behind your firewall
  • Highly available

For example, Data Director allows customers to consolidate their Oracle databases simply by ingesting data into Data Director via RMAN.  Underneath the sheets, vSphere consolidates each database VM on the existing virtual infrastructure. Now, Oracle is able to take advantage of all the clustering, dynamic resource scheduling, and high availability services built into vSphere without making changes to the underlying database.

Database consolidation with Oracle and VMware will typically reduce hardware and licensing costs by more than 50%.  Our customers regularly see hardware consolidation ratios between 4-20x and reduction of licensing costs by up to 4x.

If you want to reduce the cost of running Oracle and take advantage of virtualization, learn more about Data Director 2.0 or download an evaluation copy today. 


About the Author: Morgan Goeller is an evangelist at VMware.  In the past, he has worked for Netezza, Daman, America Online, IBM, and more.  Some of his accomplishments include the architecture and design of solutions for IBM AS/400, Linux, Apache, Java, Oracle, Netezza, and Sybase.  His solutions span mission critical systems, distributed systems, data warehouses, ETL, loan processing, statistical models, and system performance monitoring in the financial services, healthcare, media, consulting, software, and other industries.  Morgan graduated with a degree in Mathematics/Scientific Computation from the University of Utah.


Security hole and jailbreak on the Amazon Kindle Touch

The web browser of the Amazon Kindle Touch contains a serious security hole: if you visit a specially crafted website with it, the Kindle executes arbitrary shell commands with root privileges. An attacker can therefore access the Linux underpinnings of the eBook reader with the highest possible privileges and attempt to steal the credentials for the Amazon account linked to the Kindle, or run up the owner's bill with book purchases.

After our proof-of-concept page is opened, the Kindle Touch immediately executes a shell command with root privileges; in this case a reboot ("shutdown -r now").

The Kindle browser has been in beta for over a year, but this supposed test status does nothing to reduce the risk for curious users: the software is installed on the device by default. heise Security developed a proof-of-concept website through which we were able to inject arbitrary shell commands into a Kindle Touch running the current firmware 5.1.0. For example, the Kindle sent the contents of the file /etc/shadow to our web server. That file contains the password hash of the root user, from which we were subsequently able, with little effort, to recover the previously secret cleartext password using a password cracker.

This security problem was publicly documented about three months ago but has so far received little attention, except in jailbreak circles. A browser-based jailbreak has recently become available that can be used to install software on the device that has not been approved by Amazon, such as a Sudoku game.

Other Kindle models do not appear to be affected. Amazon's security team told heise Security that it is already working on a patch. According to forum reports, some Kindle Touch units are already shipping with firmware version 5.1.1, in which the bug has been fixed. There is, however, not yet any way to update a device to that version yourself. (rei)

VMware vSphere Blog: Troubleshooting Storage Performance in vSphere – Storage Queues

Storage queues: what are they, and do I need to change them?

We have all had to wait in a line or two in our lives; whether it is the dreaded TSA checkpoint at the airport or the equally dreaded DMV registration line, waiting in line is just a fact of life. This is true in the storage world too: storage I/Os have plenty of lines they have to wait in. In this article, we examine the various queues in the virtualized storage stack and discuss the when, how, and why of modifying them.

Queues are necessary for several reasons, but primarily they are used to allow sharing of a resource and to allow concurrency. By using queues, vSphere allows multiple virtual machines to share a single resource. Queues also allow applications to have multiple active ("in-flight") I/O requests on a LUN at the same time, which provides concurrency and improves performance. But there is a tradeoff: if you allow too much concurrency, the underlying resource might get saturated. To prevent one virtual machine or one host from saturating the underlying resource, the queues have set sizes/limits that restrict the number of I/O requests that can be sent at one time.
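The idea of a fixed queue depth limiting concurrency can be sketched as a counting semaphore. This is an illustrative toy model (the names QUEUE_DEPTH and issue_io are invented), not vSphere code:

```python
import threading

# Illustrative sketch only: a fixed queue depth modeled as a counting
# semaphore. Once QUEUE_DEPTH I/Os are in flight, further issuers block
# until a slot frees up.
QUEUE_DEPTH = 32
slots = threading.Semaphore(QUEUE_DEPTH)
lock = threading.Lock()
in_flight = 0
peak = 0

def issue_io(block_no):
    """Issue one I/O, waiting for a free queue slot first."""
    global in_flight, peak
    with slots:  # blocks when 32 I/Os are already in flight
        with lock:
            in_flight += 1
            peak = max(peak, in_flight)
        # (device service time would be spent here)
        with lock:
            in_flight -= 1

threads = [threading.Thread(target=issue_io, args=(n,)) for n in range(64)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # observed concurrency; never exceeds QUEUE_DEPTH
```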

In a virtualized environment there are several queues. At the top of the stack, there are the various storage queues used inside the Guest OS. This includes the queues created and used by the application itself and the storage device drivers used inside the guest OS. In the virtualization layer inside the vSphere software stack, there are three main queues.   A World queue (a queue per virtual machine), an Adapter queue (a queue per HBA in the host), and a Device/LUN queue (a queue per LUN per Adapter). Finally at the bottom of the storage stack there are queues at the storage device, for instance the front-end storage port has a queue for all incoming I/Os on that port.    

[Figure 1: the queues in the virtualized storage stack, from the guest OS through vSphere to the storage array]

When investigating storage performance problems and bottlenecks you should investigate the queuing at all levels of the storage stack from the application and guest OS to the storage array.  For this article, I’ll only discuss the queues in the vSphere storage stack.  

For most customers, the default sizes of the three main vSphere queues are generally fine and do not require any adjustment. But for customers with a high level of consolidation or very intensive storage workloads, some of the vSphere queues may need to be adjusted for optimal performance. The diagram below shows the three main queues in vSphere with their typical default sizes. As you can see, I/O requests flow into the per-virtual-machine queue, then into the per-HBA queue, and finally from the adapter queue into the per-LUN queue for the LUN the I/O is going to. From the default sizes you can see that each VM is able to issue 32 concurrent I/O requests; the adapter queue beneath it is generally quite large and can normally accept all of those requests, but the LUN queue beneath that typically has a depth of only 32 itself. This means that if a LUN is shared by multiple virtual machines, the LUN queue might not be large enough to support all the concurrent I/O requests being sent by the virtual machines sharing it.

[Figure 2: default queue sizes for the per-VM, per-adapter, and per-LUN queues in the vSphere storage stack]
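The arithmetic behind that overflow is easy to sketch. The depths below are the typical defaults discussed above, used purely for illustration, not values queried from a live host:

```python
# Typical default depths from the discussion above (illustrative only).
VM_QUEUE_DEPTH = 32    # per-virtual-machine (world) queue
LUN_QUEUE_DEPTH = 32   # per-LUN device queue on the host

def lun_queue_pressure(vms_sharing_lun, ios_per_vm=VM_QUEUE_DEPTH):
    """Concurrent I/Os the VMs can issue vs. what spills past the LUN queue."""
    issued = vms_sharing_lun * min(ios_per_vm, VM_QUEUE_DEPTH)
    overflow = max(0, issued - LUN_QUEUE_DEPTH)
    return issued, overflow

print(lun_queue_pressure(1))  # (32, 0): a single VM fits exactly
print(lun_queue_pressure(4))  # (128, 96): 96 I/Os wait above the LUN queue
```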

Why are the virtual machine queues and the LUN queues set to just 32? The limit was put in place to prevent one virtual machine or vSphere host from stealing all the storage performance by dominating the storage with its own I/O requests, the so-called noisy neighbor problem. For instance, a single storage array LUN could be shared by multiple vSphere hosts; by limiting each vSphere host to only 32 concurrent I/Os on that LUN, the risk that one host saturates the LUN and starves out the others is greatly reduced.

However, setting hard arbitrary limits was the old-school way of doing things. Today, using features like Storage I/O Control (SIOC), vSphere can mitigate that virtual machine and vSphere host noisy neighbor risk through a more elegant and fair mechanism. Therefore, if you notice that your device queues are constantly bumping up against their maximum limits, the recommendation is to increase the Device/LUN queue depth and use SIOC to mitigate any potential noisy neighbor problem. A quick note: SIOC controls storage workloads by modifying the Device/LUN queue depth, but it cannot increase the device queue depth beyond the configured maximum. So you have to raise the maximum yourself if your workloads need larger queues, and then let SIOC reduce it when needed.

Why increase the device queue? Because a storage array is generally more efficient when it can see multiple I/O requests at one time. The more I/Os the storage array knows about, the more efficiently it can service them, because it can rearrange the requested I/O blocks and take advantage of block proximity. For instance, if a virtual machine requests two blocks that are very close to each other on the storage spindle, the array can retrieve the first block and quickly collect the second while the disk head is "in the neighborhood." If the queue depth were set to 1 and the array could only see one I/O request at a time, it couldn't efficiently collect other blocks while the head was nearby, since it wouldn't even know which blocks you are going to want next.
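A toy model makes the benefit concrete. Assume seek cost is simply the distance between block numbers (a deliberate simplification of real disk geometry): with a deep queue the array sees several requests at once and can serve them in block order, while with a depth of 1 it must serve them in arrival order.

```python
# Seek cost here is just the distance between consecutive block numbers.
def seek_cost(start_block, blocks):
    position, cost = start_block, 0
    for b in blocks:
        cost += abs(b - position)
        position = b
    return cost

requests = [900, 10, 880, 30, 870]  # arrival order; blocks far apart

fifo_cost = seek_cost(0, requests)               # queue depth 1: arrival order
reordered_cost = seek_cost(0, sorted(requests))  # deep queue: block order
print(fifo_cost, reordered_cost)  # 4350 900
```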

You can monitor the current depths of the various queues and how actively they are being used. Instructions are in the "Checking the queue depth of the storage adapter and the storage device" KB article: http://kb.vmware.com/kb/1027901. If you constantly notice that your Device/LUN queue is reporting 100% active/full, it may be an indicator that you are bottlenecked on your device queue or on the underlying storage.

Another interesting queuing KB article, which reinforces the advice to always check and follow your storage vendor's best practices, is "Controlling LUN queue depth throttling in VMware ESX/ESXi": http://kb.vmware.com/kb/1008113. vSphere has a feature that detects queue-full warnings from the storage array and responds by reducing the Device/LUN queue depth, so that the number of I/O requests vSphere issues to the array is reduced until the array can catch up and free space in its queue. This feature is off by default but should be enabled according to your storage vendor's best practices.

In summary, there are lots of queues in the virtualized storage stack, and those queues have various default sizes. For most environments, you do not need to adjust them. However, for I/O-intensive workloads that generate a large number of concurrent I/O requests, or for heavily consolidated environments, it may be beneficial to adjust them so that the storage array can more efficiently process the incoming I/O requests. Using SIOC and other queue-throttling features can mitigate some of the potential risks of increasing the vSphere queues, but it is always best practice to test and evaluate the changes before implementing them in production, and to avoid oversizing or unnecessarily modifying the queues if you are not seeing queue-full bottlenecks.

Resources on VMware vSphere Storage Queues:

   VMware vSphere - Scalable Storage Performance white paper
    (Although it is a bit dated, it still has useful information) 
    http://www.vmware.com/files/pdf/scalable_storage_performance.pdf

   VMware KB: Checking the queue depth of the storage adapter and the storage device
    http://kb.vmware.com/kb/1027901

   VMware KB: Changing the Queue Depth for QLogic and Emulex HBAs
    http://kb.vmware.com/kb/1267

   VMware KB: Changing Paravirtualized SCSI Controller Queue Depth
    http://kb.vmware.com/kb/1017423

Previous Troubleshooting Storage Performance posts:
http://blogs.vmware.com/vsphere/2012/05/troubleshooting-storage-performance-in-vsphere-part-1-the-basics-.html
http://blogs.vmware.com/vsphere/2012/06/troubleshooting-storage-performance-in-vsphere-part-2.html
http://blogs.vmware.com/vsphere/2012/06/troubleshooting-storage-performance-in-vsphere-part-3-ssd-performance.html

 

vSphere 5.0 U1a was just released, vDS/SvMotion bug fixed!

Many of you who hit the SvMotion / vDS / HA problem requested the hot-patch that was made available for it. Now that Update 1a has been released with a permanent fix, how do you go about installing it? This is the recommended procedure:

  1. Backup your vCenter Database
  2. Uninstall the vCenter hot-patch
  3. Install the new version by pointing it to the database

The reason for this is that the hot-patch increased the build number, and this could possibly conflict with later versions.

And for those who have been waiting for it, the vCenter Appliance has also been updated to Update 1 and now includes a vPostgres database by default instead of DB2!

VMware Labs presents its latest fling – ThinApp Factory

The ThinApp Factory is a virtual appliance that brings centralized administration and automation to the process of creating virtualized Windows applications with VMware ThinApp technology. ThinApp Factory uses vSphere APIs to spawn workloads that automatically convert file shares of application installers into ThinApp application containers. These workloads can be run in parallel to maximize throughput and increase ROI for virtualization projects. Packagers and administrators can now use 'Recipes' during the packaging process. Recipes are simply small JSON files that contain a redistributable blueprint of the customizations and optimizations necessary for packaging complex applications. These recipes can be created and freely exchanged with other customers via the ThinApp community site.
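To illustrate the idea, a recipe might look something like the following. Every field name here is an invented assumption for this sketch; real recipes follow the format documented on the ThinApp community site:

```python
import json

# Hypothetical recipe contents: a small, redistributable blueprint of the
# customizations needed to package a complex application. These field
# names are illustrative, not the documented recipe schema.
recipe = {
    "name": "example-app",
    "installerCommand": "setup.exe /S",
    "prerequisites": ["vcredist_x86.exe"],
    "packageIniOverrides": {"CompressionType": "Fast"},
}
recipe_json = json.dumps(recipe, indent=2)
print(recipe_json)
```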

Key Features


  • Automates packaging of application installers into virtualized Windows applications.
  • Leverages vSphere and vCenter for automation of workloads to efficiently package thousands of applications.
  • Provides and utilizes ‘Recipes’ as redistributable blueprints for application packaging.
  • Provides a lightweight web UI with a dashboard for administrators to use for the entire workflow of packaging to distribution.
  • Enables administrators to import and edit existing ThinApp projects and modify package.ini, registry, and file settings through the web UI.
  • Integration with Horizon Application Manager application catalog for automated population of application metadata and deployment with the Horizon ThinApp Agent.

Video – VMware View Storage Accelerator

This video will give you an introduction to the View Storage Accelerator, a caching feature that can reduce storage costs and improve performance in VMware View.

The Storage Accelerator is made up of two components: a per-VMDK digest file and a global cache. The per-VMDK digest file maps disk block numbers to hash values, and the global cache maps hash values to the actual data. The global cache is a reserved area of memory on the ESXi hosts.

It is an in-memory dedupe cache that caches data based on the content hash of a disk block. When the VM issues a read request, we first use the digest file to get the hash value for the block and then consult the global cache to see if the block is cached. Metadata for the digest file is maintained in memory. If there is a hit, we fetch the data from the cache; if there is a miss, we fetch the data from disk.
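That lookup path can be sketched with a toy model of a content-based cache. This is a minimal illustration of the two structures described above, not View's actual implementation:

```python
import hashlib

# Toy model: a per-disk digest maps block number -> content hash, and a
# global cache maps hash -> data, so identical blocks across many disks
# (e.g. linked-clone desktops) are cached only once.
class DedupeReadCache:
    def __init__(self):
        self.global_cache = {}  # content hash -> block data

    def read(self, digest, backing_disk, block_no):
        h = digest.get(block_no)
        if h is not None and h in self.global_cache:
            return self.global_cache[h]      # hit: served from memory
        data = backing_disk[block_no]        # miss: fetch from "disk"
        h = hashlib.sha1(data).hexdigest()
        digest[block_no] = h
        self.global_cache[h] = data
        return data

cache = DedupeReadCache()
disk_a = {0: b"boot-block", 1: b"os-data"}
disk_b = {0: b"boot-block"}   # a clone with identical content
digest_a, digest_b = {}, {}
cache.read(digest_a, disk_a, 0)
cache.read(digest_b, disk_b, 0)   # hit via the shared content hash
print(len(cache.global_cache))    # 1: one cached copy serves both disks
```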