VSAN Health checks disabled after upgrade to vCenter 6.0 U2

Yesterday at the Dutch VMUG I was talking to my friend @GabVirtualWorld. Gabe mentioned that he had just upgraded his vCenter Server to 6.0 U2 in his VSAN environment, but hadn’t upgraded the hosts yet. Funnily enough, someone else later mentioned the same scenario, and both of them noticed that the VSAN Health Checks were disabled after upgrading vCenter Server. Below is a screenshot of the issue Gabe saw in his environment. (Thanks Gabe!)

[Screenshot: VSAN health checks disabled]

So does that mean there is no backwards compatibility for the health check? Well, yes and no. In this release the VSAN APIs were made public (William Lam wrote a couple of great articles on this), and in order to deliver a high-quality SDK, backwards compatibility had to be broken with this release. So if you received the “health checks disabled” message after upgrading to vCenter Server 6.0 U2, you can solve it simply by also upgrading the hosts to ESXi 6.0 U2. I hope this helps.
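A quick way to verify which hosts still need the upgrade is a short PowerCLI one-liner. This is a minimal sketch, assuming a cluster named “VSAN-Cluster” and your own vCenter address (hosts on 6.0 U2 report build 3620759):

# Sketch: list the ESXi version and build of every host in the (hypothetical) VSAN cluster
Connect-VIServer -Server vcenter.lab.local
Get-Cluster -Name "VSAN-Cluster" | Get-VMHost | Select-Object Name, Version, Build | Sort-Object Name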

** Update March 23rd **

Please note that ESXi 6.0 Update 2 is also a requirement in order to enable the “Performance Service”, which was newly introduced in Virtual SAN 6.2. Although the Performance Service capability is exposed in vCenter Server 6.0 Update 2, without ESXi 6.0 U2 you will not be able to enable it. When you try to enable it on any ESXi version lower than 6.0 U2, the following error is thrown:

Task Details:

Status: General Virtual SAN error.
Start Time: Mar 23, 2016 10:55:35 AM
Completed Time: Mar 23, 2016 10:55:38 AM
State: Error

Error Stack: The performance service on host is not accessible. The host may be unreachable, or the host version may not be supported

This is what the error looks like in the UI:

"VSAN Health checks disabled after upgrade to vCenter 6.0 U2" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.

VIB requires VSAN 6.0.0-2.34 but the requirement cannot be satisfied within the ImageProfile

Today I tried to upgrade my ESXi hosts to ESXi 6.0 Update 2. Since it is just a home lab booted from USB and I don’t use Update Manager, the easiest way for me to update is with the downloaded ZIP bundle. In my SSH session I ran:

esxcli software vib update --depot=/vmfs/volumes/089a9186-25ef0236/iso/update-from-esxi6.0-6.0_update02.zip

But I now received the following error:

[DependencyError]
 VIB VMware_bootbank_esx-base_6.0.0-2.34.3620759 requires vsan >= 6.0.0-2.34, but the requirement cannot be satisfied within the ImageProfile.
 VIB VMware_bootbank_esx-base_6.0.0-2.34.3620759 requires vsan << 6.0.0-2.35, but the requirement cannot be satisfied within the ImageProfile.
 Please refer to the log file for more details.

Luckily, the solution was right before my eyes in the release notes (which I should have read BEFORE upgrading):

New Issue: Attempts to upgrade from ESXi 6.x to 6.0 Update 2 with the “esxcli software vib update” command fail
Attempts to upgrade from ESXi 6.x to 6.0 Update 2 with the “esxcli software vib update” command fail with error messages similar to the following:

[DependencyError]
 VIB VMware_bootbank_esx-base_6.0.0-2.34.xxxxxxx requires vsan << 6.0.0-2.35, but the requirement cannot be satisfied within the ImageProfile.
 VIB VMware_bootbank_esx-base_6.0.0-2.34.xxxxxxx requires vsan >= 6.0.0-2.34, but the requirement cannot be satisfied within the ImageProfile.

The issue occurs due to the introduction of a new Virtual SAN VIB that is interdependent with the esx-base VIB, while the esxcli software vib update command only updates the VIBs already installed on the system.

Workaround: To resolve this issue, run the “esxcli software profile update” command as shown in the following example:

esxcli software profile update -d /vmfs/volumes/datastore1/update-from-esxi6.0-6.0_update02.zip -p ESXi-6.0.0-20160302001-standard
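If you are not sure which image profile name to pass to -p, you can list the profiles contained in the offline bundle first. Below is a minimal PowerCLI sketch using the Get-EsxCli -V2 interface (the host name is illustrative; the depot path is the one from my lab and must be reachable from the host):

# Sketch: list the image profiles contained in the offline bundle
$esxcli = Get-EsxCli -VMHost esxi01.lab.local -V2
$arguments = $esxcli.software.sources.profile.list.CreateArgs()
$arguments.depot = "/vmfs/volumes/089a9186-25ef0236/iso/update-from-esxi6.0-6.0_update02.zip"
$esxcli.software.sources.profile.list.Invoke($arguments)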

 

 

Maintenance Mode Improvements in vSphere 6.0 Update 2

vSphere 6.0 Update 2 has launched, and with it comes a very simple change to the way VMs and templates are evacuated from hosts that enter Maintenance Mode. In all prior versions, when a host enters Maintenance Mode, DRS evacuates the host by placing all the running VMs, powered-off VMs, and templates on other hosts within the cluster. However, under certain conditions the order of operations produces very different results. For the math geeks: (4 + 2) × 2 ≠ 4 + 2 × 2.

Starting with vSphere 6.0 Update 2, when a host enters Maintenance Mode it evacuates all the powered-off VMs and templates first, and then proceeds to vMotion all the powered-on VMs. This is probably not something that most will consider a big deal, but for some customers this small change will have a big impact.

Picture a situation where external services are provisioning VMs from templates, and those templates are stored on the host entering Maintenance Mode. While the host is entering Maintenance Mode, no operations can be performed against it, including clone operations. In environments with dense consolidation ratios, evacuating the powered-on VMs can take several minutes. That’s several minutes before a powered-off VM or template is moved to a host where it can be cloned. If the provisioning engine were to attempt to clone from a template on a host entering Maintenance Mode, the clone operation would fail. This becomes particularly apparent in vCloud Director and vRealize Automation environments, where failure to provision new instances impacts the reliability of the service being provided.

There have already been several improvements in vSphere 6.0 aimed at reducing host evacuation times, especially for powered-off VMs. Now that they are moved first, the opportunity for this type of error is reduced significantly, which results in a higher quality of service for your consumers.
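For reference, Maintenance Mode can also be triggered from PowerCLI; here is a minimal sketch (the host name is illustrative), where the evacuation order described above is handled by vCenter as part of the task:

# Sketch: enter Maintenance Mode and have powered-off VMs and templates evacuated as well
Get-VMHost -Name esxi01.lab.local | Set-VMHost -State Maintenance -Evacuate -RunAsync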

The post Maintenance Mode Improvements in vSphere 6.0 Update 2 appeared first on VMware vSphere Blog.

New release: PowerCLI 6.3 R1–Download it today!

It is my pleasure to inform you that vSphere PowerCLI 6.3 Release 1 has now been released, and as usual we have some great features to ensure you are able to automate even more, and in this release faster than ever! As always, we take feature requests directly from customers, through feedback at conferences, by looking at the communities, and in multiple other ways. Please do keep giving us your feedback to enable us to keep making the product easier to use and automation tasks less painful.

PowerCLI 6.3 R1 introduces the following new features and improvements:

 

Get-VM is now faster than ever!

The Get-VM cmdlet has been optimized and refactored to ensure maximum speed when returning information for large numbers of virtual machines. This was a request we heard time and time again: when you work in larger environments with thousands of VMs, the most used cmdlet is Get-VM, so making it faster increases the speed of reporting and automation for every script that uses it. Stay tuned for a future post where we will show some figures from our test environment, but believe me, it’s fast!
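If you want to get a feel for the difference in your own environment, a quick and admittedly unscientific way is simply to time the call (the vCenter name below is illustrative):

# Sketch: measure how long a full Get-VM retrieval takes
Connect-VIServer -Server vcenter.lab.local
(Measure-Command { $vms = Get-VM }).TotalSeconds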

 

New Content Library access

New in this release, we have introduced a cmdlet for working with Content Library items: the Get-ContentLibraryItem cmdlet lists all content library items from all content libraries available to the connection. This gives you the details and sets you up for deploying with our next new feature…

The New-VM cmdlet has been updated to allow the deployment of items located in a Content Library. Use the new -ContentLibraryItem parameter with a content library item to deploy from both local and subscribed library items; a quick sample of this can be seen below:

$CLItem = Get-ContentLibraryItem TTYLinux
New-VM -Name "NewCLItem" -ContentLibraryItem $CLItem -Datastore datastore1 -VMHost 10.160.74.38

Or even simpler….

Get-ContentLibraryItem -Name TTYLinux | New-VM -Datastore datastore1 -VMHost 10.160.74.38

 

ESXCLI is now easier to use

Another great feature added in this release again comes from our community and users, who told us what is hard about the current version: the Get-EsxCli cmdlet has been updated with a -V2 parameter, which supports specifying method arguments by name.

The original Get-EsxCli cmdlet (without -V2) passes arguments by position, which can cause scripts to break when working with multiple ESXi versions or when running scripts written against a specific ESXi version.

A simple example of using the previous version is as follows:

$esxcli = Get-ESXCLI -VMHost (Get-VMhost | Select -first 1)

$esxcli.network.diag.ping(2,$null,$null,"10.0.0.8",$null,$null,$null,$null,$null,$null,$null,$null,$null)

Notice all the $nulls? Now check out the V2 version:

$esxcli2 = Get-ESXCLI -VMHost (Get-VMhost | Select -first 1) -V2

$arguments = $esxcli2.network.diag.ping.CreateArgs()

$arguments.count = 2

$arguments.host = "10.0.0.8"

$esxcli2.network.diag.ping.Invoke($arguments)

 

Get-View, better than ever

For the more advanced users out there who constantly use the Get-View cmdlet, you will be pleased to know that a small but handy change has been made: the cmdlet now auto-completes all available view objects for the -ViewType parameter. This eases the use of the cmdlet and enables even faster creation of scripts that use it.
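As a quick example of the kind of call this helps you build (the -Property filter keeps the underlying API call lightweight):

# Sketch: list all VM names via the vSphere API views
Get-View -ViewType VirtualMachine -Property Name | Select-Object -ExpandProperty Name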

Updated Support

As well as the great enhancements listed above, the product has been fully tested with, and now supports, Windows 10 and PowerShell v5, enabling the latest versions and features of PowerShell to be used with PowerCLI.

PowerCLI has also been updated to support vCloud Director 8.0 and vRealize Operations Manager 6.2, ensuring you can also work with the latest VMware products.

 

More Information and Download

For more information on changes made in vSphere PowerCLI 6.3 Release 1, including improvements, security enhancements, and deprecated features, see the vSphere PowerCLI Change Log. For more information on specific product features, see the VMware vSphere PowerCLI 6.3 Release 1 User’s Guide. For more information on specific cmdlets, see the VMware vSphere PowerCLI 6.3 Release 1 Cmdlet Reference.

You can find the PowerCLI 6.3 Release 1 download HERE. Get it today!

The post New release: PowerCLI 6.3 R1–Download it today! appeared first on VMware PowerCLI Blog.

Component Metadata Health – Locating Problematic Disk

I’ve noticed a couple of customers experiencing a Component Metadata Health failure on the VSAN health check recently. This is typically what it looks like:

[Screenshot: Component Metadata Health check failure]

The first thing to note is that the KB associated with this health check states the following:

Note: This health check test can fail intermittently if the destaging process is slow, most likely because VSAN needs to do physical block allocations on the storage devices. To work around this issue, run the health check once more after the period of high activity (multiple virtual machine deployments, etc) is complete. If the health check continues to fail the warning is valid. If the health check passes, the warning can be ignored.

With that in mind, let’s continue and figure out which disk holds the potentially problematic component. The warning above reports a component UUID, but customers have had difficulty matching this UUID to a physical device. In other words, on which physical disk does this component reside? The only way to locate it currently is through RVC, the Ruby vSphere Console. The following is an example of how you can locate the physical device on which a component of an object resides.

First, using vsan.cmmds_find, search on the component UUID as reported in the health check (components with errors) to get the disk UUID. Some of the preceding columns have been removed for readability, and the command is run against the cluster object (represented by 0):

> vsan.cmmds_find 0 -u dc3ae056-0c5d-1568-8299-a0369f56ddc0
---+---------+-----------------------------------------------------------+
   | Health  | Content                                                   |
---+---------+-----------------------------------------------------------+
   | Healthy | {"diskUuid"=>"52e5ec68-00f5-04d6-a776-f28238309453",      |
   |         |  "compositeUuid"=>"92559d56-1240-e692-08f3-a0369f56ddc0", 
   |         |  "capacityUsed"=>167772160,                               |
   |         |  "physCapacityUsed"=>167772160,                           | 
   |         |  "dedupUniquenessMetric"=>0,                              |
   |         |  "formatVersion"=>1}                                      |
---+---------+-----------------------------------------------------------+
/localhost/Cork-Datacenter/computers>

Now that you have the diskUuid, you can use that in the next command. Once more, some of the preceding columns in the output have been removed for readability:

> vsan.cmmds_find 0 -t DISK -u 52e5ec68-00f5-04d6-a776-f28238309453
---+---------+-------------------------------------------------------+
   | Health  | Content                                               |
---+---------+-------------------------------------------------------+
   | Healthy | {"capacity"=>145303273472,                            |
   |         |  "iops"=>100,                                         |
   |         |  "iopsWritePenalty"=>10000000,                        |
   |         |  "throughput"=>200000000,                             |
   |         |  "throughputWritePenalty"=>0,                         |
   |         |  "latency"=>3400000,                                  |
   |         |  "latencyDeviation"=>0,                               |
   |         |  "reliabilityBase"=>10,                               |
   |         |  "reliabilityExponent"=>15,                           |
   |         |  "mtbf"=>1600000,                                     |
   |         |  "l2CacheCapacity"=>0,                                |  
   |         |  "l1CacheCapacity"=>16777216,                         |
   |         |  "isSsd"=>0,                                          |   
   |         |  "ssdUuid"=>"52bbb266-3a4e-f93a-9a2c-9a91c066a31e",   |
   |         |  "volumeName"=>"NA",                                  |
   |         |  "formatVersion"=>"3",                                |
   |         |  "devName"=>"naa.600508b1001c5c0b1ac1fac2ff96c2b2:2", | 
   |         |  "ssdCapacity"=>0,                                    |
   |         |  "rdtMuxGroup"=>80011761497760,                       |
   |         |  "isAllFlash"=>0,                                     |
   |         |  "maxComponents"=>47661,                              |
   |         |  "logicalCapacity"=>0,                                |
   |         |  "physDiskCapacity"=>0,                               |
   |         |  "dedupScope"=>0}                                     |
---+---------+-------------------------------------------------------+
>

In the devName field above, you now have the NAA id (the SCSI id) of the disk.
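As a small extra (this is not part of the KB), once you have the NAA id you can use PowerCLI to find which host presents that device. A minimal sketch, using the devName value above with the “:2” partition suffix stripped:

# Sketch: find the host(s) presenting the disk with this NAA id
$naa = "naa.600508b1001c5c0b1ac1fac2ff96c2b2"
foreach ($vmhost in Get-VMHost) {
    $lun = Get-ScsiLun -VmHost $vmhost -CanonicalName $naa -ErrorAction SilentlyContinue
    if ($lun) { "{0}: {1} ({2})" -f $vmhost.Name, $lun.CanonicalName, $lun.Model }
}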

I’ve requested that this information get added to the KB article.

The post Component Metadata Health – Locating Problematic Disk appeared first on CormacHogan.com.

VMworld 2016 Call for Papers now open through April 12th

It’s that time of year again (actually a few weeks earlier this year): time to submit your best session ideas for VMworld, for that oh-so-slight chance that they might get accepted. Believe me, there is a chance; I was pleasantly surprised and shocked when I had one of mine accepted last year. They … Continue reading » The post VMworld 2016 Call for Papers now open through April 12th appeared first on Welcome to vSphere-land!.


Make the Most of Your VMware Support Experience

By Amy Chalifoux

VMware TAMs want to ensure their customers maximize their return on investment with VMware, so they strive to be in tight alignment with our Global Support Services (GSS) group. The short list below is aimed at helping customers have better support experiences. The slide deck linked at the bottom of this post will provide you with additional insight into the organization, support process, and how to make the most of your support experiences with VMware.

5 Essential Support Habits for Customers

  1. Always choose the correct severity level for your issue. If everything is a SEV 1, then nothing is a SEV 1! Choosing the proper severity helps us serve all of our customers fairly and keeps resolution times low. Review information about the severity levels online or in the slide deck below.
  2. Always open your own tickets and be sure to choose the correct category and product for your issue. Phone (877-4-VMWARE, Option 4, Option 2), online, and/or the My VMware app are all acceptable ways to file a ticket. The benefit of doing so is threefold:
    • Your issue is logged in real time
    • It includes all the pertinent details
    • By appropriately categorizing the issue, you ensure your ticket is automatically routed to the proper group
  3. Be proactive and clear in your communications with Support, especially with regard to contact names, methods of communication, and other information about how best to work the case. For example, if you know your case will be transferred internally within your company and worked on by a new case owner, the best thing to do is clearly and proactively state the new name, contact information, and time of the transfer in the case notes. If the SR will be going back and forth between two people in different time zones, or there are other similar circumstances, please clearly state those circumstances in the ticket.
  4. Don’t hesitate to influence the next steps of a ticket if you feel it is needed. Always remember you are your own best advocate. For example, if you have a ticket that has been mainly worked on via email and you want a WebEx or a phone call, communicate this to Support. Clearly state what you would like the next step to be, and supply a few good dates and times when the requested next steps can take place. The TSEs will do their best to accommodate your schedule.
  5. Make use of the information at the bottom of the case emails. Most emails from VMware about your case will have a section at the bottom stating the contact information for the Support manager overseeing your ticket. If you have any concerns about how your case is progressing, contacting that manager will be your quickest route to assistance.

All of this sounds simple, but in researching various reported pain points we have seen a large number of problems that could have been avoided by taking these simple steps. Start using these essentials today to improve your overall support experience and reduce your time to resolution. Download the full slide deck for even more information about VMware Support.


Amy Chalifoux is a VMware Senior Technical Account Manager based in Colorado.

The post Make the Most of Your VMware Support Experience appeared first on VMware TAM Blog.


vBeers – Vienna, Austria (Thursday, 21st April, 2016)

vBeers is a great opportunity to meet other virtualization geeks, to network, to share your experience and to learn from each other. Everybody who is open and happy to share their experience is welcome. This time it takes place right after the Austrian VMUG session; register for the VMUG here: https://www.vmug.com/p/cm/ld/fid=13425

Location: Gaststätte "zur Fabrik"
Gaudenzdorfer Gürtel 73, 1120 Wien
Date: Thursday, 21 April 2016
Time: 6:30 pm

Note: Normally everybody pays for their own drinks, but this time we have a sponsor on board!

The post vBeers – Vienna, Austria (Thursday, 21st April, 2016) appeared first on vBeers - where vGeeks come to meet.