- vCloud Networking and Security Vision describes the challenges that vCloud Networking and Security addresses, its key concepts, and the customer benefits.
- vCloud Networking and Security Overview explains workload networking and security requirements, describes vCloud Networking and Security components, and explains vCloud Networking and Security purchasing options.
- vCloud Networking and Security Customer Use Cases examines how customers implemented vCloud Networking and Security into their environments.
The Inn Cycle Path runs from the Engadin through the Tyrolean Inn Valley all the way to Passau. A perfect way to explore Tyrol's cuisine! Ralf reports (with many pictures) on his experiences ...
Now here are some Apple accessories I'd love to review: Denon has added some new networked AV receivers to its INCOMMAND line, all of which support Apple's AirPlay wireless standard.
With that AirPlay support, owners of any Apple iOS device or Mac running a current or recent version of the appropriate operating system can stream lossless music to the receivers from their devices. The pricing for the receivers begins at a very reasonable US$449 for the AVR-X1000, which includes support for 5.1 surround sound and supplies 80 watts of output per channel. Next up is the AVR-X2000 at $649, providing 7.1 surround sound, support for the 4K Ultra HD standard, and 95 watts of power per channel.
The AVR-X3000 ($899) supports 7.1 channels and 105 watts per channel (it also supports the 4K Ultra HD standard). But it's the Ferrari of the line -- the AVR-X4000 ($1299) -- that you'll really drool over: 7.2 channels, seven discrete output stages, and each channel rated at 125 watts. Do you need HDMI inputs and outputs? It comes with seven inputs and three outputs. Sound processing includes Audyssey DSX, Dolby Pro Logic IIz and a DTS Neo:X decoder. With the AVR-X4000, your SD video content can be converted to HD, while 1080p video can be upscaled to 4K Ultra HD.
Those of you who have tried to import an OVA directly into vCloud Director have probably noticed that this is not supported; only an OVF file can be uploaded. However, it is possible to upload an OVA directly into vCloud Director, but it does require the use of another tool called ovftool, a multi-platform command-line utility for OVF/OVA management. This article was motivated by a recent internal discussion, and I thought I'd share this little tidbit in case it is not very well known.
Before jumping into the solution, I would like to provide some background on what an OVF and an OVA are. An OVF (Open Virtualization Format) is an open, secure, portable, efficient, and flexible format for the packaging and distribution of one or more virtual machines. An OVF usually consists of several files: a descriptor file with a .ovf extension, one or more virtual disk files with a .vmdk extension, and a manifest file with a .mf extension. OVF is a DMTF standard, and it is also supported on platforms other than VMware. Contrast that with an OVA (Open Virtual Appliance), which is just a single tar archive containing the contents of an OVF. An OVA can be thought of as a container that bundles the contents of the OVF for ease of distribution. Given this difference, I suspect this might be the reason why only the OVF format is supported in vCloud Director, though I can see why it may be confusing when vSphere supports both the OVF and OVA formats.
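Since an OVA is just a tar archive, you can verify this yourself. Here is a minimal Python sketch, using only the standard library, that lists the members of an OVA (the function name and file path are my own, for illustration):

```python
# Sketch: an OVA is a plain tar archive of OVF files, so tarfile can read it.
import tarfile

def list_ova_contents(ova_path):
    """Return the member file names inside an OVA (a plain tar archive)."""
    with tarfile.open(ova_path) as tar:
        return tar.getnames()
```

Running this against a real appliance should show the familiar trio: the .ovf descriptor, the .vmdk disk(s), and the .mf manifest.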
Note: vCloud Director supports both versions 1.0 and 1.1 of the OVF format.
Going back to our initial problem, you can use ovftool to deploy an OVA directly to vCloud Director. The way this works is that the OVF is extracted from the OVA as part of the upload process (no additional space is required on the client side) prior to uploading to vCloud Director. This removes today's manual two-step process of first converting or extracting the OVF from the OVA and then uploading it to vCloud Director.
The syntax for the ovftool to import to vCloud Director is as follows:
ovftool [OVA-FILE] [VCLOUD-LOCATOR]
Note: You can find examples of the vCloud locator syntax in the ovftool documentation.
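To illustrate the shape of that locator, here is a small, hypothetical Python helper that assembles a vcloud:// target from its parts; the query keys mirror those used in the example below, but consult the ovftool documentation for the authoritative syntax:

```python
# Illustrative only: builds a vcloud:// locator string for ovftool.
from urllib.parse import quote

def vcloud_locator(host, org, vapp_template, catalog, vdc, user, password, port=443):
    """Assemble a vcloud:// target locator for ovftool."""
    return (
        f"vcloud://{quote(user)}:{quote(password, safe='')}@{host}:{port}"
        f"?org={org}&vappTemplate={vapp_template}&catalog={catalog}&vdc={vdc}"
    )
```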
Here is an example of uploading the recent vCO 5.1u1 OVA to a vCloud Director 5.1 environment using the ovftool on Mac OS X:
"/Applications/VMware Fusion.app/Contents/Library/VMware OVF Tool/ovftool" --acceptAllEulas /Users/lamw/Desktop/vCO_VA-188.8.131.52-1070383_OVF10.ova "vcloud://username:password@vcloud-director-ip-or-hostname:443?org=TechMarketing&vappTemplate=vCO-5.1u1&catalog=WorkInProgress&vdc=TM-Allocation1-ovDC"
Here is a screenshot of the upload:
The result of the above command is a new vAppTemplate called vCO-5.1u1 in the Catalog as well as a deployed vApp in my workspace. You can also opt out of deploying the vApp to the workspace by setting --vCloudTemplate to true.
If we log in to our vCloud Director instance, we can see that our vAppTemplate has been deployed:
The nice thing about this solution is that you can keep all of your OVA files for ease of distribution without having to convert them to OVFs. If you do wish to convert your OVA files to OVF, you do not need to deploy to a vSphere environment first and then export back out to an OVF; you can just use ovftool to perform the conversion.
The syntax to convert an OVA to OVF is as follows:
ovftool [OVA-FILE] [OVF-FILE]
Here is an example of converting our vCO OVA to an OVF:
"/Applications/VMware Fusion.app/Contents/Library/VMware OVF Tool/ovftool" --acceptAllEulas /Users/lamw/Desktop/vCO_VA-184.108.40.206-1070383_OVF10.ova /Users/lamw/Desktop/vCO-5.1u1.ovf
Here is a screenshot of the conversion process:
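Because an OVA is just a tar archive of the OVF contents, the extraction part of that conversion can be approximated in a few lines of Python. This is only a sketch with a made-up helper name; unlike ovftool, it performs no manifest or checksum validation:

```python
# Sketch: unpack an OVA's contents, which is the core of an OVA -> OVF conversion.
import tarfile
from pathlib import Path

def extract_ova(ova_path, dest_dir):
    """Unpack an OVA (tar archive) into dest_dir and return the .ovf descriptor path.
    Note: no manifest/checksum validation is performed here, unlike ovftool."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with tarfile.open(ova_path) as tar:
        tar.extractall(dest)
    ovfs = list(dest.glob("*.ovf"))
    return ovfs[0] if ovfs else None
```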
Get notification of new blog postings and more by following lamw on Twitter: @lamw
I covered some basics of multicast in the last blog entry here. Let's now take a look at how multicast is utilized in VXLAN deployments. During the configuration of VXLAN, you must allocate a multicast address range and also define the number of logical Layer 2 networks that will be created. For more details on the configuration steps, please refer to the VXLAN Deployment Guide.
Ideally, one logical Layer 2 network is associated with one multicast group address. Sixteen million logical Layer 2 networks can be identified in VXLAN, using a 24-bit field in the encapsulation header, but the multicast group addresses are limited (224.0.0.0 to 239.255.255.255). In some scenarios it might not be possible to have a one-to-one mapping of logical Layer 2 networks to multicast group addresses. In such scenarios the vCloud Networking and Security Manager maps multiple logical networks to a single multicast group address. With the association of multicast groups to logical networks covered, let's take a look at some details of the logical network properties.
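The exact assignment algorithm the vCloud Networking and Security Manager uses is not described here, but the many-to-one mapping can be illustrated with a simple hash-style Python sketch (the base address and pool size are hypothetical configuration values):

```python
# Illustration of mapping many 24-bit VXLAN network IDs onto a small
# pool of multicast group addresses (many-to-one when the pool is small).
import ipaddress

def vni_to_group(vni, base="239.1.1.0", pool_size=16):
    """Deterministically map a VXLAN network identifier to one of
    pool_size consecutive multicast group addresses starting at base."""
    base_addr = int(ipaddress.IPv4Address(base))
    return str(ipaddress.IPv4Address(base_addr + (vni % pool_size)))
```

With a pool of 16 groups, VNIs 5001 and 5017 land on the same group address, which is exactly the sharing scenario discussed above.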
Logical Layer 2 networks can span all the hosts managed by a vCenter Server. Virtual machines connected to a logical network behave as if they are connected to a single broadcast domain (equivalent to being on the same VLAN). For example, in the diagram below, VXLAN 5001 is a logical Layer 2 network that spans four hosts. Virtual machines running on Host 1 and Host 4 are connected to the same logical network (VXLAN 5001).
The question is how broadcast traffic is handled on the logical network. Any broadcast packet from a device connected to the logical network should reach all the devices on that network. For example, in the diagram below, if virtual machine 1 on Host 1 sends a broadcast packet, that packet has to reach the virtual machine running on Host 4. As you can see, the packet has to traverse the VTEPs and the physical network to reach the virtual machine running on Host 4.
There are a few communication options (as discussed in part 2) available to the VTEP on Host 1 when it comes to delivering a broadcast packet from the logical network: it can use unicast, broadcast, or multicast. Multicast is much more efficient in utilizing the resources of the physical network, and it is what is used when sending broadcast packets from the logical network.
Let's take a look in detail at how the packet flows through the VTEP and the physical network.
We will take the same example of one logical network spanning four hosts. The physical topology provides a single VLAN 2000 to carry the VXLAN transport traffic. In this case only IGMP snooping and an IGMP querier are configured in the physical network. As we saw with the multicast operation here, a few things have to happen before the physical network devices can handle multicast packets.
In the diagram above, the numbered blue circles indicate the packet flow:
- Virtual machine (MAC1) on Host 1 is connected to the logical Layer 2 network VXLAN 5001 and is powered on.
- VTEP on Host 1 sends an IGMP join message to the network and joins the 239.1.1.100 multicast group address that is associated with the VXLAN 5001 logical network.
- Similarly, virtual machine (MAC2) on Host 4 is connected to VXLAN 5001 and is powered on.
- VTEP on Host 4 sends an IGMP join message to the network and joins the 239.1.1.100 multicast group address that is associated with the VXLAN 5001 logical network.
The Host 2 and Host 3 VTEPs don't join the multicast group because they don't have any running virtual machines connected to the VXLAN 5001 logical network. This is where the multicast optimization comes into play: only the VTEPs that are interested in listening to the multicast group's traffic join the group.
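This membership rule can be sketched in a few lines of Python (the data structures are hypothetical simplifications of per-host VTEP state):

```python
# Sketch: a VTEP joins only the multicast groups of VXLAN segments
# that have at least one locally attached, running virtual machine.
def groups_to_join(host_vms, vm_to_vni, vni_to_group):
    """Return the set of multicast group addresses this host's VTEP should join."""
    return {vni_to_group[vm_to_vni[vm]] for vm in host_vms if vm in vm_to_vni}
```

A host with no VMs on VXLAN 5001 (like Host 2 or Host 3 above) produces an empty set, so its VTEP never sends an IGMP join for that group.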
When a broadcast packet is generated by the virtual machine on Host 1, this is how the packet flows through the physical topology and is delivered to the virtual machine running on Host 4.
The following is the flow of packets:
- Virtual machine (MAC1) on Host 1 generates a broadcast frame.
- VTEP on Host 1 encapsulates this broadcast frame in a UDP packet whose destination IP is the multicast group address 239.1.1.100.
- The physical network delivers the packet to the Host 4 VTEP, because it had joined the multicast group 239.1.1.100. The Host 2 and Host 3 VTEPs will not receive the broadcast packet.
- VTEP on Host 4 first looks at the encapsulation header, and if the 24-bit VXLAN identifier matches the logical Layer 2 network ID, it removes the encapsulation header and delivers the packet to the virtual machine.
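The VNI check in that last step can be illustrated with a small Python sketch of the 8-byte VXLAN header (a flags byte plus a 24-bit network identifier, per the VXLAN specification). The outer UDP/IP headers are omitted for brevity, so this is a simplified model rather than a full encapsulation:

```python
# Simplified VXLAN encapsulation: 8-byte header = flags, reserved, 24-bit VNI.
import struct

VXLAN_FLAG_I = 0x08  # "VNI present" flag bit in the VXLAN flags byte

def vxlan_encap(vni, inner_frame):
    """Prepend a VXLAN header carrying the 24-bit VNI to an Ethernet frame."""
    header = struct.pack("!BBHI", VXLAN_FLAG_I, 0, 0, vni << 8)
    return header + inner_frame

def vxlan_decap(packet, expected_vni):
    """Strip the header and return the inner frame only if the VNI matches."""
    flags, _, _, vni_field = struct.unpack("!BBHI", packet[:8])
    if flags & VXLAN_FLAG_I and (vni_field >> 8) == expected_vni:
        return packet[8:]
    return None
```

A receiving VTEP expecting VXLAN 5001 delivers the inner frame; one expecting a different VNI drops the packet, mirroring step 4 above.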
This is how multicast is used to deliver the broadcast traffic generated on any logical network. The other two traffic types on the logical network that will make use of multicast on the physical network are:
- Unknown Unicast frames
- Multicast frames from virtual machines
All other types of communication on the logical network are handled through the normal unicast path in the physical network.
One of the questions I always get is what happens if multiple logical networks are mapped to a single multicast group address, and what the security and performance implications of that type of configuration are. I will cover that in the next post.
Please let me know if you have any questions on these packet flows.
Get notification of these blogs postings and more VMware Networking information by following me on Twitter: @VMWNetworking