VMworld 2011 Video – Mythbusters Goes Virtual



Due to the popularity of the “Mythbusters Goes Virtual” session and the survey results, it will be repeated on Thursday at 12:30; this time the session will take place in San Polo 3404.

When you run across Mythbusters Goes Virtual, keep in mind that I guarantee you will take your notes after attending this session and change a thing or two in your own data center – promise :-) Please join Mattias Sundling and Eric Sloof on Thursday at 12:30.

Some things never change, or do they? vSphere is getting new and improved features with every release. These features change the characteristics and performance of the virtual machines. If you are not up to speed, you will probably manage your environment based on old and inaccurate information. The vMythbusters have collected a series of interesting hot topics that we have seen widely discussed in virtualization communities, on blogs and on Twitter. We’ve put these topics to the test in our lab to determine if they are a myth or not.

PCoIP Improvements in VMware View 5.0

PCoIP is VMware View's VDI display protocol, and one of its key responsibilities is capturing the remote desktop's AV output and conveying it to the user's client device. With VMware View 5.0 we introduce a variety of important optimizations to the PCoIP protocol that deliver a significant reduction in PCoIP's resource utilization, benefiting users in almost all usage scenarios. Broadly speaking, these optimizations fall into two categories: bandwidth optimizations and compute optimizations, which are discussed in more detail below.

Bandwidth Improvements

Controlling network bandwidth utilization is obviously a key consideration for VDI display protocols. This is especially true in WAN environments, where network bandwidth can be a relatively scarce and highly shared resource. View 5.0 makes significant improvements in the efficiency with which PCoIP consumes this resource, while maintaining the user experience. In many typical office/knowledge worker environments, bandwidth consumption is reduced by up to 75% (a 4X improvement). The following sections discuss the optimizations that deliver these gains.

Lossless codec

In the VDI environment, a user's screen is frequently composed of many forms of content, including icons, graphics, motion video, photos and text. It is the responsibility of the VDI display protocol to actively monitor the type of content the user is viewing and dynamically manage the compression algorithms utilized for each screen region to ensure the best user experience. For instance, naively applying lossy compression techniques to text-oriented content can result in blurred text edging, which can be very noticeable to users. Accordingly, PCoIP uses an efficient lossless compression algorithm that has been developed with text compression as a key consideration in order to minimize both bandwidth and CPU utilization.
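
To make the idea concrete, here is a minimal sketch in C of this kind of per-region codec selection. The tile size, the distinct-colour heuristic and the threshold are illustrative assumptions for this example; they are not PCoIP's actual classifier.

/* Illustrative per-region codec selection (not PCoIP's actual code):
 * classify a tile of pixels as text-like or picture-like and pick a
 * lossless or lossy encoder accordingly. */
#include <stdint.h>
#include <stdio.h>

#define TILE_W 16
#define TILE_H 16

typedef enum { CODEC_LOSSLESS_TEXT, CODEC_LOSSY_IMAGE } codec_t;

/* Text and line art tend to contain few distinct colours with hard edges,
 * while photos and video contain many colours and smooth gradients.  A
 * simple (hypothetical) heuristic: count distinct 8-bit luma values. */
static codec_t classify_tile(uint8_t luma[TILE_H][TILE_W])
{
    uint8_t seen[256] = {0};
    int distinct = 0;

    for (int y = 0; y < TILE_H; y++)
        for (int x = 0; x < TILE_W; x++)
            if (!seen[luma[y][x]]) { seen[luma[y][x]] = 1; distinct++; }

    /* Few distinct values: treat as text/UI and compress losslessly to
     * preserve sharp glyph edges; otherwise use a lossy image codec.   */
    return (distinct <= 32) ? CODEC_LOSSLESS_TEXT : CODEC_LOSSY_IMAGE;
}

int main(void)
{
    uint8_t tile[TILE_H][TILE_W] = {{0}};     /* black background...       */
    tile[4][4] = tile[4][5] = 255;            /* ...with some white "text" */

    printf("codec: %s\n",
           classify_tile(tile) == CODEC_LOSSLESS_TEXT ? "lossless" : "lossy");
    return 0;
}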

With View 5.0, PCoIP debuts a major enhancement to its lossless compression codec. The improved lossless compression algorithm delivers both greater compression ratios and improved robustness. As an example, the improved algorithm delivers twice the compression of its predecessor when applied to content containing anti-aliased fonts.

If you consider the desktop of a typical knowledge worker, there is frequently significant text content: text on web pages, emails, presentations and PDF documents. Accordingly, a significant proportion of the imaging data being transmitted to the client device is frequently compressed using lossless compression algorithms. As a result, View 5.0's improved lossless compression algorithm delivers a 30% to 40% reduction in bandwidth consumption for typical knowledge worker workflows.

Client-side image Caching

Amongst its many responsibilities, PCoIP is tasked with efficiently communicating desktop screen updates to the client device for local display. In many instances, only a small region of the screen may change. VDI protocols such as PCoIP perform spatial filtering and only send information related to the portion of the screen that changed (rather than naively sending the entire screen). However, in addition to spatial filtering, temporal analysis can also be performed. For instance, consider minimizing an application, dragging a window, flicking through a slide set or scrolling through a document. In all these examples, each successive screen update will be largely composed of previously seen (potentially shifted) pixels. As a result, if the client device maintains a cache of previously seen image blocks, PCoIP can deliver significant bandwidth savings by merely encoding these portions of the screen update as a series of cache indices rather than retransmitting the blocks.

View 5.0 introduces a client-side image cache, providing bandwidth savings of 30% in many instances (typical knowledge worker workflows). This cache is not merely a simple fixed-position cache, but captures both spatial and temporal redundancy in the screen updates.
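
Conceptually, the scheme can be sketched as below in C. The 16x16 block size, the hash and the direct-mapped cache organisation are assumptions made for illustration; they are not the actual PCoIP cache design.

/* Illustrative client-side block cache (not the actual PCoIP design).
 * The sender tracks which blocks the client already holds; for each
 * changed block it transmits either a short cache index (hit) or the
 * full block (miss).  Block size, hash and cache size are assumptions. */
#include <stdint.h>
#include <stdio.h>

#define BLOCK_BYTES   (16 * 16 * 4)   /* 16x16 pixels, 32-bit RGBA        */
#define CACHE_ENTRIES 4096            /* parameter shared with the client */

static uint64_t cache_hash[CACHE_ENTRIES];   /* 0 = empty slot */

static uint64_t block_hash(const uint8_t *block)
{
    /* FNV-1a over the raw pixels: cheap and adequate for a sketch. */
    uint64_t h = 1469598103934665603ULL;
    for (int i = 0; i < BLOCK_BYTES; i++) {
        h ^= block[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/* Returns 1 if the block is already cached (send only the index),
 * 0 if it must be transmitted in full (and is now remembered). */
static int lookup_or_insert(const uint8_t *block, uint32_t *slot_out)
{
    uint64_t h    = block_hash(block);
    uint32_t slot = (uint32_t)(h % CACHE_ENTRIES);

    *slot_out = slot;
    if (cache_hash[slot] == h)
        return 1;                    /* hit: a few bytes instead of a block */

    cache_hash[slot] = h;            /* miss: send the block, remember it   */
    return 0;
}

int main(void)
{
    uint8_t block[BLOCK_BYTES] = {0};
    uint32_t slot;

    /* Scrolling re-exposes the same block: first a miss, then a hit. */
    printf("first send:  %s\n", lookup_or_insert(block, &slot) ? "index" : "full block");
    printf("second send: %s\n", lookup_or_insert(block, &slot) ? "index" : "full block");
    return 0;
}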

Total Bandwidth Improvements

In combination, the compression improvements and image caching deliver bandwidth savings of around 60% (a 2.5X improvement) out of the box in both LAN and WAN use cases for typical knowledge workers.

Additional bandwidth improvements can be obtained in View 5.0 by leveraging the new image quality controls that have been introduced. By default, PCoIP will build to a lossless image: when a screen update occurs, PCoIP will almost immediately transmit an initial image for display on the client. In rapid succession, PCoIP will continue to refine the client's image until a high-quality lossy image is achieved; in PCoIP vernacular, this is termed building to a “perceptually lossless” image. If the screen remains constant, PCoIP will, in the background, continue to refine the image on the client until a lossless image is obtained (i.e. PCoIP builds to lossless, or BTL). In certain application spaces, building to a lossless image is a key feature. However, for many knowledge workers, BTL support can be disabled without a noticeable impact on image quality, and doing so can deliver significant additional savings: in many situations, disabling BTL reduces bandwidth consumption by up to around 30%.
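
The build-to-lossless progression described above can be pictured as a small per-region state machine, sketched here in C. The state names and transitions paraphrase the behaviour described in this post and are not PCoIP internals.

/* Illustrative build-to-lossless (BTL) progression for one screen region.
 * The states and transitions paraphrase the behaviour described above;
 * they are not PCoIP's actual implementation. */
#include <stdbool.h>
#include <stdio.h>

typedef enum {
    REGION_DIRTY,                  /* new content, nothing sent yet          */
    REGION_INITIAL_LOSSY,          /* quick, heavily compressed first pass   */
    REGION_PERCEPTUALLY_LOSSLESS,  /* refined to a high-quality lossy image  */
    REGION_LOSSLESS                /* fully refined (only if BTL is enabled) */
} region_state_t;

static region_state_t next_state(region_state_t s, bool screen_idle, bool btl_enabled)
{
    switch (s) {
    case REGION_DIRTY:
        return REGION_INITIAL_LOSSY;          /* ship something immediately */
    case REGION_INITIAL_LOSSY:
        return REGION_PERCEPTUALLY_LOSSLESS;  /* rapid refinement           */
    case REGION_PERCEPTUALLY_LOSSLESS:
        /* Background refinement to fully lossless only happens while the
         * screen stays static and the administrator has left BTL enabled. */
        return (screen_idle && btl_enabled) ? REGION_LOSSLESS : s;
    default:
        return s;                             /* lossless: nothing left to do */
    }
}

int main(void)
{
    bool btl_enabled = false;                 /* BTL disabled to save bandwidth */
    region_state_t s = REGION_DIRTY;

    for (int step = 0; step < 4; step++) {
        s = next_state(s, /*screen_idle=*/true, btl_enabled);
        printf("step %d -> state %d\n", step, (int)s);
    }
    return 0;   /* with BTL off, the region settles at perceptually lossless */
}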

Combining the compression improvements, client caching and disabling BTL commonly delivers a bandwidth improvement of up to 75% (a 4X improvement), for typical office workloads!

CPU Improvements

In VDI environments, desktop consolidation is a key consideration. The more user desktops that can be handled per system (i.e. the higher the consolidation ratio), the better the cost savings that can be realized. Accordingly, the CPU overheads introduced by the VDI protocol must be carefully constrained. With View 5.0, PCoIP has been further enhanced to minimize its CPU overhead in a number of significant ways.

Idle CPU usage

From the VDI protocol’s perspective, unless the user is viewing a video, the user is idle for a large proportion of the time. For instance, if a user loads a new web page, there is a flurry of activity as the web page loads and the screen update is displayed, but many seconds or even minutes may elapse with the screen remaining static, as the user reads the content of the page. For a VDI protocol, it is important not only to encode any screen changes efficiently, but also to minimize the overheads associated with all background activities that occur during these idle periods.

With View 5.0, we have significantly optimized these code paths, and PCoIP’s idle CPU consumption is now negligible. Furthermore, the session keep-alive (aka heartbeat) bandwidth has been reduced by a factor of two for many workloads.
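
As a rough sketch of what this means in practice, the toy loop below (hypothetical names and intervals, not PCoIP code) only does encoding work when the screen actually changes and otherwise limits itself to an occasional small keep-alive.

/* Illustrative server-side session loop.  All names and the heartbeat
 * interval are hypothetical.  The point: an idle desktop should cost
 * almost no CPU, because encoding work is skipped entirely when nothing
 * has changed and only a tiny keep-alive packet goes out periodically. */
#include <stdbool.h>
#include <stdio.h>

#define KEEPALIVE_INTERVAL_TICKS 10   /* hypothetical heartbeat period */

/* Stand-in for "did the framebuffer change this tick?" */
static bool framebuffer_dirty(int tick)
{
    return tick == 3;                 /* simulate one screen update */
}

int main(void)
{
    int last_keepalive = 0;

    for (int tick = 0; tick < 25; tick++) {
        /* A real implementation blocks on an event here instead of
         * polling, so the idle path consumes essentially no CPU.   */
        if (framebuffer_dirty(tick))
            printf("tick %2d: encode and send dirty regions\n", tick);

        if (tick - last_keepalive >= KEEPALIVE_INTERVAL_TICKS) {
            printf("tick %2d: send keep-alive (a few bytes)\n", tick);
            last_keepalive = tick;
        }
        /* Otherwise: do nothing at all during idle ticks. */
    }
    return 0;
}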

Optimized algorithms and code

In View 5.0, many of the hottest image processing and compression functions have been reexamined, their algorithms tweaked for efficiency and their implementation further optimized – in some situations, even coded in assembly to realize the absolute lowest computational overheads.

Effectively using Hardware Instructions

Image manipulation operations are typically well suited to acceleration via SIMD (Single Instruction Multiple Data) instructions, such as the SSE instructions supported on recent x86 processors. With View 5.0, PCoIP has been optimized to take even greater advantage of the SSE SIMD support available on x86 processors, not only expanding coverage of the code base, but also leveraging, when available, the SSE instructions on the very latest processors (e.g. SSE 4.2 and AES-NI).
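
As a concrete, if simplified, illustration of this kind of SIMD work (not PCoIP source code), the snippet below uses SSE2 intrinsics to check sixteen pixel bytes per instruction for changes between two frames; a runtime CPU-feature check could then dispatch to newer instruction sets such as SSE 4.2 where available.

/* Illustrative SSE2 use for display-protocol work (not PCoIP code):
 * detect whether a 16-byte run of pixels has changed between two frames,
 * sixteen bytes per instruction instead of byte-by-byte.
 * Build with: gcc -O2 -msse2 diff.c */
#include <emmintrin.h>   /* SSE2 intrinsics */
#include <stdint.h>
#include <stdio.h>

/* Returns nonzero if any of the 16 bytes differ between prev and cur. */
static int run_changed_sse2(const uint8_t *prev, const uint8_t *cur)
{
    __m128i a  = _mm_loadu_si128((const __m128i *)prev);
    __m128i b  = _mm_loadu_si128((const __m128i *)cur);
    __m128i eq = _mm_cmpeq_epi8(a, b);          /* 0xFF where bytes match */
    return _mm_movemask_epi8(eq) != 0xFFFF;     /* any lane not matching? */
}

int main(void)
{
    uint8_t prev[16] = {0}, cur[16] = {0};
    cur[5] = 42;                                 /* simulate a pixel change */

    /* Newer CPUs could be dispatched to newer instructions (e.g. SSE 4.2)
     * at runtime, for instance via __builtin_cpu_supports("sse4.2").     */
    printf("row changed: %d\n", run_changed_sse2(prev, cur));
    return 0;
}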

Conclusion

In conclusion, with the introduction of View 5, we have spent significant time optimizing PCoIP to further reduce both its bandwidth and CPU consumption, delivering improved responsiveness, higher consolidation ratios and better WAN scalability.

 


Towards Virtualized Networking for the Cloud

Posted by Steve Herrod
Chief Technology Officer

VMworld 2011 is well underway, with more than 19,000 attendees gathered in Las Vegas to learn about, celebrate, and drive the future of both virtualization and cloud computing. The amount of news has been staggering, but I want to take some time in this post to focus on one particularly important announcement: a new vision and approach for networking in the cloud era.

Cloud computing holds the promise of accessing shared resources in a secure, scalable, and self-service manner, and these core tenets place huge demands on today’s physical network infrastructure. While compute and storage are virtualized, the network is still a physical impediment to full workload mobility and can inhibit multi-tenancy and scalability goals. Even with VLAN technologies, the network continues to restrict workloads to the underlying physical network and to non-scalable, hard-to-automate constructs.

Have we seen this before?

I like to think about this problem as similar to one we’ve previously seen in the telephony industry. One of the fundamental challenges with today’s networking is that we use an IP address for two unrelated purposes, as an identity AND as a location. Tying these together restricts a (virtual) machine from moving around as easily as we would like. We had the same challenge with telephony before wireless came of age… our phone number rang for us at a specific destination rather than following us wherever we went!


Just as our mobile phone numbers allow us to take calls virtually anywhere, separation of a machine’s network ID from its physical location enables more mobility and efficiency for applications. And this is exactly what we’re after in the cloud… a model that enables the efficient and fluid movement of virtual resources across shared cloud infrastructures both within and across datacenters. This improved mobility will ultimately enable better approaches to load balancing, disaster recovery, power-usage optimization, datacenter provisioning and migration, and other challenges approaching us in the cloud era.

Welcome VXLAN!

VMware has collaborated with Cisco and other industry leaders to develop an innovative solution to these challenges called “VXLAN” (Virtual eXtensible LAN). VXLAN enables multi-tenant networks at scale, and it is the first step towards logical, software-based networks that can be created on-demand, enabling enterprises to leverage capacity wherever it’s available. How does it work?

Using “MAC-in-UDP” encapsulation, VXLAN provides a Layer 2 abstraction to virtual machines (VMs), independent of where they are located. It completely untethers the VMs from physical networks by allowing VMs to communicate with each other using a transparent overlay scheme over physical networks that can span Layer 3 boundaries. Since the VMs are completely unaware of the physical network’s constraints and only see the virtual Layer 2 adjacency, the fundamental properties of virtualization, such as mobility and portability, are extended across traditional network boundaries. Furthermore, logical networks can be easily separated from one another, simplifying the implementation of true multi-tenancy.
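
To make the encapsulation concrete, the sketch below constructs the 8-byte VXLAN header described in the IETF draft referenced later in this post: a flags byte with the I bit set, reserved fields, and a 24-bit VXLAN Network Identifier (VNI). The outer Ethernet, IP and UDP headers that carry it, and the original Layer 2 frame that follows it, are omitted for brevity.

/* Sketch of the 8-byte VXLAN header from the referenced IETF draft:
 * one flags byte (I bit set to mark a valid VNI), 24 reserved bits,
 * a 24-bit VXLAN Network Identifier (VNI), and 8 more reserved bits.
 * The original L2 frame follows this header inside an outer
 * Ethernet/IP/UDP envelope (not shown here). */
#include <stdint.h>
#include <stdio.h>

#define VXLAN_FLAG_VNI_VALID 0x08   /* "I" flag: VNI field is valid */

/* Writes the 8-byte VXLAN header for the given 24-bit VNI into out[8]. */
static void vxlan_write_header(uint8_t out[8], uint32_t vni)
{
    out[0] = VXLAN_FLAG_VNI_VALID;          /* flags                   */
    out[1] = out[2] = out[3] = 0;           /* reserved                */
    out[4] = (uint8_t)(vni >> 16);          /* VNI, network byte order */
    out[5] = (uint8_t)(vni >> 8);
    out[6] = (uint8_t)(vni);
    out[7] = 0;                             /* reserved                */
}

int main(void)
{
    uint8_t hdr[8];
    vxlan_write_header(hdr, 5001);          /* example tenant segment  */

    for (int i = 0; i < 8; i++)
        printf("%02x ", hdr[i]);
    printf("\n");
    return 0;
}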

And VXLAN enables better programmability by providing a single interface to authoritatively program the logical network. Operationally, it will provide the needed control and visibility to the network admin while allowing the flexibility of elastic compute for the cloud admin.

VXLAN can also be implemented to be very efficient and resource-savvy. We take advantage of efficient multicast protocols for the VMs’ broadcast and multicast needs. We leverage Equal-Cost Multi-Path (ECMP) in the core networks for efficient load sharing. And within the virtualized environment, we leverage vSphere’s DVS, vSwitch, and network I/O controls to ensure the VMs get the bandwidth and security they require. Cisco will certainly leverage the N1000V switch as one key place for VXLAN implementation, and other partners will soon announce their approaches as well.
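
One way to picture the multicast handling: each VXLAN segment (VNI) is associated with an IP multicast group that the participating hosts join, and broadcast or unknown-destination traffic for that segment is sent to the group. The mapping function below is purely illustrative; real deployments configure the VNI-to-group association administratively.

/* Illustrative mapping of a VXLAN segment (VNI) to an IP multicast group
 * used for that segment's broadcast/unknown-unicast traffic.  Real
 * deployments configure this mapping administratively; the arithmetic
 * below is only an example spreading VNIs over a 239.x.x.x range. */
#include <stdint.h>
#include <stdio.h>

/* Returns an IPv4 multicast group (host byte order) for a 24-bit VNI,
 * within the administratively scoped 239.0.0.0/8 range. */
static uint32_t multicast_group_for_vni(uint32_t vni)
{
    return (239u << 24) | (vni & 0x00FFFFFFu);
}

int main(void)
{
    uint32_t vni   = 5001;
    uint32_t group = multicast_group_for_vni(vni);

    printf("VNI %u -> multicast group %u.%u.%u.%u\n",
           vni,
           (group >> 24) & 0xFF, (group >> 16) & 0xFF,
           (group >> 8) & 0xFF,  group & 0xFF);
    return 0;
}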

A Collaboration

VMware has collaborated closely with Cisco and industry leaders including Arista, Broadcom, Brocade, Emulex, and Intel in making this an industry-wide effort and to ensure a seamless experience across virtual and physical infrastructure. As part of this effort, we have published an informational IETF draft (see http://www.ietf.org/id/draft-mahalingam-dutt-dcops-vxlan-00.txt) to detail the use case and the technology. To achieve its full potential, VXLAN must be adopted across the industry, and we’re committed to helping this happen in an open and standards-compliant way.

In Closing… 

VXLAN is the flagship in a growing set of capabilities that deliver a new model of networking for the cloud. For some additional context, be sure to check out Allwyn’s blog on logical networks from May. VXLAN addresses the physical limitations associated with today’s networking infrastructures and offers a model that enables the efficient and fluid movement of virtual resources across cloud infrastructures. What’s more, it does so in an evolutionary way that leverages today’s network infrastructure investments. Stay tuned for even more updates on this exciting new development!


iPhone prototype N94 leaked, possibly to be an iPhone 4S

There are lots of crazy rumors flying around about the new iPhone announcement coming later on this year (or as soon as next month), but this is one of the more credible. The picture above comes from a site called UbreakIfix through our friends at Engadget, and purports to show a prototype for the iPhone called the N94. It's a long story (which you can click through to read yourself), but essentially, the latest rumor says that this device is the latest testing version of what may become an "iPhone 4S," a slightly cheaper version of the iPhone 4 set to be introduced right alongside the iPhone 5.

Of course, these are all still rumors, and Engadget admits the timing isn't quite right on this one -- this is apparently an "Engineering Verification Test" piece from last March, which makes it older than some of the other prototypes that have reportedly leaked out. It's unknown whether this is the real thing or just another test unit.

But the wheels are clearly in motion on a new iPhone. And if the rumors play out as predicted later on next month, we might see not one but two new iPhones available for purchase.


Passed VCP5 – thoughts

I did a small yet vague (has to be) writeup here: http://www.vjason.com/2011/08/29/vcp5-exam-passed/

 

My overall opinion is that if you feel like you could pass the VCP4 exam today, and you spent a good 12-16 hours going through vSphere 5 features, setup and maybe some upgrades, you would be well on your way to passing the VCP5. Helping me out is 3.5 total years of vSphere experience: ~1.5 with 4.x and 2 with 3.5, including deployments and upgrades in both cases.

 

If all you have done is administer existing farms, I recommend that you set up a lab and go through all the heavily used features such as DRS, HA, FT, VUM (including host upgrades), some vCLI interaction, distributed vSwitches, resource pools, shares, limits, vApps, and so on. Worried about not having any shared storage experience? Download an EMC VNX VSA appliance here and you'll be able to mimic a very real-world setup from the hosts to the storage (NetApp makes a VSA as well, but it isn't publicly available). Refer to the exam blueprint if you need ideas for additional topics to study.

 

Having taken both the 410 and 510 exams I can say that they felt similar. Of 85 questions I marked 10 for review, and upon review I felt confident of my answers for almost all of them. I spent about 90 minutes in total on the exam.

 

My 2 cents: the exam is fair, and I suspect that over the next few months we will see a flood of VCP5s.

 

- Jason

VMworld Labs

If you have ever attended VMworld before, you’ll know that one of the top experiences is the VMware Labs. There’s an awful lot of talk and PowerPoint at these kinds of events, and the VMware Labs offer an excellent chance to escape the hot air and actually get your hands dirty with the technology. I think I will be doing a number of them myself as part of my research for the “Hotel California” book. It’s going to cover a lot of technologies I’ve never touched before, and I see the VMware Labs as a good opportunity to dip my toe in the water, so to speak.

All labs are self-paced and walk-up, so there aren’t massive lines with labs only starting at set times; that’s a thing of the past. The labs are delivered through a dedicated “lab cloud”, and the lab guys who build this stuff are real pioneers within VMware. For example, they were one of the first groups to use “nested ESX” (ESX inside a VM) to ease the spinning up of various environments. So, if you like, the VMware Labs are a real example of VMware “eating its own dog food”…

Here’s a graphic that shows how the Labs are being delivered to Las Vegas…

You can see VMworld Labs by solution here: http://www.vmworld.com/community/conference/us/learn/sessions-labs

Here are some exclusive behind-the-scenes views of the VMware Labs being built out… Great job, guys!

After doing my key labs, I’m hoping to catch up with subject matter experts at the Genius Bar. Despite my rather good contacts with VMware, my links tend to be restricted to certain product silos – such as vSphere, View, SRM, vCD and vCHBS. But I know there are vast swathes of the company that I have little or no contact with. So I’m seeing this as a chance to pick up a few business cards – to act as a first contact with those different product groups.