What is Composable Infrastructure and where is it targeted?

Last month, just before the flooding, I was onsite at HPE in Houston, TX, for a Tech Day focused on the HPE Hyper Converged and Composable Infrastructure portfolio.  I have since had a chance to reflect on everything I learned about HPE’s Composable strategy, and on the larger industry direction of containers and orchestration and where it is all taking us.  There are potential benefits and plenty of hurdles in both of these major industry initiatives.  The reason behind these concepts and solutions is delivering faster results and more flexibility for IT organizations.  While at the Tech Day, I was able to dig in again, refresh what I’d heard, and learn more detail about HPE’s Composable strategy.

What is Composable Infrastructure?

I’ve had the opportunity to drill into HPE’s and the broader industry’s definition of Composable Infrastructure a couple of times in the past year.  HPE released the Synergy Platform last December at HP Discover in London.  It was the first purpose-built hardware platform for composability, but the people I talked with at HP cautioned me that it would not be the only platform.

Composability is the concept of taking standard compute, storage and networking hardware and assembling it in a particular way, using orchestration, to meet a specific use case.   When that use case is completed, you can decompose and recompose the same components in a different way to meet new needs.  Beyond that, it also includes the concept of doing this with hardware on demand as your usage and requirements change throughout the day.   For peak workloads, you may need to compose infrastructure to run a billing cycle, and then decompose and recompose the same components to be web front-end servers for e-commerce during peak usage, like Black Friday.  The key here is orchestration and, with the orchestration, speed.

In many ways, Composable Infrastructure provides a pool of resources similar to how virtualization provides access to a pool of resources that can be consumed in multiple ways.  The critical difference is that these are physical resources being sliced and diced without a hypervisor layer providing the pooling.

Characteristics of Composable Infrastructure

HPE has a strict definition it is following for Composable Infrastructure.  If any of these characteristics are missing, HPE will not classify a solution as Composable, no matter how closely it may resemble one.

  • Unified API
    • Single line of code to abstract every element of the infrastructure for full infrastructure programmability
    • Bare-metal interface of infrastructure as a service
  • Software-Defined Intelligence
    • Template driven workload composition
    • Automation to enable streamlined operation of the system (HPE calls it frictionless operation)
  • Fluid Resource Pools
    • Single infrastructure of disaggregated pools
    • Physical, virtual and containers
    • Auto-integrating of resource capacity

The API is straightforward, as is the ability to use templates to define and automate system builds.  Fluid resource pools also make sense if you’ve spent any time with a virtualization technology.  The point that didn’t immediately make sense to me was ‘frictionless operations.’
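
Before getting to frictionless operations, here is a minimal sketch of what that Unified API and template-driven composition can look like in practice.  It is written in PowerShell against an HPE OneView/Composer-style REST API; the hostname, credentials, template name, and even the endpoint paths and payload fields are illustrative assumptions, so treat this as a sketch and check the current OneView API reference for the exact contract.

  # Illustrative sketch only: endpoint paths, API version and field names are assumptions.
  $composer = "https://composer.example.local"
  $headers  = @{ "X-API-Version" = "300"; "Content-Type" = "application/json" }

  # 1. Authenticate and capture a session token.
  $body  = @{ userName = "administrator"; password = "password" } | ConvertTo-Json
  $login = Invoke-RestMethod -Method Post -Uri "$composer/rest/login-sessions" -Headers $headers -Body $body
  $headers["Auth"] = $login.sessionID

  # 2. Find the server profile template that describes the workload (the software-defined intelligence piece).
  $templates = Invoke-RestMethod -Method Get -Uri "$composer/rest/server-profile-templates" -Headers $headers
  $template  = $templates.members | Where-Object { $_.name -eq "Web-Frontend" }

  # 3. Compose: request a new server profile from that template on a free compute node.
  $profileBody = @{
      name                     = "web-frontend-01"
      serverProfileTemplateUri = $template.uri
      serverHardwareUri        = "/rest/server-hardware/placeholder-compute-node"
  } | ConvertTo-Json
  Invoke-RestMethod -Method Post -Uri "$composer/rest/server-profiles" -Headers $headers -Body $profileBody

The point is less the exact calls and more that a single API surface covers what you would otherwise do across separate server, storage and network tools.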

In terms of frictionless operations, what HPE is talking about today is automation and workflow tools within the management interface that streamline the upgrade processes required on the system.  Those may include updates to OS images as well as firmware and driver bundles, along with rollups for the management interface itself.

Self-Healing Infrastructure

Now, take this concept and definition a step further and layer on monitoring and mitigation software.  What becomes possible is a self-healing infrastructure, reacting to events and remediating them on its own.  Self-healing is a compelling concept, but attempts so far have been far from exceptional, and it is really tough to achieve.  There is a huge amount of work required in standardization and orchestration that just doesn’t fit with traditional IT software and OS concepts.  Where I say traditional, HPE uses the word legacy or old-style.  They’re talking about client-server applications, the commercial, off-the-shelf software so many enterprises are running today.

But looking into the future, HPE can see that self-healing can be realized with cloud-native, scale-out software solutions.  And it is betting that if it can build a physical infrastructure capable of programmatically assembling and disassembling systems on demand, it can power this self-healing future.

HPE Synergy Platform

HPE Synergy Platform is the hardware HPE believes will power the future of on-premises, cloud-native IT along with the flexibility to host legacy applications in the same pool of resources.

While examining the Synergy Platform at Discover, I first noticed that the hardware looks very similar to an HPE BladeSystem chassis.  It differs in the number of compute nodes, the dimensions of those compute nodes, and the networking, but physically it looks very much like a BladeSystem.  What becomes apparent quickly is that you have new capabilities, like adding a disk shelf that spans two compute bays and provides a pool of disks that can be shared throughout a Synergy Frame (no longer called a chassis).

[Image: HPE Synergy with Composer, Storage Module, and 480, 660, 620 and 680 Compute Modules]

Compute

Blades become compute nodes, but the concept is much the same.  The form factor does not change much: you have half-height and full-height options for compute nodes in Synergy, though the dimensions are larger to accommodate a fuller range of hardware inside the compute node.  Starting with the half-height option, HPE is offering the Synergy 480 Compute Node, a Gen9 server equipped to meet much the same use cases as the BL460c.

Another change in Synergy is that the slots for compute nodes no longer restrict you to a single unit; there is no metal separation between the compute bays.  You can do double-width units in Synergy, in addition to full-height.  HPE is making use of that with the double-width, full-height Synergy 680 Compute Node.  The 680 is an impressive blade with four sockets and up to 6TB of RAM across 96 DIMM slots.  It is a beast of a blade.  Other full-height options are the Synergy 620 and 660 Compute Nodes.

Composer & Image Streamer

Orchestration is really what sets the Synergy Platform apart from the c7000 blade frames from HP.  That orchestration is achieved by modules that sit in the Synergy Platform’s management bays.

First, the Synergy Composer is the brains and management of the operation, and it is built on HPE OneView.  From an architecture standpoint, a single Composer module can manage up to 21 interlinked frames.  Each frame has two 10Gb management connections, on a Frame Link Module, that link frames together.  Each frame connects upstream to one frame and downstream to another, forming a management ring.  Using this management network, the Synergy Composer is able to manage all 21 frames of infrastructure.  Although each frame has a slot for a Composer management module, only one is required, and a second can be added in a different frame to establish high availability.

Synergy Image Streamer is all about the boot images.  You take a golden image, create a clone-like copy (similar to VMware’s linked clones), and boot it; when the image is rebooted, nothing is retained.  Everything about the image must be sequenced and configured during boot.  Stateless operation is very much a cloud-native concept, requiring additional services to be deployed in the environment, like centralized logging, to enable long-term storage of data from the workloads.  Composer also takes updates and patching into account by allowing the administrator to commit these to the golden master and then kick off a set of rolling reboots to bring all the running images up to date.  Just like the Composer, the Image Streamer needs only a single module, or two in different frames for redundancy.

Management in Synergy is built to scale, eliminating the need for onboard management modules and for separate software outside the hardware to manage many units.

OS Support for Streaming and Traditional OS Support

At launch, Synergy’s Image Streamer supports Linux, ESXi and Container OSes with the full benefit of image, config and run.  The images are stateless, meaning nothing is retained when the system reboots.  Windows was not supported with the Image Streamer as of December.  The Synergy and composable concept is clearly targeted at bare-metal deployments of cloud-native systems.

Now, even though you can’t use the Image Streamer to run Windows or stateful Linux on Synergy, that doesn’t mean they won’t run.  It is still possible to create a boot disk and provision it to a compute node (boot from SAN, boot from USB or SD, and so on).  For compatibility, you can use a Synergy compute node like a traditional rack-mount or blade server.  Of course, when you do, you lose all of the potential benefits of the platform’s imaging and automation engines.

The Fabric

The Synergy Fabric is the other huge differentiating factor for the Synergy Platform.  A single frame is probably not what you’ll see deployed anywhere.  Synergy is built to scale up within the rack, with up to 5 frames interconnected to the same converged fabric that provides Ethernet, Fibre Channel, FCoE and iSCSI across all of the frames.  Synergy uses a parent/child module design to extend the managed fabric across multiple frames: a parent module is inserted in one frame and child modules in up to 4 additional frames.  Similar to the Composer and Image Streamer modules, Synergy uses a pair of parent interconnect modules in different frames to achieve high availability.  Management of the interconnect modules and fabric happens through a single interface and utilizes MLAG connections between the modules to communicate management changes.

Shared Local Disk

StoreVirtual is a primary use case for the Synergy Platform, and HPE expects many users will choose its VSA to do software-defined storage in Synergy.  But it is hard to get enough disks to matter in the blade form factor.   To address this, HPE is also showing off a new disk shelf that fits into two compute bays in a frame.  The HPE Synergy D3940 disk shelf can hold up to 40 small form factor disks that can be carved up and consumed by any of the compute nodes in the frame.  One important limitation, however, is that these local disks are only accessible inside a single frame: the SAS module used for these disks sits in a separate bay and is separate from the fabric, so all the disks must be presented to compute within the frame.

However, StoreVirtual comes to the rescue.  Either on bare metal or, more likely, as a VSA, StoreVirtual can take those local disks and present them in a way that can be consumed by, or clustered with, compute in other frames.  All 40 disks may be presented to a single StoreVirtual instance and then clustered with two additional StoreVirtual instances in other frames, and the fabric can then consume the storage from StoreVirtual.  The great thing is that you, as a consumer, have the choice to instantiate and then recompose these resources as needed on Synergy.

Who benefits from these characteristics?

It is critically important to note that you can run traditional workloads side by side with cloud-native workloads.  While cloud-native workloads benefit more from the Image Streamer and its stateless OS operation, traditional IT workloads can run on the same hardware.  From the Composer, you can assemble a traditional server, booting from SAN or from local disk in the compute node, and run a client-server application.  This flexibility is important as organizations attempt to build the new style of apps while still needing to support existing applications.  It means that a single hardware platform will be able to deliver both.  Unfortunately, organizations that choose to run legacy systems on Synergy won’t fully realize all the benefits the Synergy Platform has to offer, since most do not apply to those legacy workloads, but having the flexibility is key.

From my viewpoint, no company is going to buy into Synergy simply to run legacy applications on it.  The real benefits are for companies that are planning, or are in the middle of, an application transition.  For companies that have not begun the transition process, Synergy and Composable Infrastructure are going to be a tough sell, because they are not yet thinking in ways where the benefits Synergy delivers matter.

Fascinating EMC World 2016 vLab stats

I’m always curious about what people are interested in – and they vote with their feet and their dollars.  One measure is the hands-on-labs at events like EMC World, VMworld and others.

So – here are the Hands-on-Labs stats from EMC World 2016 (thanks to the EOS2 vLab team!) – and remember – these are all available to you post-EMC world (see this post here)

 

[Image: EMC World 2016 Hands-on-Labs stats]

No surprise that Unity and VxRail figured highly, but I’m really glad to see RecoverPoint for VMs up there (a great product, completely underrated, not enough people know about it).

The fascinating one is Docker, Mesos and ScaleIO – check that out!!

 

[Image: Hands-on-Labs stats, continued]

… and continued…

[Image: Hands-on-Labs stats, continued]

We also do guided labs – again, notice the pattern of what people are interested in…

[Image: Guided labs stats]

Thanks everyone for participating!!!

ownCloud Desktop Client 2.2.0 Available

Today the desktop team has made version 2.2.0 available, which will notify you of server events and sync issues while also introducing many performance and reliability improvements. Read on to learn what is new!

Features

The most notable improvement in this release is support for server notifications on the client. When, for example, a new share can be accepted by the user or when the system administrator wants all users to know about a scheduled maintenance window, the client will show a desktop notification.

Another addition to the notifications is a warning of a detected conflict or sync problem. This enables the user to take direct action rather than having to find out much later or not at all that one or more files were not synced as expected.

Other UI improvements include showing avatars and an activity spinner in the sharing UI in file managers and a simplified sync folder creation dialog.

Faster and more reliable

A recent addition to both server and client was the introduction of checksums, which allow the client to verify whether files were correctly uploaded and downloaded. This client release will check if the server correctly supports this feature and, if so, use it.

Other changes make syncing more reliable in specific situations. The client will now sync immediately after a lock is released on Windows, and it deals better with file locking on Windows and on networks in general. Handling of errors with storage located on USB devices was improved, and the Ubuntu 16.04 icon tray now works. The new client also warns about older server versions it can’t work with. Many more smaller fixes were implemented, and on the security side, the client now supports the Windows credential store.

A number of performance improvements were also made, such as faster handling of file overlay icons and faster uploading of both small and very large files.

Find a more complete overview of changes in the desktop client changelog and grab the 2.2.0 version for your operating system from our installation page.

My VMworld 2016 Sessions

Hey, another VMworld is almost upon us, and public voting has just opened. This is your chance to make VMworld 2016 what you want it to be.

I just want to shamelessly plug two sessions that I am involved in this year, so you can all go and vote for them! Don’t forget to vote for all of the other great sessions that have been proposed.

#8909 Common Mistakes VCDX Candidates Make – The Panelists View

VMware Certified Design Expert (VCDX) candidates all invest a lot of time researching what they need to do to pass the VCDX certification, from preparing their submission to actually defending in front of the panel. There are great VCDX mentors in the community that can give you tips on how they passed, but rarely will you hear about mistakes candidates have made. This is your opportunity! Join two experienced VCDX panelists who have sat through countless panels, seen the mistakes that are repeated from candidate to candidate and whose insights will improve your VCDX journey.

#8648 Architecting VSAN for Horizon – the VCDX way!

VMware Horizon is a proven desktop virtualization solution that has been deployed around the world. Balancing the performance and cost of a storage solution for Horizon can be difficult and affects the overall return on investment. VMware’s Virtual Storage Area Network (VSAN) has given architects a new weapon in the battle for desktop virtualization. VSAN allows architects to design a low-cost, high-performance hybrid solution of solid state and spinning disks, or all-flash for the ultimate desktop performance. VSAN now includes features such as dedupe, compression and metro clustering, which provide greater options to fit your use cases. Learn from two Double VCDXs how to architect Horizon on a VSAN solution to provide the levels of performance your users need, with management simplicity that will keep your administrators happy, and at a cost that will ensure your project is a success.

VMware Virtual SAN 6.2 Network Design Guide

Virtual SAN is a hypervisor-converged, software-defined storage solution for the software-defined data center. It is the first policy-driven storage product designed for VMware vSphere® environments that simplifies and streamlines storage provisioning and management.



Virtual SAN is a distributed, shared storage solution that enables the rapid provisioning of storage within VMware vCenter Server™ as part of virtual machine creation and deployment operations. Virtual SAN uses the concept of disk groups to pool together locally attached flash devices and magnetic disks as management constructs.

Disk groups are composed of at least one cache device and several magnetic or flash capacity devices. In hybrid architectures, flash devices are used as a read cache and write buffer in front of the magnetic disks to optimize virtual machine and application performance. In all-flash architectures, the endurance of the cache device is leveraged to allow lower-cost capacity devices.
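
As a rough illustration of that composition, here is a hedged PowerCLI sketch of building a hybrid disk group on a single host.  It assumes the New-VsanDiskGroup cmdlet available in recent PowerCLI releases, and the vCenter, host and device names are placeholders.

  # Create a hybrid disk group: one flash cache device plus two magnetic capacity devices.
  # Cmdlet and parameter names assume a recent PowerCLI release; device names are placeholders.
  Connect-VIServer -Server vcenter.example.local

  $vmhost = Get-VMHost -Name "esxi-01.example.local"
  New-VsanDiskGroup -VMHost $vmhost `
      -SsdCanonicalName "naa.5000000000000001" `
      -DataDiskCanonicalName "naa.5000000000000002", "naa.5000000000000003"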

The Virtual SAN datastore aggregates the disk groups across all hosts in the Virtual SAN cluster to form a single shared datastore for all hosts in the cluster. Virtual SAN requires a correctly configured network for virtual machine I/O as well as for communication among cluster nodes. Since the majority of virtual machine I/O travels the network due to the distributed storage architecture, a high-performing and highly available network configuration is critical to a successful Virtual SAN deployment.

This paper gives a technology overview of Virtual SAN network requirements and provides Virtual SAN network design and configuration best practices for deploying a highly available and scalable Virtual SAN solution.

New Book – PowerCLI Essentials

Have you ever wished you could automatically get a report with all the relevant information about your VMware environments in exactly the format you want? Or that you could automate a crucial task that needs to be performed on a regular basis?

Powerful Command Line Interface (PowerCLI) scripts do all these things and much more for VMware environments. PowerCLI is a command-line interface tool used to automate VMware vSphere environments.

It is used to handle complicated administration tasks through the use of various cmdlets and scripts, which are designed to handle certain aspects of vSphere servers and to help you manage them.

This book will show you the intricacies of PowerCLI through real-life examples so that you can discover the art of PowerCLI scripting. At the start, you will be taught to download and install PowerCLI and will learn about the different versions of it.

Moving further, you will be introduced to the GUI of PowerCLI and will find out how to develop single-line scripts to duplicate routine tasks, produce simple reports, and simplify administration. Next, you will learn about the methods available to get information remotely.
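
To give a flavour of the kind of single-line reporting the book starts with, here is a small example; the vCenter name and output path are placeholders, but the cmdlets are standard PowerCLI.

  # Connect to vCenter and export a quick virtual machine inventory report to CSV.
  Connect-VIServer -Server vcenter.example.local

  Get-VM | Select-Object Name, PowerState, NumCpu, MemoryGB |
      Export-Csv -Path "C:\Reports\vm-inventory.csv" -NoTypeInformation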

Towards the end, you will be taught to set up Orchestrator and build workflows in PowerShell, along with Update Manager and SRM scripts.

  • Download and install PowerCLI and its basics as well as the basics of PowerShell
  • Enhance your scripting experience
  • Build longer scripts and simpler reports
  • Relate a task in VMware administration to a PowerCLI script
  • Discover methods to acquire and change information remotely
  • Set up Orchestrator to manage your workflow

Vote now for your fav VMworld sessions and Virt Blogs!



Time to get voting again. Eric Siebert just published the new list of Top Blogs to vote for. I’ve been honoured to have won a couple of years in a row, but with so many great blogs on the list that is never a given. Personally, I just put in my votes; I am not going to tell you who I voted for this year as I don’t want to influence anyone. I hope I produced enough useful content again this year for you to consider voting for me. I am not going to point you to my articles, as most of you will have your favourite articles and know what you appreciate and don’t appreciate. Vote here: http://vsphere-land.com/news/voting-now-open-for-top-vblog-2016.html

Also, voting for VMworld is open. I would like to ask you to consider voting for the sessions I will be part of. Here is the list of sessions; if they sound interesting, please consider voting:

  • Ask the Expert vBloggers [7515] (Rick Scherer, Chris Wahl, Chad Sakac, Derek Seaman, Duncan Epping)
    Back for its 9th year at VMworld, Ask the Experts returns with an awesome panel of the industry’s top bloggers. In this session there are no PowerPoints, no sales pitches and no rules! Experts in the industry are here to answer the audience’s questions while having some fun in the process. Bring your topic, anything from Software-Defined Data Center, End-User Computing, Cloud Native Applications to Hybrid Cloud…Storage, Networking, Security, Applications. No holds barred and no questions are off limits!
  • Enforcing a vSphere Cluster Design with PowerCLI Automation [8036] (Chris Wahl – Duncan Epping)
    The number of vSphere data center, cluster, storage, and network options available to an administrator is massive. Even specific features such as High Availability (HA) and the Distributed Resource Scheduler (DRS) have a ton of different configuration settings provided to meet the needs of your specific virtualized workloads. It can be a challenge, however, to audit the settings and understand why they were set in the first place, especially as other administrators join and leave your organization! Join VMware’s Duncan Epping, author of the infamous Cluster Deep Dive series, along with Rubrik’s Chris Wahl, PowerShell MVP and author of Networking for VMware Administrators, as they take a look at how to abstract vSphere configuration settings into declarative configuration files. Learn how to audit, track, and enforce consistent settings, with notes and comments, to ensure the availability and standardization of your vSphere environment.
  • Software Defined Storage @ VMware Primer [7650] (Lee Dilworth – Duncan Epping)
    In this session Lee and Duncan will give an overview of the different VMware Software Defined Storage initiatives and how these fit in to the broader SDDC picture. They will cover Virtual Volumes, Virtual SAN and the vSphere APIs for IO Filtering. For each of these 3 they will explain customer use cases and go over some of the basic concepts providing you with a good understanding of how to apply this to your environment.
  • A day in the life of a VSAN I/O [7875] (John Nicholson – Duncan Epping)
    In this session Duncan and John will discuss what a typical day looks like in the life of an I/O (on a VSAN based solution). How does network based RAID-1 or RAID-5 work? Where does checksumming occur? When and how are blocks deduplicated or compressed? What about caching, are there different layers? Can I control where blocks are stored? And how does all of this influence availability and performance of my I/O?
  • Hyperconverged Infrastructure Panel – Deep Dive Review of Solutions [7765] (Stu Miniman, Steve Poitras, Jesse St Laurent, Duncan Epping)
    The hyperconverged infrastructure market has exploded over the last few years. The good news is there’s lots of choice; the bad news is there’s lots of choice. How do you navigate the vendor and solution landscape? What are the key evaluation criteria? Join us for a moderated panel discussion with a distinguished group of hyperconvergence experts to examine and discuss the challenges IT organizations are facing with virtualization infrastructure and how hyperconverged infrastructure is playing a dramatic role in the evolution of the data center. The panel will answer your questions on topics such as what constitutes hyperconvergence in the converged infrastructure landscape; the business and technical benefits of this data center technology; scenarios for leveraging hyperconvergence in your organization; how to avoid traps that could derail a hyperconvergence implementation; and others posed by the audience.

Thanks and hope to see you guys at the event, either US or EMEA!

"Vote now for your fav VMworld sessions and Virt Blogs!" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.


Getting started with PowerCLI for vRealize Operations (vR OPs)

I recently took some time to explore the PowerCLI module for vRealize Operations Manager (vR Ops). This module was released with PowerCLI 6.0 R2 last year, and after a test drive I can say I am really impressed by the capabilities of this new module. A useful set of cmdlets is provided, and the entire vR Ops public API is accessible through this module.

In this blog post, I will cover some of the basics of the module and give some examples of usage including programmatically resolving an alert condition on a virtual machine. In these examples I am using PowerCLI 6.3 Release 1 with vRealize Operations Manager 6.2. I will follow this up with a more in-depth blog post explaining how the vR Ops API can be leveraged via PowerCLI.

To begin, the available cmdlets for the module (which is named “VMware.VimAutomation.vROps”) are shown below.
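
The original screenshot is not reproduced here, but you can generate the same listing yourself; this is a small sketch of doing so.

  # List the cmdlets shipped in the vRealize Operations module.
  Import-Module VMware.VimAutomation.vROps
  Get-Command -Module VMware.VimAutomation.vROps | Sort-Object Name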


These cmdlets are very handy for common use cases like pulling statistics, alerts, resource properties and other information. Beyond that, there is also a way to access the entire API for vR Ops, but we will cover that in a future post.

It’s easy enough to get started: just use the connection cmdlet to begin.
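
A minimal connection sketch, with a placeholder server name and credentials:

  # Connect to the vRealize Operations Manager instance.
  Connect-OMServer -Server vrops.example.local -User admin -Password 'VMware1!'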

Once connected to the vR Ops server, you can now start exploring. Looking up active alerts may be a great place to start. If you are new to PowerCLI, your best bet is to first run “get-help <cmdlet>” to give you a start on usage. Here is a listing of active critical alerts that are impacting Health.
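
The screenshot of that listing is not shown here, but a query along these lines should reproduce it; the -Status, -Criticality and -Impact parameter names are my assumptions, so confirm them with Get-Help Get-OMAlert.

  # Active, critical alerts that impact the Health badge (parameter names are assumptions).
  Get-OMAlert -Status Active -Criticality Critical -Impact Health |
      Select-Object Name, Type, Subtype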

Notice that there are columns for Type and Subtype, which can be used as input parameters for the Get-OMAlert cmdlet; there are also dedicated cmdlets for those specific parameters (Get-OMAlertType and Get-OMAlertSubtype). Using those cmdlets without input parameters returns a list of all valid types and subtypes on the server.
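
For example, running them with no parameters dumps the valid values:

  # Enumerate every alert type and subtype defined on the connected server.
  Get-OMAlertType
  Get-OMAlertSubType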

In addition to type and subtype, you can retrieve alerts using the –AlertDefinition parameter, and the Get-OMAlertDefinition cmdlet can be used to find the available alert definitions in the system. You can filter the output to show alert definitions of a given alert type and subtype; for example, below is the output for Network Alerts of the subtype Configuration.
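
Since the original screenshot is missing here, a hedged equivalent is to filter the definitions client-side; the Type and Subtype property names are assumed from the alert listing above.

  # Alert definitions for Network Alerts of the Configuration subtype
  # (client-side filtering; property names are assumptions).
  Get-OMAlertDefinition |
      Where-Object { $_.Type -eq "Network Alerts" -and $_.Subtype -eq "Configuration" }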

Alert definitions contain a lot of information that may be helpful. Here I show the output of the cmdlet using the –Name parameter with a wildcard.
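
For instance, a wildcard lookup by name (the pattern here is just an example):

  # Find alert definitions whose names match a wildcard pattern and show all properties.
  Get-OMAlertDefinition -Name "*swap*" | Format-List *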

Note the values for AdapterKind and ResourceKind properties. These can be used as input parameters as well. For example:
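
A sketch of that, assuming Get-OMAlertDefinition accepts -AdapterKind and -ResourceKind directly (verify the exact parameter set with Get-Help); the key values are also assumptions.

  # Alert definitions scoped to virtual machines from the vCenter adapter
  # (-AdapterKind/-ResourceKind usage and the key values are assumptions).
  Get-OMAlertDefinition -AdapterKind "VMWARE" -ResourceKind "VirtualMachine"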

What can you do with the alerts? If I pull a single alert instance into a variable, we can explore other details of the alert. Information on the status, event times and control state is available, along with other useful details.
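
For example, grabbing the first alert and dumping its properties:

  # Pull a single alert instance into a variable and inspect everything on it.
  $alert = Get-OMAlert | Select-Object -First 1
  $alert | Format-List *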

Using the Set-OMAlert cmdlet I can take or release ownership of an alert, suspend an alert for a period of time (in minutes) or cancel the alert. For example, I can take ownership and suspend the alert I stored in $alert above.
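
A sketch of that call; -TakeOwnership and -Confirm are referenced in the notes below, while the name of the suspend-duration parameter (-SuspendMinutes here) is my assumption.

  # Take ownership of the stored alert and suspend it for two hours.
  # -SuspendMinutes is an assumed parameter name; check Get-Help Set-OMAlert.
  $alert | Set-OMAlert -TakeOwnership -SuspendMinutes 120 -Confirm:$false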

A couple of things to note for this cmdlet: the –TakeOwnership switch assigns the currently connected user as the owner. Also, the example above shows the optional –Confirm parameter, but there is also a –WhatIf parameter to display the changes that would be made without committing them.

Stay tuned: the next post will discuss more vR Ops cmdlets and further automation with PowerCLI.


John Dias is a Staff Systems Engineer on VMware’s Solution Engineering and Technology team specializing in Cloud Management solutions.

John is a veteran IT professional with over 22 years of experience, most of that having been on the customer side running data center operations, data storage, virtual infrastructure and Unix environments for a major financial institution.

He normally blogs at storagegumbo.com and in his spare time he enjoys astronomy and astrophotography.

The post Getting started with PowerCLI for vRealize Operations (vR OPs) appeared first on VMware PowerCLI Blog.

Handling Change Correctly: Why the Software-Defined Approach Is Essential to the Security of Modern Enterprises

IT decision-makers are under "push and pull" pressure. Because technology has become a key element of overarching business strategies, the corresponding expectations must be met: faster, more adaptable, more agile, and at the same time more secure, more resilient and cost-effective.

These are demands that also highlight the threat posed by cyber attacks.

"Only security breaches are growing faster than the budgets for more cyber security," Pat Gelsinger emphasized in the VMworld keynote on August 28. NSA Deputy Director Richard H. Ledgett Jr. sees the danger as well and adds that, whether you are a company or an individual, "anyone who is on the internet must automatically expect cyber attacks from determined hackers."

Moreover, in times of complete connectivity, cloud data and social media, a serious security breach can have unforeseen consequences not only for a company, but above all for the following stakeholders:

1. The board

No one is safe from a cyber attack, no matter how high up a person sits in the company hierarchy. A look at the fallen share price of the British telecommunications provider TalkTalk after the major data breach in November 2015 shows how serious a cyber attack can be: Chief Executive Dido Harding fronted the company's response in the media and faced the crossfire of public opinion. Jack Dromey, shadow minister for the Labour Party, subsequently called for her resignation. Yet it could have hit any member of management, because a hack of this scale invariably brings accusations of negligence and far-reaching investigations.

2. Employees

Whether it is the loss of bonuses, salary cuts or even layoffs, falling share prices after a serious security breach affect an entire company. A hack can also affect employees in very different ways: the massive Sony Pictures hack, which exposed the data of thousands of employees, cost the company around eight million US dollars in ongoing legal proceedings with former employees. Sony Pictures subsequently paid around two million US dollars in compensation to employees who had incurred costs protecting their identities. A further 2.5 million US dollars was paid to cover actual damages, roughly 10,000 US dollars per employee.

3. Customers & partners

A compromised security system leads to a compromised data network and affects far more parties than just the company in question. Given the volume of data that companies now accumulate, customers and partners are vulnerable as well. That can have serious consequences: public humiliation when confidential medical data is published, or identity theft when financial information is stolen. For customers, data security is often a vital matter. Companies carry the responsibility of managing the data voluntarily entrusted to them as securely as possible.

So how can you ensure that these and other situations do not occur? How can IT leaders keep their companies reliable, secure and stable while at the same time having to act more agilely, more effectively and faster in order to stay competitive?

Accepting change

For a long time, IT leaders have tried to control change in order to keep it in check. But no successful company stands still: expansion, new employees, innovative products. There is no guarantee that a company will have the same structures tomorrow that it had yesterday, a fact that quickly makes traditional security measures obsolete. Change must no longer be treated restrictively in order to increase security. Security must be just as dynamic and agile as the rest of the company so that change can happen.

A software-defined approach is the only viable solution here. It allows cyber security to be woven into all of a company's structures rather than merely wrapped around them. Or, to put it differently: the traditional approach to protecting a bicycle was to chain it up with a lock. If a thief cracks the lock, the software-defined approach now makes it possible to turn the wheels into rectangles, so riding away becomes impossible.

Fundamentally, then, the task of a software-defined approach is to enable change. It prepares a company for change and allows agile responses. Cyber crime does not stand still and is constantly evolving. Many companies with traditional security systems are no longer able to adapt in time. This is a problem that has to be recognized, not only to save your own skin, but also that of your customers and partners.

Follow @VMware_DE or our blog for more information on the software-defined approach and IT security.