10 extraordinary places to stay in Tirol that you may not have heard of yet

I would like to try a somewhat different kind of recommendation list, covering holiday apartments, holiday homes and one hotel: great hosts you may not yet have come across on “best of” lists. In any case, these places impress with their location and amenities, and all of them stand out for remarkably high guest satisfaction on review portals, close to the “perfection threshold”. That is why they earn a place among the top 10 “most extraordinary places to stay”. And the best part: most of them are also surprisingly affordable.

OpenSSH Removes SSHv1 Support

In a series of commits starting here and ending with this one, Damien Miller completed the removal of all support for the now-historic SSHv1 protocol from OpenSSH.

The final commit message, for the commit that removes the SSHv1 related regression tests, reads:

Eliminate explicit specification of protocol in tests and loops over protocol. We only support SSHv2 now.

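For anyone still carrying protocol-1-era settings in old configuration files, a quick check along these lines will flag them (an illustrative sketch: the directives listed are protocol-1-related options, and the file paths assume a typical installation):

grep -nE '^\s*(Protocol|RSAAuthentication|ServerKeyBits|KeyRegenerationInterval)' /etc/ssh/sshd_config /etc/ssh/ssh_config

Any matches can simply be removed now that OpenSSH speaks SSHv2 only.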

pfSense 2.5 and AES-NI

We’re starting the process toward pfSense software release 2.3.4. pfSense software release 2.4 is close as well, and will bring a number of improvements: UEFI, translations to at least five languages, ZFS, FreeBSD 11 base, a new login page, OpenVPN 2.4 and more. pfSense version 2.4 requires a 64-bit Intel or AMD CPU, and nanobsd images are no longer a part of pfSense as of version 2.4.

Getting started with VIC v1.1

VMware recently released vSphere Integrated Containers v1.1, and I got an opportunity to give it a whirl. While I’ve done quite a bit of work with VIC in the past, a number of things have changed, especially in the command line. What I’ve decided to do in this post is highlight some of the new command-line options that are necessary to deploy the VCH, the Virtual Container Host. Once the VCH is deployed, you have the docker API endpoint to start deploying your “containers as VMs”. Before diving into that, however, I do want to clarify one point that comes up quite a bit: VIC v1.1 is not using VM fork/instant clone. There are still some limitations to using instant clone, and the VIC team decided not to pursue this option just yet, as they wished to leverage the full set of vSphere core features. Thanks to Massimo for the clarification. Now onto deploying my VCH with VIC v1.1.

First things first – VIC now comes as an OVA. Roll it out like any other OVA. Once deployed, you can point a web browser at the OVA and pull down the vic-machine components directly to deploy the VCH(s).

I have gone with deploying the VCH from a Windows environment using vic-machine. If you want to see the steps involved in getting a Windows environment ready for VIC, check out this post here from Cody over at the humble lab. Here is the help output to get us started.

C:\Users\chogan\Downloads\vic>vic-machine-windows.exe -h

NAME:
   vic-machine-windows.exe - Create and manage Virtual Container Hosts

USAGE:
   vic-machine-windows.exe [global options] command [command options] [arguments...]

VERSION:
   v1.1.0-9852-e974a51

COMMANDS:
     create   Deploy VCH
     delete   Delete VCH and associated resources
     ls       List VCHs
     inspect  Inspect VCH
     upgrade  Upgrade VCH to latest version
     version  Show VIC version information
     debug    Debug VCH
     update   Modify configuration
     help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --help, -h     show help
   --version, -v  print the version

C:\Users\chogan\Downloads\vic>

Let’s see if I can at least validate against my vSphere environment by trying to list any existing VCHs:

C:\Users\chogan\Downloads\vic>vic-machine-windows.exe ls --target vcsa-06.rainpole.com \
--user administrator@vsphere.local --password xxx

Apr 28 2017 12:38:04.402+01:00 INFO  ### Listing VCHs ####
Apr 28 2017 12:38:04.491+01:00 ERROR Failed to verify certificate for target=vcsa-06.rainpole.com \
(thumbprint=4B:A0:D1:84:92:DD:BD:38:07:E3:38:01:4B:0C:F1:14:E7:5D:5B:00)
Apr 28 2017 12:38:04.494+01:00 ERROR List cannot continue - failed to create validator: x509: \
certificate signed by unknown authority
Apr 28 2017 12:38:04.495+01:00 ERROR --------------------
Apr 28 2017 12:38:04.496+01:00 ERROR vic-machine-windows.exe ls failed: list failed

Well, that did not work. I need to include the thumbprint of the vCenter server in the command:

C:\Users\chogan\Downloads\vic>vic-machine-windows.exe ls --target vcsa-06.rainpole.com \
--user administrator@vsphere.local --password xxx --thumbprint \
4B:A0:D1:84:92:DD:BD:38:07:E3:38:01:4B:0C:F1:14:E7:5D:5B:00

Apr 28 2017 12:39:37.898+01:00 INFO  ### Listing VCHs ####
Apr 28 2017 12:39:38.109+01:00 INFO  Validating target

ID        PATH        NAME        VERSION        UPGRADE STATUS

Now the command is working, but I don’t have any existing VCHs. Let’s create one. There are a lot of options included in this command, since we are providing not only VCH details, but also network details for the “containers as VMs” that we will deploy later on:

C:\Users\chogan\Downloads\vic>vic-machine-windows.exe create --target vcsa-06.rainpole.com \
--user administrator@vsphere.local --password xxxx --name corVCH01 \
--public-network "VM Network" --bridge-network BridgeDPG --bridge-network-range "192.168.100/16" \
--dns-server 10.27.51.252 --tls-cname=*.rainpole.com --no-tlsverify --compute-resource Cluster \
--thumbprint 4B:A0:D1:84:92:DD:BD:38:07:E3:38:01:4B:0C:F1:14:E7:5D:5B:00
Apr 28 2017 12:59:31.479+01:00 INFO  ### Installing VCH ####
Apr 28 2017 12:59:31.481+01:00 WARN  Using administrative user for VCH operation - use --ops-user to improve security (see -x for advanced help)
Apr 28 2017 12:59:31.483+01:00 ERROR Common Name must be provided when generating certificates for client authentication:
Apr 28 2017 12:59:31.485+01:00 INFO    --tls-cname=<FQDN or static IP> # for the appliance VM
Apr 28 2017 12:59:31.487+01:00 INFO    --tls-cname=<*.yourdomain.com>  # if DNS has entries in that form for DHCP addresses (less secure)
Apr 28 2017 12:59:31.492+01:00 INFO    --no-tlsverify                  # disables client authentication (anyone can connect to the VCH)
Apr 28 2017 12:59:31.493+01:00 INFO    --no-tls                        # disables TLS entirely
Apr 28 2017 12:59:31.494+01:00 INFO
Apr 28 2017 12:59:31.496+01:00 ERROR Create cannot continue: unable to generate certificates
Apr 28 2017 12:59:31.498+01:00 ERROR --------------------
Apr 28 2017 12:59:31.499+01:00 ERROR vic-machine-windows.exe create failed: provide Common Name for server certificate

Unfortunately, it doesn’t like the TLS part of the command. This appears to be a known issue: it seems the TLS options should be among the first specified on the command line. Let’s move them before some of the other arguments:

C:\Users\chogan\Downloads\vic>vic-machine-windows.exe create --target vcsa-06.rainpole.com \
--user "administrator@vsphere.local" --password "xxx" --no-tlsverify --name corVCH01 \
--public-network "VM Network" --bridge-network BridgeDPG --bridge-network-range "192.168.100.0/16" \
--dns-server 10.27.51.252 --compute-resource Cluster \
--thumbprint 4B:A0:D1:84:92:DD:BD:38:07:E3:38:01:4B:0C:F1:14:E7:5D:5B:00
Apr 28 2017 13:05:45.623+01:00 INFO  ### Installing VCH ####
Apr 28 2017 13:05:45.625+01:00 WARN  Using administrative user for VCH operation - use --ops-user to improve security (see -x for advanced help)
Apr 28 2017 13:05:45.627+01:00 INFO  Generating self-signed certificate/key pair - private key in corVCH01\server-key.pem
Apr 28 2017 13:05:46.162+01:00 WARN  Configuring without TLS verify - certificate-based authentication disabled
Apr 28 2017 13:05:46.336+01:00 INFO  Validating supplied configuration
Apr 28 2017 13:05:46.432+01:00 INFO  Suggesting valid values for --image-store based on "*"
Apr 28 2017 13:05:46.438+01:00 INFO  Suggested values for --image-store:
Apr 28 2017 13:05:46.439+01:00 INFO    "vsanDatastore (1)"
Apr 28 2017 13:05:46.441+01:00 INFO    "isilion-nfs-01"
Apr 28 2017 13:05:46.463+01:00 INFO  vDS configuration OK on "BridgeDPG"
Apr 28 2017 13:05:46.464+01:00 ERROR Firewall check SKIPPED
Apr 28 2017 13:05:46.466+01:00 ERROR   datastore not set
Apr 28 2017 13:05:46.467+01:00 ERROR License check SKIPPED
Apr 28 2017 13:05:46.468+01:00 ERROR   datastore not set
Apr 28 2017 13:05:46.469+01:00 ERROR DRS check SKIPPED
Apr 28 2017 13:05:46.471+01:00 ERROR   datastore not set
Apr 28 2017 13:05:46.472+01:00 ERROR Compatibility check SKIPPED
Apr 28 2017 13:05:46.473+01:00 ERROR   datastore not set
Apr 28 2017 13:05:46.475+01:00 ERROR --------------------
Apr 28 2017 13:05:46.476+01:00 ERROR datastore empty
Apr 28 2017 13:05:46.477+01:00 ERROR Specified bridge network range is not large enough for the default bridge network size. --bridge-network-range must be /16 or larger network.
Apr 28 2017 13:05:46.479+01:00 ERROR Firewall check SKIPPED
Apr 28 2017 13:05:46.480+01:00 ERROR License check SKIPPED
Apr 28 2017 13:05:46.482+01:00 ERROR DRS check SKIPPED
Apr 28 2017 13:05:46.484+01:00 ERROR Compatibility check SKIPPED
Apr 28 2017 13:05:46.488+01:00 ERROR Create cannot continue: configuration validation failed
Apr 28 2017 13:05:46.490+01:00 ERROR --------------------
Apr 28 2017 13:05:46.491+01:00 ERROR vic-machine-windows.exe create failed: validation of configuration failed

The TLS issue now seems to be addressed, but it appears I omitted a required field, --image-store. This is where the container images will be stored, and it should be set to one of the available datastores in the vSphere environment. The output even suggests some valid options: either the vSAN datastore or an NFS datastore, both of which are available to all hosts in the cluster.

C:\Users\chogan\Downloads\vic>vic-machine-windows.exe create --target vcsa-06.rainpole.com \
--user "administrator@vsphere.local" --password "xxx" --no-tlsverify --name corVCH01 \
--image-store isilion-nfs-01 --public-network "VM Network" --bridge-network BridgeDPG \
--bridge-network-range "192.168.100.0/16" --dns-server 10.27.51.252 --compute-resource Cluster \
--thumbprint 4B:A0:D1:84:92:DD:BD:38:07:E3:38:01:4B:0C:F1:14:E7:5D:5B:00
Apr 28 2017 13:09:17.732+01:00 INFO  ### Installing VCH ####
Apr 28 2017 13:09:17.736+01:00 WARN  Using administrative user for VCH operation - use --ops-user to improve security (see -x for advanced help)
Apr 28 2017 13:09:17.739+01:00 INFO  Loaded server certificate corVCH01\server-cert.pem
Apr 28 2017 13:09:17.741+01:00 WARN  Configuring without TLS verify - certificate-based authentication disabled
Apr 28 2017 13:09:17.914+01:00 INFO  Validating supplied configuration
Apr 28 2017 13:09:18.027+01:00 INFO  vDS configuration OK on "BridgeDPG"
Apr 28 2017 13:09:18.053+01:00 INFO  Firewall status: DISABLED on "/DC/host/Cluster/esxi-dell-i.rainpole.com"
Apr 28 2017 13:09:18.078+01:00 INFO  Firewall status: ENABLED on "/DC/host/Cluster/esxi-dell-j.rainpole.com"
Apr 28 2017 13:09:18.101+01:00 INFO  Firewall status: ENABLED on "/DC/host/Cluster/esxi-dell-k.rainpole.com"
Apr 28 2017 13:09:18.130+01:00 INFO  Firewall status: ENABLED on "/DC/host/Cluster/esxi-dell-l.rainpole.com"
Apr 28 2017 13:09:18.142+01:00 INFO  Firewall configuration OK on hosts:
Apr 28 2017 13:09:18.144+01:00 INFO     "/DC/host/Cluster/esxi-dell-i.rainpole.com"
Apr 28 2017 13:09:18.145+01:00 INFO     "/DC/host/Cluster/esxi-dell-j.rainpole.com"
Apr 28 2017 13:09:18.147+01:00 INFO     "/DC/host/Cluster/esxi-dell-k.rainpole.com"
Apr 28 2017 13:09:18.149+01:00 INFO     "/DC/host/Cluster/esxi-dell-l.rainpole.com"
Apr 28 2017 13:09:18.188+01:00 INFO  License check OK on hosts:
Apr 28 2017 13:09:18.190+01:00 INFO    "/DC/host/Cluster/esxi-dell-i.rainpole.com"
Apr 28 2017 13:09:18.191+01:00 INFO    "/DC/host/Cluster/esxi-dell-j.rainpole.com"
Apr 28 2017 13:09:18.192+01:00 INFO    "/DC/host/Cluster/esxi-dell-k.rainpole.com"
Apr 28 2017 13:09:18.194+01:00 INFO    "/DC/host/Cluster/esxi-dell-l.rainpole.com"
Apr 28 2017 13:09:18.205+01:00 INFO  DRS check OK on:
Apr 28 2017 13:09:18.206+01:00 INFO    "/DC/host/Cluster"
Apr 28 2017 13:09:18.234+01:00 INFO
Apr 28 2017 13:09:18.346+01:00 INFO  Creating virtual app "corVCH01"
Apr 28 2017 13:09:18.369+01:00 INFO  Creating appliance on target
Apr 28 2017 13:09:18.374+01:00 INFO  Network role "client" is sharing NIC with "public"
Apr 28 2017 13:09:18.375+01:00 INFO  Network role "management" is sharing NIC with "public"
Apr 28 2017 13:09:19.301+01:00 INFO  Uploading images for container
Apr 28 2017 13:09:19.307+01:00 INFO     "bootstrap.iso"
Apr 28 2017 13:09:19.309+01:00 INFO     "appliance.iso"
Apr 28 2017 13:09:25.346+01:00 INFO  Waiting for IP information
Apr 28 2017 13:09:42.869+01:00 INFO  Waiting for major appliance components to launch
Apr 28 2017 13:09:42.918+01:00 INFO  Obtained IP address for client interface: "10.27.51.38"
Apr 28 2017 13:09:42.921+01:00 INFO  Checking VCH connectivity with vSphere target
Apr 28 2017 13:10:42.946+01:00 WARN  Could not run VCH vSphere API target check due to ServerFaultCode: A general system error occurred: vix error codes = (3016, 0).
but the VCH may still function normally
Apr 28 2017 13:12:25.346+01:00 ERROR Connection failed with error: i/o timeout
Apr 28 2017 13:12:25.346+01:00 INFO  Docker API endpoint check failed: failed to connect to https://10.27.51.38:2376/info: i/o timeout
Apr 28 2017 13:12:25.347+01:00 INFO  Collecting e1ea92eb-ac80-4b33-88cc-831b35fd8bab vpxd.log
Apr 28 2017 13:12:25.418+01:00 INFO     API may be slow to start - try to connect to API after a few minutes:
Apr 28 2017 13:12:25.428+01:00 INFO             Run command: docker -H 10.27.51.38:2376 --tls info
Apr 28 2017 13:12:25.429+01:00 INFO             If command succeeds, VCH is started. If command fails, VCH failed to install - see documentation for troubleshooting.
Apr 28 2017 13:12:25.431+01:00 ERROR --------------------
Apr 28 2017 13:12:25.431+01:00 ERROR vic-machine-windows.exe create failed: Creating VCH exceeded time limit of 3m0s. Please increase the timeout using --timeout to accommodate for a busy vSphere target
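
As an aside, if the vSphere target really is just slow or busy, the error above suggests simply raising the deployment timeout with --timeout, for example (with the rest of the arguments unchanged; the 10m0s value is only an illustrative assumption):

C:\Users\chogan\Downloads\vic>vic-machine-windows.exe create --timeout 10m0s ...

In this case, though, the root cause turned out to be something else.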

I traced this to an issue with DNS. It seems this issue can arise if the VCH cannot resolve some of the vSphere entities (vCenter Server, ESXi). Since I was using DHCP for my VCH, I did not need to specify an IP address, subnet mask or DNS server, yet the command above included a DNS server entry. So I simply removed the DNS reference and ran the command again without it (I also included an option to store any volumes created in a particular location, specified with --volume-store):

C:\Users\chogan\Downloads\vic>vic-machine-windows.exe create --name corVCH01 --compute-resource Cluster \
--target vcsa-06.rainpole.com --user administrator@vsphere.local --password xxx \
--thumbprint 4B:A0:D1:84:92:DD:BD:38:07:E3:38:01:4B:0C:F1:14:E7:5D:5B:00 --no-tlsverify \
--image-store isilion-nfs-01 --public-network "VM Network" --bridge-network BridgeDPG \
--bridge-network-range "192.168.100.0/16" --volume-store "isilion-nfs-01/VIC:corvols"

Apr 28 2017 13:46:40.671+01:00 INFO  ### Installing VCH ####
Apr 28 2017 13:46:40.672+01:00 WARN  Using administrative user for VCH operation - use --ops-user to improve security (see -x for advanced help)
Apr 28 2017 13:46:40.697+01:00 INFO  Loaded server certificate corVCH01\server-cert.pem
Apr 28 2017 13:46:40.699+01:00 WARN  Configuring without TLS verify - certificate-based authentication disabled
Apr 28 2017 13:46:40.873+01:00 INFO  Validating supplied configuration
Apr 28 2017 13:46:40.991+01:00 INFO  vDS configuration OK on "BridgeDPG"
Apr 28 2017 13:46:41.018+01:00 INFO  Firewall status: DISABLED on "/DC/host/Cluster/esxi-dell-i.rainpole.com"
Apr 28 2017 13:46:41.044+01:00 INFO  Firewall status: ENABLED on "/DC/host/Cluster/esxi-dell-j.rainpole.com"
Apr 28 2017 13:46:41.071+01:00 INFO  Firewall status: ENABLED on "/DC/host/Cluster/esxi-dell-k.rainpole.com"
Apr 28 2017 13:46:41.097+01:00 INFO  Firewall status: ENABLED on "/DC/host/Cluster/esxi-dell-l.rainpole.com"
Apr 28 2017 13:46:41.109+01:00 INFO  Firewall configuration OK on hosts:
Apr 28 2017 13:46:41.111+01:00 INFO     "/DC/host/Cluster/esxi-dell-i.rainpole.com"
Apr 28 2017 13:46:41.112+01:00 INFO     "/DC/host/Cluster/esxi-dell-j.rainpole.com"
Apr 28 2017 13:46:41.113+01:00 INFO     "/DC/host/Cluster/esxi-dell-k.rainpole.com"
Apr 28 2017 13:46:41.115+01:00 INFO     "/DC/host/Cluster/esxi-dell-l.rainpole.com"
Apr 28 2017 13:46:41.331+01:00 INFO  License check OK on hosts:
Apr 28 2017 13:46:41.333+01:00 INFO    "/DC/host/Cluster/esxi-dell-i.rainpole.com"
Apr 28 2017 13:46:41.334+01:00 INFO    "/DC/host/Cluster/esxi-dell-j.rainpole.com"
Apr 28 2017 13:46:41.335+01:00 INFO    "/DC/host/Cluster/esxi-dell-k.rainpole.com"
Apr 28 2017 13:46:41.337+01:00 INFO    "/DC/host/Cluster/esxi-dell-l.rainpole.com"
Apr 28 2017 13:46:41.347+01:00 INFO  DRS check OK on:
Apr 28 2017 13:46:41.350+01:00 INFO    "/DC/host/Cluster"
Apr 28 2017 13:46:41.384+01:00 INFO
Apr 28 2017 13:46:41.493+01:00 INFO  Creating virtual app "corVCH01"
Apr 28 2017 13:46:41.521+01:00 INFO  Creating directory [isilion-nfs-01] VIC
Apr 28 2017 13:46:41.527+01:00 INFO  Datastore path is [isilion-nfs-01] VIC
Apr 28 2017 13:46:41.528+01:00 INFO  Creating appliance on target
Apr 28 2017 13:46:41.533+01:00 INFO  Network role "client" is sharing NIC with "public"
Apr 28 2017 13:46:41.537+01:00 INFO  Network role "management" is sharing NIC with "public"
Apr 28 2017 13:46:42.515+01:00 INFO  Uploading images for container
Apr 28 2017 13:46:42.517+01:00 INFO     "bootstrap.iso"
Apr 28 2017 13:46:42.518+01:00 INFO     "appliance.iso"
Apr 28 2017 13:46:48.425+01:00 INFO  Waiting for IP information
Apr 28 2017 13:47:03.785+01:00 INFO  Waiting for major appliance components to launch
Apr 28 2017 13:47:03.860+01:00 INFO  Obtained IP address for client interface: "10.27.51.41"
Apr 28 2017 13:47:03.862+01:00 INFO  Checking VCH connectivity with vSphere target
Apr 28 2017 13:47:03.935+01:00 INFO  vSphere API Test: https://vcsa-06.rainpole.com vSphere API target responds as expected
Apr 28 2017 13:47:08.483+01:00 INFO  Initialization of appliance successful
Apr 28 2017 13:47:08.484+01:00 INFO
Apr 28 2017 13:47:08.485+01:00 INFO  VCH Admin Portal:
Apr 28 2017 13:47:08.486+01:00 INFO  https://10.27.51.41:2378
Apr 28 2017 13:47:08.487+01:00 INFO
Apr 28 2017 13:47:08.489+01:00 INFO  Published ports can be reached at:
Apr 28 2017 13:47:08.490+01:00 INFO  10.27.51.41
Apr 28 2017 13:47:08.491+01:00 INFO
Apr 28 2017 13:47:08.492+01:00 INFO  Docker environment variables:
Apr 28 2017 13:47:08.493+01:00 INFO  DOCKER_HOST=10.27.51.41:2376
Apr 28 2017 13:47:08.499+01:00 INFO
Apr 28 2017 13:47:08.500+01:00 INFO  Environment saved in corVCH01/corVCH01.env
Apr 28 2017 13:47:08.502+01:00 INFO
Apr 28 2017 13:47:08.503+01:00 INFO  Connect to docker:
Apr 28 2017 13:47:08.504+01:00 INFO  docker -H 10.27.51.41:2376 --tls info
Apr 28 2017 13:47:08.506+01:00 INFO  Installer completed successfully

Success! I now have my docker endpoint, and I can provide this to my developers for the creation of “containers as VMs”. Let’s see if it works with a quick check/test:

C:\Users\chogan\Downloads\vic>docker -H 10.27.51.41:2376 --tls info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: v1.1.0-9852-e974a51
Storage Driver: vSphere Integrated Containers v1.1.0-9852-e974a51 Backend Engine
VolumeStores: corvols
vSphere Integrated Containers v1.1.0-9852-e974a51 Backend Engine: RUNNING
VCH CPU limit: 155936 MHz
VCH memory limit: 423.9 GiB
VCH CPU usage: 0 MHz
VCH memory usage: 5.028 GiB
VMware Product: VMware vCenter Server
VMware OS: linux-x64
VMware OS version: 6.5.0
Plugins:
Volume: vsphere
Network: bridge
Swarm: inactive
Operating System: linux-x64
OSType: linux-x64
Architecture: x86_64
CPUs: 155936
Total Memory: 423.9GiB
ID: vSphere Integrated Containers
Docker Root Dir:
Debug Mode (client): false
Debug Mode (server): false
Registry: registry-1.docker.io
Experimental: false
Live Restore Enabled: false

C:\Users\chogan\Downloads\vic>

That all seems good. Let’s run my first container:

C:\Users\chogan\Downloads\vic>docker -H 10.27.51.41:2376 --tls run -it busybox
Unable to find image 'busybox:latest' locally
Pulling from library/busybox
7520415ce762: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:32f093055929dbc23dec4d03e09dfe971f5973a9ca5cf059cbfb644c206aa83f
Status: Downloaded newer image for library/busybox:latest
/ #
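
From a second command prompt, the running busybox container should also show up against the same endpoint with the usual docker commands, for example (output not shown here):

C:\Users\chogan\Downloads\vic>docker -H 10.27.51.41:2376 --tls ps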

Excellent. Now a few other things to point out with VIC 1.1. You might remember features like Admiral and Harbor, which I discussed in the past. These are now completely embedded. Simply point your browser at port 8282 of the IP address of the VIC OVA that you previously deployed, and you will get Admiral. This can be used for the orchestrated deployment of “container as VM” templates. These templates can be retrieved from either Docker Hub or your own local registry for VIC, i.e. Harbor.

And to access Harbor, simply click on the “Registry” field at the top of the navigation screen.

You can look back on my previous posts on how to use Admiral and Harbor for orchestrated deployment and registry respectively. Let’s finish this post with one last command, which is the command I started with to list VCHs:

C:\Users\chogan\Downloads\vic>vic-machine-windows.exe ls --target vcsa-06.rainpole.com \
--user administrator@vsphere.local --password VMware123! \
--thumbprint 4B:A0:D1:84:92:DD:BD:38:07:E3:38:01:4B:0C:F1:14:E7:5D:5B:00
May  2 2017 11:13:09.002+01:00 INFO  ### Listing VCHs ####
May  2 2017 11:13:09.178+01:00 INFO  Validating target

ID           PATH                              NAME            VERSION                    UPGRADE STATUS
vm-36        /DC/host/Cluster/Resources        corVCH01        v1.1.0-9852-e974a51        Up to date

C:\Users\chogan\Downloads\vic>

Now my VCH is listed.

Again, I’m only touching the surface of what VIC can do for you. If you want to give your developers the ability to use containers, but wish to maintain visibility into container resources, networking, storage, CPU, memory, etc., then maybe VIC is what you need. I’ll try to do some more work with VIC 1.1 over the coming weeks. Hopefully this is enough to get you started.

The post Getting started with VIC v1.1 appeared first on CormacHogan.com.

vSAN 6.6: Manual vs Automatic Disk Claim Mode

I received this question on Manual vs Automatic disk claim mode in vSAN 6.6. Someone had upgraded a cluster from 6.2 to 6.6 and wanted to add a second cluster. They noticed that during the creation of the new cluster there was no option to select “automatic vs manual”.

I think a blog post will be published that explains the reasoning behind this, but I figured I would share some of it beforehand so you don’t end up looking for something that does not exist. In vSAN 6.6 the “Automatic” option, which automatically creates disk groups for you, has disappeared. The reason is that we see the world moving to all-flash rather fast. With all-flash it is difficult to differentiate between the capacity and cache devices, which is why, in previous versions of vSphere/vSAN, you already had to select the devices yourself for an all-flash configuration. With 6.6 we removed the “automatic” option because we also recognized that when there are multiple disk groups, and disk controllers, disk locations and so on need to be taken into account, it becomes even more complex to form disk groups automatically. In our experience most customers preferred to maintain control and had this configured to “manual” by default anyway, so the option was removed.
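
For reference, disk groups are claimed manually either through the vSphere Client or from the ESXi command line. A minimal sketch of the CLI route (the device names below are placeholders, not real identifiers):

# -s names the cache-tier device, -d a capacity-tier device (repeat -d to add more capacity disks)
esxcli vsan storage add -s naa.CACHE_DEVICE_ID -d naa.CAPACITY_DEVICE_ID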

I hope that clarifies things. I will add a link to the article explaining it once it is published.

"vSAN 6.6: Manual vs Automatic Disk Claim Mode" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.

Kubernetes Anywhere and PhotonOS Template

Experimenting with Kubernetes to orchestrate and manage containers? If you are like me and already have a lot invested in vSphere (time, infra, knowledge), you might be excited to use Kubernetes Anywhere to deploy it quickly. I won’t rewrite the instructions found here:

https://github.com/kubernetes/kubernetes-anywhere

It works with

  • Google Compute Engine
  • Azure
  • vSphere

The vSphere option uses the Photon OS OVA to spin up the container hosts and managers, so you can try it out easily with very little background in containers. That is dangerous, as you will find yourself neck deep in new things to learn.

Don’t turn on the template!

If you are like me and *skim* instructions, you could be in for hours of “Why do all my nodes have the same IP?” When you power on the Photon OS template, the startup sequence generates a machine ID (and MAC address). So even though I powered it back off, the cloning process was producing identical VMs for my Kubernetes cluster. For those not hip to networking: this is bad for communication.

Also, don’t try to be a good VMware admin and convert that VM to a VM Template. The Kubernetes Anywhere script won’t find it.

If you are like me and skip a few lines of reading (it happens, right?), make sure to check out this documentation on Photon OS. It will help get you on the right track.

https://github.com/vmware/photon/blob/master/docs/photon-admin-guide.md#clearing-the-machine-id-of-a-cloned-instance-for-dhcp
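
The gist is that each cloned VM needs its duplicated machine ID cleared and regenerated so that DHCP hands out distinct addresses. Roughly along these lines (a sketch only; the linked guide has the exact steps for your Photon OS version):

# run inside each cloned Photon OS VM
echo -n > /etc/machine-id        # clear the duplicated machine ID
systemd-machine-id-setup         # regenerate a fresh one (or simply reboot)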

This is clearly marked in the documentation now.