Red Hat OpenShift Virtualization in nested VMware vSphere Cluster

In this post, I'll go through the process of running virtual machines on OpenShift Virtualization in a nested setup inside VMware vSphere. This requires both ESXi hosts and a vCenter, each on version 6.7U3 or later.

Nested virtualization is a configuration where the virtual machine running the OpenShift node on ESXi exposes the host's hardware virtualization capabilities to its guest OS, so new virtual machines can be created inside it. This can be useful as a technology showcase, a lab environment, or a PoC. OpenShift Virtualization VMs are based on KVM, the virtualization layer in the Linux kernel, and are implemented on top of the KubeVirt project.

Disclaimer: nested virtualization is not officially supported on any platform; OpenShift Virtualization is meant to run on bare-metal nodes. Please do not use this setup in production.

First, create an OpenShift cluster using the VMware IPI (Installer-Provisioned Infrastructure) method to leverage its quick and easy deployment. Going through this in detail is outside the scope of this post, but the documentation covers the process. At the end, the installer will have provisioned and configured all virtual machines on your vCenter, and the cluster will be serving both the API and the Console.

Now, enable nested virtualization in the OpenShift template VM. Go to your vCenter or ESXi console and edit the template VM, usually named clustername-clusterid-rhcos; in my case it's "ocp-9s46k-rhcos".

You need to expand the CPU section and check “Expose hardware assisted virtualization to the guest OS”:

Just click OK, and all nodes created from the template in the future will have nested virtualization capability.
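If you prefer to script this instead of clicking through the UI, that checkbox corresponds (to the best of my knowledge; worth verifying on your vSphere version) to this setting in the template's .vmx configuration:

vhv.enable = "TRUE"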

You also need to make sure your Distributed Switch port group or vSwitch has "Promiscuous mode" and "Forged transmits" enabled in the Security tab. This is required because the node will host internal VMs with their own MAC addresses. If you want to go deeper, check this article from virtuallyGhetto explaining an option that avoids enabling promiscuous mode.

Finally, create additional worker nodes to host the virtual machines. Since the template has nested virtualization enabled, OpenShift will detect KVM capability on these nodes and schedule the created VMs onto them.

It’s better to create a new MachineSet so it uses specifically sized nodes and carries labels you can later use to place the VMs. Create the YAML below, adjusting the cluster name and ID accordingly, and size the nodes’ CPU and memory as needed:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: ocp-9s46k
  name: ocp-9s46k-virt
  namespace: openshift-machine-api
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: ocp-9s46k
      machine.openshift.io/cluster-api-machineset: ocp-9s46k-virt
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: ocp-9s46k
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: ocp-9s46k-virt
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/virt: ""
      providerSpec:
        value:
          apiVersion: vsphereprovider.openshift.io/v1beta1
          credentialsSecret:
            name: vsphere-cloud-credentials
          diskGiB: 60
          kind: VSphereMachineProviderSpec
          memoryMiB: 8192
          metadata:
            creationTimestamp: null
          network:
            devices:
            - networkName: DPortGroup
          numCPUs: 4
          numCoresPerSocket: 2
          snapshot: ""
          template: ocp-9s46k-rhcos
          userDataSecret:
            name: worker-user-data
          workspace:
            datacenter: Datacenter
            datastore: VMs-Datastore-01
            folder: /Datacenter/vm/ocp-9s46k
            resourcePool: /Datacenter/host/Cluster/Resources
            server: vcenter.domain.com

$ oc apply -f machineset-virt.yaml

This will create two nodes, each with 4 vCPUs, 8 GB RAM, and a 60 GB disk. Wait until they show up in oc get nodes.

Now deploy the Operator and create the virtualization cluster. First go to the “Operators -> OperatorHub” menu and install “OpenShift Virtualization” with the default settings. It will create the openshift-cnv namespace.

Then create the HyperConverged cluster: in “Installed Operators” -> “OpenShift Virtualization”, open the “OpenShift Virtualization Operator Deployment” tab and click “Create HyperConverged Cluster”, using the default settings.

After a couple of minutes, once the operator has deployed all required pods to your nodes, you should have a new entry in the console at “Workloads -> Virtualization” where you can manage the VMs.

First, configure the cluster to persist VM MAC addresses, so it won't allocate a random MAC on every VM restart; otherwise the VMs would get different IPs from DHCP across reboots.

oc label namespace openshift-cnv mutatevirtualmachines.kubemacpool.io=allocate
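If you manage your cluster declaratively, the same label can be expressed in the Namespace manifest instead of via oc label (a sketch; the label key and value are taken from the command above):

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
  labels:
    mutatevirtualmachines.kubemacpool.io: allocate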

Then configure the nodes with a network bridge so the VMs can be connected directly to the node’s VLAN. Create a YAML file and apply it to the cluster:

---
apiVersion: nmstate.io/v1alpha1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-ens192-policy
spec:
  desiredState:
    interfaces:
    - name: br1
      description: Linux bridge with ens192 as a port
      type: linux-bridge
      state: up
      ipv4:
        dhcp: true
        enabled: true
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: ens192
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-network
  namespace: openshift-cnv
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "bridge-network",
    "plugins": [
      {
        "type": "cnv-bridge",
        "bridge": "br1",
        "ipam": {}
      },
      {
        "type": "cnv-tuning"
      }
    ]
  }'

$ oc apply -f network-config.yaml

OpenShift Virtualization works best with RAW images, so if you have a QCOW2 image, it’s recommended to convert it to RAW first. Here we will download Cirros, a tiny test VM image, and CentOS:

# Cirros
wget http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img
qemu-img convert cirros-0.5.1-x86_64-disk.img cirros-0.5.1-x86_64-disk.raw

# CentOS 8
wget https://cloud.centos.org/centos/8/x86_64/images/CentOS-8-GenericCloud-8.2.2004-20200611.2.x86_64.qcow2
qemu-img convert CentOS-8-GenericCloud-8.2.2004-20200611.2.x86_64.qcow2 CentOS-8-GenericCloud-8.2.2004-20200611.2.x86_64.raw

These images are compatible with OpenStack, so you can get more at https://docs.openstack.org/image-guide/obtain-images.html.

Upload the images to OpenShift to be used as template disks with the virtctl utility from https://github.com/kubevirt/kubevirt/releases. Download it, move it to your PATH, then use it to upload the images to the cluster:

oc project openshift-cnv
virtctl image-upload dv cirros-dv --size=500Mi --image-path=./cirros-0.5.1-x86_64-disk.raw --insecure
virtctl image-upload dv centos8-dv --size=5Gi --image-path=./CentOS-8-GenericCloud-8.2.2004-20200611.2.x86_64.raw --insecure
oc get dvs
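The --size passed to virtctl must be at least the image's virtual size (which you can read with qemu-img info). A small helper to round a byte count up to whole GiB for the flag — dv_size_gi is a name I made up for this sketch, not part of virtctl:

```shell
# Round a byte count up to whole GiB, formatted for virtctl's --size flag.
# dv_size_gi is a hypothetical helper, not part of virtctl itself.
dv_size_gi() {
  bytes=$1
  echo "$(( (bytes + 1073741823) / 1073741824 ))Gi"
}

dv_size_gi 5368709120   # exactly 5 GiB -> prints 5Gi
dv_size_gi 5368709121   # one byte over -> prints 6Gi
```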

Now you can create VMs in the OpenShift console under “Workloads -> Virtualization”:

For example, here I’m booting a VM over PXE on OpenShift Virtualization:

Or deploy using the YAML template below using one of the uploaded images:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1alpha3
    kubevirt.io/storage-observed-api-version: v1alpha3
    name.os.template.kubevirt.io/rhel6.10: Red Hat Enterprise Linux 6.0 or higher
  name: cirros01
  namespace: openshift-cnv
  labels:
    app: cirros01
    flavor.template.kubevirt.io/tiny: 'true'
    os.template.kubevirt.io/rhel6.10: 'true'
    vm.kubevirt.io/template: rhel6-server-tiny-v0.7.0
    vm.kubevirt.io/template.namespace: openshift
    vm.kubevirt.io/template.revision: '1'
    vm.kubevirt.io/template.version: v0.11.2
    workload.template.kubevirt.io/server: 'true'
spec:
  dataVolumeTemplates:
  - apiVersion: cdi.kubevirt.io/v1alpha1
    kind: DataVolume
    metadata:
      creationTimestamp: null
      name: cirros01-disk-0
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi
        volumeMode: Filesystem
      source:
        pvc:
          name: cirros-dv
          namespace: openshift-cnv
  template:
    metadata:
      creationTimestamp: null
      labels:
        flavor.template.kubevirt.io/tiny: 'true'
        kubevirt.io/domain: cirros01
        kubevirt.io/size: tiny
        os.template.kubevirt.io/rhel6.10: 'true'
        vm.kubevirt.io/name: cirros01
        workload.template.kubevirt.io/server: 'true'
    spec:
      domain:
        cpu:
          cores: 1
          sockets: 1
          threads: 1
        devices:
          autoattachPodInterface: false
          disks:
          - bootOrder: 1
            disk:
              bus: virtio
            name: disk-0
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - bridge: {}
            model: virtio
            name: nic-0
          networkInterfaceMultiqueue: true
          rng: {}
        machine:
          type: pc-q35-rhel8.2.0
        resources:
          requests:
            memory: 1Gi
      evictionStrategy: LiveMigrate
      hostname: cirros01
      networks:
      - multus:
          networkName: bridge-network
        name: nic-0
      terminationGracePeriodSeconds: 0
      volumes:
      - dataVolume:
          name: cirros01-disk-0
        name: disk-0
      - cloudInitNoCloud:
          userData: |
            #cloud-config
            name: default
            hostname: cirros01
        name: cloudinitdisk

This template creates a Cirros VM based on the previously uploaded image (cirros-dv), connected to the same LAN as the OpenShift nodes, so it will get an IP from your DHCP server.
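As an aside, if you'd rather keep a VM on the cluster's pod network instead of the node VLAN, KubeVirt also supports a masquerade interface. A sketch of only the sections that would change (you would also drop autoattachPodInterface: false):

      interfaces:
      - masquerade: {}
        model: virtio
        name: nic-0

      networks:
      - pod: {}
        name: nic-0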

You can also customize the YAML to run cloud-init configuration, such as injecting SSH keys, creating users and groups, and setting passwords:

#cloud-config
# Add groups to the system
groups:
  - cloud-users
# Add users to the system. Users are added after groups are added.
# user: clouduser
# pwd: cloudpwd
name: default
ssh_authorized_keys:
  - >-
    ssh-rsa
    AAAAB3NzaC1yc2....... [YOUR KEY HERE]==
    me@email.com
hostname: myVM-01
users:
  - default
  - name: clouduser
    gecos: Cloud User
    primary_group: cloud-users
    groups: users
    lock_passwd: false
    ssh_pwauth: True
    passwd: $apr1$1jwRerPZ$R/1CrrXu4MvHdAy43IrtW.
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2....... [YOUR KEY HERE]== me@email.com
runcmd:
  - echo VM IP: $(hostname -i) | sudo tee /etc/motd.d/vmip
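The passwd field takes a crypt-style hash, not the plain-text password. One way to generate an APR1 (MD5-based) hash like the one in the template, assuming openssl is available locally:

```shell
# Generate an APR1 password hash for cloud-init's passwd field.
# The salt here is arbitrary; omit -salt to let openssl pick a random one.
openssl passwd -apr1 -salt 1jwRerPZ cloudpwd
```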
Finally, a few useful commands for managing the created objects:

# All objects are in the openshift-cnv namespace
oc project openshift-cnv
# Get VirtualMachine (Object that creates running VMs)
oc get vms
# Get VirtualMachineInstance (proxy object to the running VM)
oc get vmis
# Get DataVolumes (VM disks)
oc get dvs
# Get NodeNetworkConfigurationPolicy (Node physical network interface)
oc get nncp
# Get NetworkAttachmentDefinitions (bridge between VM and Host NW)
oc get net-attach-def

This is a great way to test the functionality and even run some VMs managed by OpenShift, integrated with your automation of choice. It brings VMs to the same level of manageability as containers, and having everything under the same roof brings lots of benefits.

Please send me your feedback on Twitter @carlosedp.

I write about everything cloud and all the tech behind it. If you like my projects and would like to support me, check out my Patreon at https://www.patreon.com/carlosedp
