OpenShift and common services

Installing OpenShift 3.x on vSphere

VMware vSphere is server virtualization software used to automate datacenter operations. In on-premises scenarios, we can provision virtual machines in vSphere and install OpenShift on top of them to provide the container orchestration platform needed to run the Cloud Paks.

When using VMware vSphere, it is recommended to use the vSphere Storage Provider to allow dynamic provisioning of block storage for Cloud Paks. This should be configured in the inventory file during installation by following these instructions: https://docs.openshift.com/container-platform/3.11/install_config/configuring_vsphere.html. The Terraform-based installation described below configures this automatically.
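
For a manual installation, a minimal sketch of the resulting storage class is shown below. The class name matches the one created by the Terraform automation; the datastore value is illustrative and must exist in your environment.

cat <<'EOF' | oc apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-standard            # illustrative; the Terraform automation uses this name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin                  # thin-provisioned VMDKs
  datastore: ds01                   # illustrative datastore name
EOF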

For file storage requirements, we tested GlusterFS, but NFS should also work. GlusterFS has the added benefit of an in-tree dynamic storage provisioner: https://docs.openshift.com/container-platform/3.11/install_config/storage_examples/gluster_dynamic_example.html#dynamic-provisioning
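
As a rough sketch of what a GlusterFS dynamic-provisioning storage class looks like (the heketi endpoint, user, and secret names below are placeholders; the linked example covers the full setup):

cat <<'EOF' | oc apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: glusterfs-storage                              # illustrative name
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage.example.com:8080"    # placeholder heketi endpoint
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"                          # placeholder secret holding the heketi key
EOF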

In disconnected installations, there may be additional steps to follow: https://docs.openshift.com/container-platform/3.11/install/disconnected_install.html

Here are the tested and recommended infrastructure components we used to evaluate Cloud Paks installed on OpenShift on vSphere:

Component | Tested | Recommended
Load Balancer | none (non-HA), or HAProxy (pseudo-HA) | F5 BIGIP or other appliance
DNS | /etc/hosts for internal cluster, bind9 (in VM) for wildcard domain | highly-available DNS
Certificates (for console and routes) | self-signed | internal PKI
Block Storage | vSphere volume | vSphere volume
File Storage | GlusterFS | GlusterFS or NFS
Registry Volume type | vSphere Volume, GlusterFS | GlusterFS
Identity | htpasswd | LDAP or OIDC provider

Creating RHEL VMware Template

Before using the VMware Terraform automation, the vSphere infrastructure must have a VM template available for the RHEL OS image. This VMware template is used to create each of the VMs for the OpenShift cluster nodes.

Download RHEL 7.6 ISO

From your Red Hat access account, download the RHEL 7.6 Boot ISO to your local machine. Refer to the Red Hat download documentation.

Upload ISO to VSphere

From your local machine, connect to your VMware instance and upload the ISO to a datastore. This will be used as the base of the template created in the next step. Refer to the VMware ISO upload documentation.

Take note of the datastore and folder chosen during the upload, as they will be needed later when installing the OS into the VM that becomes the template.
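
If you prefer a command-line workflow, VMware's govc utility can perform the same upload; the server, credentials, datastore, and folder below are placeholders for your environment:

export GOVC_URL=vsphere-server.my-domain.com
export GOVC_USERNAME=<user>
export GOVC_PASSWORD=<password>
# Upload the ISO into an "ISO" folder on the chosen datastore (placeholder names)
govc datastore.upload -ds datastore1 rhel-server-7.6-x86_64-boot.iso ISO/rhel-server-7.6-x86_64-boot.iso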

Create a VMware Virtual Machine

A VMware template is created from a runnable Virtual Machine. In this section, you’ll create a new machine definition and install the basic operating system.

Create the Virtual Hardware definition

Define the virtual hardware settings for a new VM by using the guidance in the VMware Docs topic Create a Virtual Machine Without a Template or Clone. Pay attention to these parameters (the author found these settings in Step 7, “Customizing Hardware”, of the ESX 6.5 “New Virtual Machine” wizard):

  • “Storage”: set per your vSphere administrator's guidance.
  • The CPU count, memory, and disk sizes are usually modified later by the deployment automation, so the initial values aren’t critical. Reasonable initial values are 2 CPU, 8 GB memory, and 100 GB disk.
  • IMPORTANT: Select “Thin provisioning” for the hard disk definition. This selection can be set under “New Disk” > “Disk Provisioning”. Thin provisioning provides a smaller initial resource allocation while allowing future growth.
  • “New network”: select “Browse” and choose the network value provided by your vSphere administrator.

Once the wizard completes, you will have a new VM definition, but the operating system still needs to be installed.

Install RHEL from the ISO image

After defining the virtual hardware specification, you need to install the guest OS.

Reference the instructions at VMware Docs topic Installing Guest Operating System. The last option on this page describes how to “Install a Guest Operating System from Media”. Use this guidance to install the OS from the ISO file that you uploaded in an earlier step. You can make the ISO image available to your VM definition by setting the CD/DVD drive to “Datastore / ISO Drive”, then selecting the bootable ISO file that you uploaded. Be sure to also select that the drive should be “Connected”.

The RHEL install process is part of the ISO image. Power on the newly created machine definition to launch the install from the .ISO image in the virtual CD-ROM/DVD drive. The following guidance may be useful when performing the install.

  • Installation destination: “Automatic partitioning” works OK.
  • Create one user (the author uses the name “admin”). Make this user an Administrator by selecting the appropriate checkbox on the “new user” creation panel in the wizard.

Select “Reboot” at the conclusion of the RHEL installation process.

Add OpenShift prerequisites to the RHEL image

The basic RHEL operating system is now installed, but the RHEL repositories that support the OpenShift installation are not. The deployment automation used later also requires a user with administrative access. This section provides guidance for setting up these prerequisites, as well as other general housekeeping that is appropriate when creating a template.

Add privileges to your ‘admin’ user

Add the needed sudo privileges to your ‘admin’ user by adding one line to the /etc/sudoers configuration file. The “visudo” utility is a vi-like editor that provides a measure of safety when editing this file. From a terminal prompt, issue:

visudo

Add the following line to the bottom of the file. If your admin user is named something other than ‘admin’, then substitute your admin user name for ‘admin’ in the line below:

admin ALL=(ALL) NOPASSWD: ALL

Save the changes, and exit.
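
To confirm that passwordless sudo works for the new user (assuming it is named ‘admin’), you can run a quick check from a terminal prompt:

# Should print "ok" without prompting for a sudo password
su - admin -c 'sudo -n true && echo ok'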

Setup Red Hat subscription access and repos

Modify the following script for your specific credentials, then execute the script at a terminal prompt in your running VM. The yum update will ensure that RHEL has the latest updates installed. This can take a bit of time to download the latest RPM packages, so be patient.

subscription-manager register --username=$rhn_username --password=$rhn_password
subscription-manager attach --pool=$rhn_poolid
yum update -y
subscription-manager repos --disable="*"
subscription-manager repos --enable="rhel-7-server-rpms" --enable="rhel-7-server-extras-rpms" --enable="rhel-7-server-ose-3.11-rpms" --enable="rhel-7-server-ansible-2.6-rpms" --enable="rhel-7-server-optional-rpms" --enable="rhel-7-fast-datapath-rpms" --enable="rh-gluster-3-client-for-rhel-7-server-rpms"
yum install -y perl wget vim-enhanced net-tools bind-utils tmux git iptables-services bridge-utils docker etcd rpcbind ansible bash-completion dnsmasq ntp logrotate httpd-tools bind-utils firewalld libselinux-python conntrack-tools openssl iproute python-dbus PyYAML yum-utils glusterfs-fuse device-mapper-multipath nfs-utils iscsi-initiator-utils ceph-common atomic cifs-utils samba-common samba-client
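
# Optional sanity check (illustrative): confirm that the expected repositories are
# enabled before continuing with cleanup. Remove if you prefer to keep the script minimal.
subscription-manager repos --list-enabled
yum repolist enabled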

package-cleanup --oldkernels --count=1
yum clean all

subscription-manager remove --all
subscription-manager unregister

General housekeeping

The following script cleans up the image to produce a smaller template and to decrease VM deployment time. The final sys-unconfig command deconfigures the system and halts it, leaving the VM powered off and ready to be converted to a template.

Execute the script from a terminal prompt in your running VM:

/sbin/service auditd stop
/sbin/service rsyslog stop

/usr/sbin/logrotate -f /etc/logrotate.conf
/bin/rm -f /var/log/*-???????? /var/log/*.gz
/bin/rm -f /var/log/dmesg.old
/bin/rm -rf /var/log/anaconda
/bin/cat /dev/null > /var/log/audit/audit.log
/bin/cat /dev/null > /var/log/wtmp
/bin/cat /dev/null > /var/log/lastlog
/bin/cat /dev/null > /var/log/grubby
/bin/rm -f /etc/udev/rules.d/70*
/bin/sed -i '/UUID/d' /etc/sysconfig/network-scripts/ifcfg-e*
/bin/sed -i '/HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-e*
/bin/rm -rf /tmp/*
/bin/rm -rf /var/tmp/*
/bin/rm -f /etc/ssh/*key*

/bin/rm -f ~root/.bash_history
unset HISTFILE
/bin/rm -rf ~root/.ssh/
/bin/rm -f ~root/anaconda-ks.cfg
history -c
sys-unconfig

Reference Create a RHEL/CentOS 6/7 Template for VMware vSphere for an explanation of each of the steps in the script above.

Convert to a Template

You’ll convert the VM to a template in this final step. The template makes the machine reusable by the Terraform automation. You can convert the machine to a template by selecting the machine and then choosing the “Template” action from its Actions or right-click menu.

Take note of the folder location and the name of the template, as these are needed in the setup variables for the Terraform deployment.
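
If you are using govc with the same GOVC_* environment variables as before, the conversion can also be scripted; the VM name below is a placeholder:

# Convert the powered-off VM to a template, then confirm its configuration
govc vm.markastemplate rhel-7.6-template
govc vm.info rhel-7.6-template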

OpenShift 3.x Manual Installation on vSphere

Please review the following documentation for manual installation on vSphere VMs: https://docs.openshift.com/container-platform/3.11/install/index.html

As manual installation is error prone, we recommend automation wherever possible.

OpenShift 3.11 Terraform-based installation on vSphere

To install OpenShift 3.11 on VMware using Terraform, you need the following:

  • Terraform 0.12.x installation
  • VMware vSphere with API access
  • Red Hat Network subscription for OpenShift
  • Sizing information for the cluster
  • DNS, subnet, gateway and available IPs for all cluster nodes

The first step in installing OpenShift 3.11 on VMware is to clone the Git repository containing the VMware example:

git clone https://github.com/ibm-cloud-architecture/terraform-openshift3-vmware-example

The files that you get from the Git repository are:

  • variables.tf
  • main.tf
  • infrastructure.tf
  • loadbalancer.tf
  • dns.tf
  • certs.tf
  • output.tf

For the deployment, you must review and configure each of the .tf files for your infrastructure, and create and populate a terraform.tfvars file. We have attempted to separate concerns by file name.

Each *.tf file contains modules invoked for the deployment that you may not need, depending on your configuration. For example, if high availability is not required, the loadbalancer.tf file can be deleted and the related variables removed from the other files.

You should also decide how you want to manage your DNS: with CloudFlare and LetsEncrypt, an RFC2136-compliant dynamic DNS, nip.io, or something else. If you choose something else, it is advisable to customize the /etc/hosts file so that all nodes are recognized properly, as sketched below.
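
For illustration only (names and addresses must match your own Terraform variables and the random host suffixes that the automation appends), /etc/hosts entries on the nodes might look like this:

# Placeholder entries; the hexadecimal suffixes are randomly generated per host
192.168.101.11  ocp311-master-a1b2c3d4.internal-network.local
192.168.101.12  ocp311-infra-b2c3d4e5.internal-network.local
192.168.101.13  ocp311-worker-c3d4e5f6.internal-network.local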

Configuring Terraform

Accessing vSphere

Operations against the VMware infrastructure are performed through the vSphere API. This section of the variables identifies the resources that must already exist in vSphere so the infrastructure can be created. The snippet is below.

#######################################
##### vSphere Access Credentials ######
#######################################
vsphere_server = "vsphere-server.my-domain.com"

# Set username/password as environment variables VSPHERE_USER and VSPHERE_PASSWORD

##############################################
##### vSphere deployment specifications ######
##############################################
# Following resources must exist in vSphere
vsphere_datacenter = "CSPLAB"
vsphere_cluster = "Sandbox"
vsphere_resource_pool = "test-pool"
datastore_cluster = "SANDBOX_TIER4"

In the resources view, this is the vSphere hierarchy that must be identified so the VMs can be created:

(Figure: vSphere resources view)

The disk images for VMs are stored either in a datastore or, in larger environments, in a datastore cluster. You can specify either the datastore_cluster option or the datastore option, but not both; the example above uses datastore_cluster. The datastores tab shows the available choices, as illustrated below:

(Figure: vSphere datastores view)

Note that these names are case sensitive.

vSphere storage class information

These values specify the vSphere username and password used to access the datastore. When OpenShift is installed, a storage class named vsphere-standard is created that provisions block volumes on the specified datastore using this vSphere user and password.

# for the vsphere-standard storage class
vsphere_storage_username = "<storageuser>"
vsphere_storage_password = "<storagepassword>"
vsphere_storage_datastore = "ds01"

Please see the following link for the permissions required by the storage user.
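
After the cluster is up, one way to confirm that dynamic provisioning through this storage class works is to create a small test claim (the claim name and size below are illustrative):

cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vsphere-test-claim          # illustrative name
spec:
  storageClassName: vsphere-standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
# The claim should reach the Bound state once a VMDK has been provisioned
oc get pvc vsphere-test-claim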

Template

This information also comes from vSphere. OpenShift requires a Red Hat Enterprise Linux 7.4 or later VM template. You must also supply the credentials used to access the VMs created from the template: ssh_user plus either ssh_password or ssh_private_key_file. The hostname_prefix is used to prefix both the VM names and the hostnames created in DNS. Note that these names have a random suffix of 8 hexadecimal characters appended to ensure that the hosts are unique.

template = "rhel-7.6-template"
# SSH username and private key to connect to VM template, has passwordless sudo access
ssh_user = "virtuser"
ssh_password = "<mypassword>"
ssh_private_key_file = "~/.ssh/id_rsa"

# MUST consist of only lower case alphanumeric characters and '-'
hostname_prefix = "ocp311"

Please see the following link for information about template preparation.
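
If you plan to use ssh_private_key_file, one approach is to generate a key pair on the machine that runs Terraform and install the public key for the template user while the template VM is still running; the user name and address below are placeholders:

# Generate a key pair without a passphrase (skip if you already have one)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
# Install the public key for the template user before converting the VM to a template
ssh-copy-id -i ~/.ssh/id_rsa.pub virtuser@<template-vm-ip>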

vSphere folder

The folder defined should not already exist, as the installation will create it. The folder may be given as a path; in that case, the last element of the path is the folder that will be created.

# vSphere Folder to provision the new VMs in, will be created
folder = "openshift311-folder"

Redhat account information

These are the Red Hat accounts used for the Red Hat Network subscription and for pulling the OpenShift images. You can use the same username and password for both, but it is recommended that you create a Red Hat service account for the image registry (see these instructions). The Red Hat subscription pool ID can be retrieved from this page by selecting the subscription ID that you want to use.

# it's best to use a service account for these
image_registry_username = "<registry.redhat.io service account username>"
image_registry_password = "<registry.redhat.io service account password>"

rhn_username = "<rhn username>"
rhn_password = "<rhn password>"
rhn_poolid = "<rhn pool id>"

Networking settings

Networking variables must be configured for the VMs. You may provide values for configuring both a private and a public network; see also the network section.

The public network parameters are optional; if specified, the bastion node is placed on the public network. In that scenario you may want to stand up two load balancers, one on the private network and one on the public network, to expose client traffic.

As a note, the example private network below generates a first IP of 192.168.101.11, because the block base plus the offset plus 1 gives that address (a quick check of this arithmetic is shown after the snippet). You need 4 addresses in the public network and the number of nodes + 1 in the private network. If you have only a single flat network (i.e. the public and private networks are the same), set the two offsets at least 4 apart.

##### Network #####
private_network_label = "private_network"
private_staticipblock = "192.168.101.0/24"
private_staticipblock_offset = 10           # IP assignment starts at 192.168.101.11
private_netmask = "24"
private_gateway = "192.168.101.1"
private_domain = "internal-network.local"
private_dns_servers = [ "192.168.101.2" ]
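
If you want to double-check the offset arithmetic, the first assigned address can be computed from the block and offset (this assumes Python 3 is available on the workstation running Terraform):

# block base + offset + 1 -> 192.168.101.11
python3 -c "import ipaddress; print(ipaddress.ip_network('192.168.101.0/24')[10 + 1])"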

Optionally, if you want to place the cluster on multiple networks, the cluster nodes where the pod overlay network is set up can be on a completely private network, while the bastion host and load balancers can be placed on an external network.

public_network_label = "external_network"
public_staticipblock = "10.30.65.0/24"
public_staticipblock_offset = 30            # ips - [ 10.30.65.31, 10.30.65.32, 10.30.65.33 ]
public_netmask = "24"
public_gateway = "10.30.65.1"
public_domain = "my-public-domain.com"
public_dns_servers = [ "1.1.1.1" ]

This Terraform template also supports non-contiguous IP addresses: you can specify the IP address of each node explicitly. In this example, Terraform provisions two worker nodes, at .13 and .14, and three storage nodes, at .15, .16, and .17.

##### Network #####
private_network_label = "private_network"
bastion_private_ip = ["192.168.0.10"]
master_private_ip = ["192.168.0.11"]
infra_private_ip = ["192.168.0.12"]
worker_private_ip = ["192.168.0.13", "192.168.0.14"]
storage_private_ip = ["192.168.0.15", "192.168.0.16", "192.168.0.17"]

private_netmask = "24"
private_gateway = "192.168.0.1"
private_domain = "my-private-domain.local"
private_dns_servers = [ "192.168.0.1" ]

DNS settings

The master_cname and app_cname may be manually defined in the DNS for accessing the console and application routes, or added automatically using one of the DNS modules. The master_cname is a CNAME record in DNS pointing at the master node or a load balancer distributing traffic to the master nodes. The app_cname is a wildcard CNAME record in DNS pointing at the infra node or a load balancer distributing traffic to the infra nodes.

# these were added to my public DNS, the app_cname is a wildcard
master_cname = "ocp-console.my-public-domain.com"
app_cname = "ocp-apps.my-public-domain.com"
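
If you manage the wildcard domain yourself with bind9 (as in our tested configuration), the records might look like the sketch below; the zone file path and the load balancer target lb.my-public-domain.com are placeholders for your environment:

cat >> /var/named/my-public-domain.com.zone <<'EOF'
; illustrative records; the CNAME target is a placeholder load balancer
ocp-console   IN CNAME  lb.my-public-domain.com.
*.ocp-apps    IN CNAME  lb.my-public-domain.com.
EOF
# Reload the zone after editing (assumes a standard bind9/rndc setup)
rndc reload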

Nodes definition and sizing

This section defines the number of nodes of each kind and their vCPU, memory, and disk sizes. The minimum disk size is determined by the template; it must be the same as or larger than the disk of the template that you start from.

# node definitions
master = {
  nodes = "3"
  vcpu = "8"
  memory = "16384"

  disk_size = "100"
  docker_disk_size = "100"
  thin_provisioned = "true"
  keep_disk_on_remove = false
  eagerly_scrub = false
}

infra = {
  nodes = "3"
  vcpu = "8"
  memory = "32768"

  disk_size = "100"
  docker_disk_size = "100"
  thin_provisioned = "true"
  keep_disk_on_remove = false
  eagerly_scrub = false
}

worker = {
  nodes = "3"
  vcpu = "8"
  memory = "32768"

  disk_size = "100"
  docker_disk_size = "100"
  thin_provisioned = "true"
  keep_disk_on_remove = false
  eagerly_scrub = false
}

storage = {
  nodes = "3"
  vcpu = "4"
  memory = "16384"

  disk_size = "100"
  docker_disk_size = "100"
  gluster_num_disks = "1"
  gluster_disk_size = "200"
  thin_provisioned = "true"
  keep_disk_on_remove = false
  eagerly_scrub = false
}

vSphere Credentials

Once you have customized all the variables, you also need to set some environment variables to store your credentials:

  • VSPHERE_USER and VSPHERE_PASSWORD: user ID and password to access the VMware vSphere environment

Use the following command:

export VSPHERE_USER=<user>
export VSPHERE_PASSWORD=<password>

Provision the environment using Terraform

  • Initialize Terraform environments (pulling in modules and plugins)

    terraform init
    
  • Use plan to see what Terraform will do and to validate that the variables are correct:

    terraform plan
    
  • Provision the environment

    terraform apply -auto-approve
    

The result should give you a working OpenShift 3.11 environment.
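
The values exported by output.tf (which vary by repository version) can be reviewed at any time with:

terraform output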

You should be able to access the environment at https://ocp-console.<domain>, logging in initially as admin with the password admin. Once additional setup of the environment is complete, you may want to configure a different identity provider for OpenShift by following the documentation.
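
If your installation uses the htpasswd identity provider (the configuration we tested), a sketch of changing the default password or adding a user is below; the htpasswd file path is the location conventionally used by the OpenShift 3.11 installer and may differ in your inventory:

# Run on each master node; the file path is an assumption based on a typical
# OpenShift 3.11 installer configuration.
sudo htpasswd -b /etc/origin/master/htpasswd admin '<new-password>'

# Optionally grant cluster administration rights to a user
oc adm policy add-cluster-role-to-user cluster-admin admin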
