openshift

Recovering kubeconfig for a Cluster Created with RHACM

Red Hat Advanced Cluster Management for Kubernetes and its upstream Open Cluster Management automate cluster lifecycle management from creation through configuration, upgrade, and destruction. If a cluster is created by RHACM, you may need to download the kubeadmin password and the kubeconfig. This is easily accomplished by browsing to the RHACM cluster overview, but how do you do the same from the CLI?

ClusterDeployment

The creation of a cluster starts with a ClusterDeployment, which is interpreted by Hive.
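As a rough sketch of the CLI route, the admin credentials live in secrets referenced by the ClusterDeployment (the field paths come from the Hive API; names below, like the mycluster namespace, are illustrative):

# Look up the secret holding the admin kubeconfig via the ClusterDeployment
KUBECONFIG_SECRET=$(oc -n mycluster get clusterdeployment mycluster \
  -o jsonpath='{.spec.clusterMetadata.adminKubeconfigSecretRef.name}')
# Extract the kubeconfig key from that secret into the current directory
oc -n mycluster extract secret/"$KUBECONFIG_SECRET" --keys=kubeconfig --to=.

# The kubeadmin password works the same way
PASSWORD_SECRET=$(oc -n mycluster get clusterdeployment mycluster \
  -o jsonpath='{.spec.clusterMetadata.adminPasswordSecretRef.name}')
oc -n mycluster extract secret/"$PASSWORD_SECRET" --keys=password --to=-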

Continue reading

Deploying a Cross-platform Windows and Linux Application to OpenShift

An application can sometimes require diverse components that span technology stacks. There may be a dependency on a legacy component built for Windows that is not suitable for deployment to Linux. The good news is it may still be suitable for deployment to Kubernetes. With a Windows node in your OpenShift cluster, you can deploy cross-platform applications that simultaneously leverage the strengths of Linux and Windows.
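As a minimal sketch of the scheduling side (image, names, and values are illustrative of the usual WMCO setup, not taken from the post), the Windows half of such an application is pinned to Windows nodes with a node selector and a toleration:

# Deploy a Windows-only component alongside Linux workloads in one cluster
cat <<'EOF' | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-component
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-component
  template:
    metadata:
      labels:
        app: win-component
    spec:
      nodeSelector:
        kubernetes.io/os: windows      # schedule only onto Windows nodes
      tolerations:
      - key: "os"                      # tolerate the taint placed on Windows nodes
        value: "Windows"
        effect: "NoSchedule"
      containers:
      - name: win-component
        image: mcr.microsoft.com/windows/servercore:ltsc2019
        command: ["ping", "-t", "localhost"]   # placeholder long-running process
EOF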

Continue reading

Adding a Windows Node to an OpenShift Cluster

The Windows Machine Config Operator builds and configures Windows machines to act as nodes in an OpenShift cluster, enabling cross-platform workloads. This post will demonstrate the addition of a Windows node to an existing cluster and explore the integration of Windows and Kubernetes.
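A hedged sketch of the usual WMCO flow (the secret name and namespace are as documented for WMCO; the MachineSet name and key path are illustrative):

# WMCO uses this SSH key to bootstrap and configure Windows instances
oc create secret generic cloud-private-key \
  --from-file=private-key.pem=${HOME}/.ssh/windows-node.pem \
  -n openshift-windows-machine-config-operator

# Scale a Windows MachineSet to provision a node, then watch it join
oc -n openshift-machine-api scale machineset mycluster-windows-worker --replicas=1
oc get nodes -l kubernetes.io/os=windows -w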

Continue reading

Installing OpenShift on Azure for Windows Containers

Adding support for Windows nodes to your OpenShift cluster is a day-2 operation, but it requires preparation at install time: it is important to accommodate the hybrid networking requirements of Windows Kubernetes nodes. Azure-specific tasks and gotchas are highlighted in this part 1 of 3, while laying groundwork applicable to deploying OpenShift on any provider in preparation for managing Windows containers.
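In broad strokes, that preparation means selecting OVN-Kubernetes at install time and enabling the hybrid overlay before the cluster comes up. A sketch, assuming an install directory named install-dir and the example CIDR from the OpenShift docs (confirm values for your release and address plan):

# install-config.yaml must set networking.networkType: OVNKubernetes
openshift-install create manifests --dir=install-dir

# Add a Network operator config enabling the hybrid (Windows) overlay
cat <<'EOF' > install-dir/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork:
        - cidr: 10.132.0.0/14   # must not overlap the cluster network
          hostPrefix: 23
EOF

openshift-install create cluster --dir=install-dir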

Continue reading

How do OpenShift Over The Air Updates Work?

OpenShift 4 extends the operator pattern introduced by CoreOS and enables automated management of the Kubernetes cluster and its underlying resources, including machine instances and operating system configuration. Operator-driven over-the-air updates enable automated updates much like those you are accustomed to receiving for your smartphone. What follows is a technical exploration of the OpenShift over-the-air updates implementation.

Operators All the Way Down

What is an “Operator”?
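The moving parts are visible from the CLI. A few commands to poke at the machinery (the version number is illustrative):

# The ClusterVersion resource records the current and desired versions
oc get clusterversion
# Show the update channel and any available updates
oc adm upgrade
# Ask the Cluster Version Operator to move to an available version
oc adm upgrade --to=4.6.8
# The CVO reconciles the release payload; each operator reports its progress
oc get clusteroperators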

Continue reading

OpenShift 4 on OpenStack Networking and Installation

OpenShift Container Platform 4 is much more like Tectonic than OpenShift 3, particularly when it comes to installation and node management. Rather than building machines and running an Ansible playbook to configure them, you now have the option of setting a few parameters in an install config and running an installer to build and configure the cluster from scratch. I would like to illustrate how the basics of the networking might look when installing OpenShift on OpenStack.
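The flow looks roughly like this (a sketch; the platform field names shown in the comment varied across early 4.x releases, so check your installer's docs):

# Generate a skeleton install-config.yaml interactively
openshift-install create install-config --dir=ocp-osp
# The config gains an openstack platform section, along the lines of:
#   platform:
#     openstack:
#       cloud: mycloud            # entry in clouds.yaml
#       externalNetwork: public   # provider network for floating IPs
# The installer then builds and configures the cluster from scratch
openshift-install create cluster --dir=ocp-osp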

Continue reading

Playbook to replace bootstrap.kubeconfig and node certificates on OpenShift 3.10 and 3.11

If you are a serial upgrader like me, you may have found that at one point during your 3.10.xx patching (say 3.10.119) you hit this error during the data plane upgrade:

TASK [openshift_node : Approve the node] ************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_node/tasks/upgrade/restart.yml:49
Using module file /usr/share/ansible/openshift-ansible/roles/lib_openshift/library/oc_csr_approve.py
...
FAILED - RETRYING: Approve the node (30 retries left). Result was: {
    "all_subjects_found": [],
    "attempts": 1,
    "changed": false,
    "client_approve_results": [],
    "client_csrs": {},
    "failed": true,
    "invocation": {
        "module_args": {
            "node_list": [
                "ose-test-node-01.
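While the post's fix is a playbook, the failing step is just CSR approval, so the manual equivalent is worth knowing (a sketch; run with cluster-admin):

# List pending node client/serving certificate requests
oc get csr
# Approve them (review each request before doing this in production)
oc get csr -o name | xargs oc adm certificate approve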

Continue reading

Downgrade Etcd 3.3.11 to 3.2.22 for OpenShift Compatibility

While I was working on migrating etcd to my master nodes, I was bitten by an incompatible etcd v3.3.11 RPM made available via the RHEL Server Extras repo. Before I got to my last master, the RPM was no longer available, and the scaleup playbook failed. I became aware that 3.3.11 is not compatible and should not have been made available. Unfortunately, all members of my etcd cluster were already upgraded, and the fix is to take down the cluster, downgrade etcd, and restore from a snapshot.
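The shape of the recovery, per member, is roughly the following sketch (the snapshot path is illustrative, and a multi-member restore also needs --name, --initial-cluster, and peer URL flags):

systemctl stop etcd
yum downgrade -y etcd-3.2.22
# snapshot restore refuses to reuse a populated data dir, so move it aside
mv /var/lib/etcd /var/lib/etcd.bad
ETCDCTL_API=3 etcdctl snapshot restore /root/etcd-snapshot.db \
  --data-dir /var/lib/etcd
systemctl start etcd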

Continue reading

Etcdctl v2 and v3 Aliases for Peer Authenticated Commands

Getting all the arguments to etcdctl right can be a bit of a pain. Here are a couple of aliases which take advantage of the values in the etcd.conf file.

alias etcd2='. /etc/etcd/etcd.conf && \
  ETCDCTL_API=2 etcdctl \
  --cert-file ${ETCD_PEER_CERT_FILE} \
  --key-file ${ETCD_PEER_KEY_FILE} \
  --ca-file ${ETCD_PEER_TRUSTED_CA_FILE:-$ETCD_PEER_CA_FILE} \
  --endpoints "${ETCD_ADVERTISE_CLIENT_URLS}"'

alias etcd3='. /etc/etcd/etcd.conf && \
  ETCDCTL_API=3 etcdctl \
  --cert ${ETCD_PEER_CERT_FILE} \
  --key ${ETCD_PEER_KEY_FILE} \
  --cacert ${ETCD_PEER_TRUSTED_CA_FILE:-$ETCD_PEER_CA_FILE} \
  --endpoints "${ETCD_ADVERTISE_CLIENT_URLS}"'

If you are using OpenShift, you may also find that you already have some bash functions enabled by the etcd role in /etc/profile.
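With the aliases in place, the usual health checks become one-liners, for example:

etcd2 cluster-health
etcd3 endpoint health
etcd3 member list -w table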

Continue reading

Migration of Etcd to Masters for OpenShift 3.9 to 3.10 Upgrade

As of OpenShift Container Platform 3.10, etcd is expected to run in static pods on the master nodes in the control plane. You may have deployed an HA cluster with dedicated etcd nodes managed with systemd. How do you migrate to this new architecture?

Assumptions:

- You are running OCP 3.9
- You have multiple master nodes
- You have dedicated etcd nodes
- You are running RHEL, not Atomic, nodes

Outline:

- Back up etcd
- Scale up the etcd cluster to include the master nodes
- Configure the OpenShift masters to ignore the old etcd nodes
- Scale down the etcd cluster to remove the old etcd nodes

Detailed Steps

Follow along in this document https://docs.
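For context, the scale-up portion is driven by openshift-ansible: the masters are added to the [new_etcd] inventory group and the etcd scaleup playbook is run (a sketch using the playbook path shipped by the openshift-ansible RPM; the inventory path is illustrative):

# Back up etcd first, then grow the cluster onto the masters
ansible-playbook -i /etc/ansible/hosts \
  /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/scaleup.yml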

Continue reading