July 1, 2020

Kubernetes vs OpenShift


Kubernetes and OpenShift have a lot in common. In fact, OpenShift is more or less Kubernetes with some additions. But what exactly is the difference?

It’s not so easy to tell, as both products are moving targets. The delta changes with every release - be it of Kubernetes or OpenShift. I tried to find out and stumbled across a few blog posts here and there. But they were all based on older versions - thus not really up to date.

So I took the effort to compare the most recent versions of Kubernetes and OpenShift.
Before we dive into the comparison let me clarify what we are actually talking about. I will focus on bare Kubernetes, i.e. I will ignore all additions and modifications that come with the many distributions and cloud based solutions. On the other hand I will talk about Red Hat OpenShift Container Platform (OCP), the enterprise product derived from OKD aka The Origin Community Distribution of Kubernetes that powers Red Hat OpenShift, previously known as OpenShift Origin.

Base
Both products differ in the environments they can run in. OpenShift is limited to Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux Atomic Host. I suppose this limitation is less due to technical reasons than to Red Hat wanting to keep support for OpenShift viable. This assumption is supported by the fact that OKD can also be installed on Fedora and CentOS.

Kubernetes, on the other hand, doesn’t impose many requirements on the underlying OS. Its package manager should be RPM or deb based - which covers practically every popular Linux distribution. But you should probably stick to the most commonly used distributions: Fedora, CentOS, RHEL, Ubuntu or Debian.

This applies to so-called bare metal installations (including virtual machines). It should be mentioned that creating a Kubernetes cluster at that level requires quite some effort and skill.


But this is the age of cloud computing, so deploying Kubernetes on an IaaS platform or even using a managed Kubernetes cluster is also a practicable approach. Kubernetes can be deployed on any major IaaS platform: AWS, Azure, GCE, …

Compared to that, there is only a limited selection of OpenShift service providers: OpenShift Online, where you get your own projects on a shared OpenShift cluster, and OpenShift Dedicated, where you get your own dedicated OpenShift cluster in the cloud - the latter based on Amazon Web Services (AWS). If you try really hard you will also find a few more providers, like T-System’s AppAgile PaaS, DXC’s Managed Container PaaS, Atos’ AMOS and Microsoft’s OpenShift - the latter two only announced so far. Like Kubernetes, OpenShift can also be run on all major IaaS platforms.

Rollout
Rolling out Kubernetes is not an easy task. As a consequence of the multitude of platforms it runs on, together with the diversity of options for additionally required services, there is an impressive list of ‘turnkey solutions’ promising to facilitate creating Kubernetes clusters on premises with only a few commands. Most (if not all) are based on one of the following installers.

RKE (Rancher Kubernetes Engine): Installer of the Rancher Kubernetes distribution.
kops: Installer maintained by the Kubernetes project itself to roll out Kubernetes on AWS and (with limitations) also on GCP.
kubespray: Community project of a Kubernetes installer for bare metal and most clouds, based on Ansible and kubeadm.
kubeadm: Also an installer provided by the Kubernetes project. It is more focused on bare metal and VMs, imposes some prerequisites on the machines and is less of a ‘do it all in one huge leap’ tool. It can also be used to add a single node to or remove it from an existing cluster.
kube-up.sh: Deprecated predecessor of kops.
OpenShift on the other hand aims to be a full-fledged cluster solution without the need to install additional components after the initial rollout. Apparently it is mainly targeted towards manual installation on physical (or virtual) machines. Consequently it comes with its own installer based on Ansible. It does a decent job installing OpenShift based on only a minimal set of configuration parameters. However, rolling out OpenShift is still a complex task, as there is a plethora of options and variables to specify the properties of the intended cluster.

Web-UI
When it comes to administering the cluster and checking the status of the various resources via a web-based user interface, you hit one big difference between Kubernetes and OpenShift.

Kubernetes offers the so-called dashboard. In my opinion it’s just an afterthought. It’s not an integral part of the cluster but has to be installed separately. Additionally it’s not easily accessible: you can’t just fire up a certain URL, but have to use kubectl proxy to forward a port of your local machine to the cluster’s API server.

The result is a web UI that indeed informs you about the status of many components but turns out to be of limited value for real day-to-day administrative work, since it lacks virtually any means to create or update resources. You can upload YAML files to achieve that - but then, what’s the gain compared to using kubectl?
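
For comparison, here is a minimal sketch of what that YAML upload amounts to when done outside the dashboard, assuming the official kubernetes Python client and a hypothetical manifest file my-app.yaml:

from kubernetes import client, config, utils

# Use the local kubeconfig, just like kubectl would.
config.load_kube_config()
k8s_client = client.ApiClient()

# Create every resource described in the manifest -- roughly what the
# dashboard's YAML upload or `kubectl create -f my-app.yaml` does.
utils.create_from_yaml(k8s_client, "my-app.yaml")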

Compared to that, OpenShift’s web console truly shines. It has a login page. It can be accessed without jumping through several hoops. It offers the possibility to create and change most resources in a form based fashion.


With the appropriate rights the web UI also offers you the cluster console for a cluster-wide view of many resources (e.g. nodes, projects, cluster role bindings, …). However, you cannot administer the cluster itself via the web UI (e.g. add, remove or modify nodes).

Integrated image registry
There is no such thing as an integrated image registry in Kubernetes. You may set up and run your own private Docker registry, but as with many additions to Kubernetes the procedure is not well documented, cumbersome and error-prone.
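
To give an idea of what ‘setting up your own registry’ involves at minimum, here is a rough sketch using the official kubernetes Python client; the names are placeholders, and persistent storage, TLS and authentication are deliberately left out:

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()
core = client.CoreV1Api()

# A bare-bones in-cluster registry: one replica of the stock registry image.
# A real setup additionally needs persistent storage, TLS and authentication.
container = client.V1Container(
    name="registry",
    image="registry:2",
    ports=[client.V1ContainerPort(container_port=5000)],
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="private-registry"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "registry"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "registry"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="private-registry"),
    spec=client.V1ServiceSpec(
        selector={"app": "registry"},
        ports=[client.V1ServicePort(port=5000, target_port=5000)],
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
core.create_namespaced_service(namespace="default", body=service)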

OpenShift comes with its integrated image registry that can be used side by side with e.g. Docker Hub and Red Hat’s image registry. It is typically used to store build artifacts and thus cooperates nicely with OpenShift’s ability to build custom images (see below).

Not to forget the registry console, which presents you valuable information about all images and image streams, their relation to the cluster’s projects and the permissions on the streams. The latter is a prerequisite for OpenShift’s ability to host multiple tenants, hiding the artifacts of one project from members of other projects.

Image streams
Image streams are a concept unique to OpenShift. They allow you to reference images in image registries - either the internal one of your OpenShift cluster or some public registry - by means of tags (not to be confused with the tags of Docker images). You can even reference a tag within the same image stream. The power of image streams comes from the ability to trigger actions in your cluster when the reference behind a tag changes. Such a change can be caused either by uploading a new image to the internal image registry or by periodically checking the image tag of an external image registry. In both cases the corresponding image stream tag(s) are updated and the action(s) are triggered.

Kubernetes has nothing like that - not even as a third party solution that can be added separately.
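
To approximate this behaviour in Kubernetes you would have to hand-roll something yourself. A rough sketch of the idea, assuming the Docker Registry v2 HTTP API and the official kubernetes Python client; registry, repository, deployment and container names are made up, and authentication against the registry is omitted:

import time
import requests
from kubernetes import client, config

# Hypothetical registry, repository and deployment, for illustration only.
REGISTRY = "https://registry.example.com"
REPO, TAG = "myteam/myapp", "latest"
DEPLOYMENT, NAMESPACE = "myapp", "default"   # container is assumed to share the name

config.load_kube_config()
apps = client.AppsV1Api()

def current_digest():
    # The Docker Registry v2 API returns the manifest digest in a response header.
    resp = requests.head(
        f"{REGISTRY}/v2/{REPO}/manifests/{TAG}",
        headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
    )
    resp.raise_for_status()
    return resp.headers["Docker-Content-Digest"]

last = current_digest()
while True:
    time.sleep(300)                      # poll every five minutes
    digest = current_digest()
    if digest != last:                   # the tag now points to a new image
        last = digest
        # Pin the deployment to the new digest, which triggers a rolling update.
        apps.patch_namespaced_deployment(
            DEPLOYMENT, NAMESPACE,
            {"spec": {"template": {"spec": {"containers": [
                {"name": DEPLOYMENT,
                 "image": f"{REGISTRY.split('//')[1]}/{REPO}@{digest}"}
            ]}}}},
        )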

Builds
Builds are another core concept of OpenShift not available in Kubernetes. They are realized by means of jobs of the underlying Kubernetes. There are Docker builds, source-to-image builds (S2I), pipeline builds (Jenkins) and custom builds. Builds can be triggered automatically when the build configuration changes, or when a base image used in the build or the code base of a source-to-image build is updated. Typically the resulting artifact is another image that is uploaded to the internal image registry, triggering further actions (like deployment of the new image).

Nothing comparable exists in Kubernetes. You may craft your own image and run it as a job to mimic any of the above mentioned build types, but it will still lack the property of being triggered when some of its input is updated, or of triggering further actions.
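
A minimal sketch of the ‘craft your own image and run it as a job’ approach, using the official kubernetes Python client; the builder image and its arguments are purely hypothetical:

from kubernetes import client, config

config.load_kube_config()
batch = client.BatchV1Api()

# A one-off build job. The builder image and its arguments are hypothetical --
# in practice this is any image that fetches sources and produces an image.
build_container = client.V1Container(
    name="build",
    image="example.com/tools/image-builder:latest",   # hypothetical builder image
    args=["--source", "https://git.example.com/myapp.git",
          "--destination", "registry.example.com/myteam/myapp:latest"],
)
job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="myapp-build"),
    spec=client.V1JobSpec(
        backoff_limit=1,
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(containers=[build_container], restart_policy="Never"),
        ),
    ),
)
batch.create_namespaced_job(namespace="default", body=job)
# Unlike an OpenShift build, nothing triggers this job automatically and
# nothing reacts to the image it pushes -- any such wiring is up to you.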

Jenkins inside
A pipeline build is a special form of source-to-image build. It’s actually an image containing a Jenkins instance that monitors configured ImageStreamTags, ConfigMaps and the build configuration and starts a Jenkins build in case of any updates. The resulting artifacts are uploaded to image streams, which may automatically trigger subsequent deployments of the artifacts.

As already mentioned in the previous section, there is nothing like that in Kubernetes. However, you may build and deploy your own custom Jenkins image that drives your CI / CD process. The resulting artifacts will be Docker images uploaded to some image repository. By means of the Jenkins Kubernetes CLI plugin these artifacts can then be deployed in the cluster. But it’s all hand crafted.

Deployment of applications
The native means of deploying an application that consists of several components (pods, services, ingress, volumes, …) in Kubernetes is Helm. It is more powerful than OpenShift Templates, which is why OpenShift Templates can be converted into Helm Charts, but not the other way round. Apparently Helm was not designed with a security or enterprise focus in mind: by default, running Helm requires privileged pods (for Tiller), making it possible for anybody to install an application anywhere in the cluster. Helm can be tweaked to be more secure and aware of Role Based Access Control (RBAC). Due to the mentioned security concerns, Helm is often said to be impossible to deploy in an OpenShift cluster.

Actually, you can deploy Helm on OpenShift, but you have to jump through several hoops, accept (i.e. ignore) the security implications and still end up with a Helm installation that is somewhat limited compared to one on Kubernetes.

OpenShift comes with two mechanisms to deploy applications: Templates and Ansible Playbook Bundles.

Templates predate Helm Charts in Kubernetes. The concept is pretty simple: one YAML file with descriptions of all required cluster resources. The descriptions can be parameterized by means of placeholders that are substituted with concrete values during deployment.
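
A tiny sketch of the underlying idea - parameter substitution as performed by oc process for real templates - using a made-up, stripped-down template:

from string import Template

# A made-up, stripped-down "template": placeholders in the ${NAME} style used
# by OpenShift Templates, substituted with concrete values at deployment time.
template_yaml = """
apiVersion: v1
kind: Service
metadata:
  name: ${APP_NAME}
spec:
  ports:
  - port: ${APP_PORT}
"""

rendered = Template(template_yaml).substitute(APP_NAME="myapp", APP_PORT="8080")
print(rendered)   # ready to be handed to the cluster, e.g. via oc/kubectl or a client library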

Ansible Playbook Bundles (APB) are way more flexible. They are basically Docker images containing the Ansible runtime and a set of Ansible playbooks to provision, deprovision, bind and unbind an application.

Both Templates and APBs can be made available in the respective Service Broker (Template Service Broker and OpenShift Ansible Broker).

Service Catalog, Service Broker
The service catalog is an optional component of Kubernetes that needs to be installed separately. After installation it needs to be wired together with existing service brokers by creating ClusterServiceBroker instances. Operators can then query the service catalog and provision / deprovision the offered services. The service catalog of Kubernetes is more targeted at managed services offered by cloud providers and less at means to provision services within the cluster.

Exposing services
Everything that makes a group of pods with equal functionality accessible through a well-defined gateway is called a service. In Kubernetes there is a confusing diversity of options concerning services: ClusterIP, NodePort, LoadBalancer, ExternalName, Ingress. Everything except ClusterIP is a means to make a service accessible to the outside (i.e. cluster-external) world. NodePort is not recommended for anything but ad-hoc access during development and troubleshooting. Ingress is a reverse proxy for HTTP(S) that forwards traffic to a certain service based on host name (virtual hosts) and / or path patterns. It also handles TLS termination and load balancing.

LoadBalancer and Ingress are not functional in Kubernetes out of the box but have to be backed by additional components. Typically the cloud provider that runs the underlying infrastructure of the Kubernetes cluster also offers the components required for these options. So when you run your own cluster on bare metal or with virtual machines, you again have to tackle the task of installing the required components by hand.
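
NodePort, in contrast, needs no additional components, which is what makes it handy for the ad-hoc access mentioned above. A minimal sketch using the official kubernetes Python client; the app label and ports are placeholders:

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Expose pods labelled app=myapp on a port of every node -- fine for ad-hoc
# access, not recommended as a permanent solution (see above).
service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="myapp-nodeport"),
    spec=client.V1ServiceSpec(
        type="NodePort",
        selector={"app": "myapp"},                        # hypothetical app label
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
created = core.create_namespaced_service(namespace="default", body=service)
print(created.spec.ports[0].node_port)   # the port allocated on each node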

Authentication

When it comes to authenticating users against the cluster, Kubernetes offers several options:

Basic authentication: Authentication headers in requests are verified against a clear-text(!) password file. This file also delivers the group names.

OpenID tokens: This option requires a stand-alone identity provider that delivers ID tokens. Additionally it requires an OIDC plugin for kubectl. There is no ‘behind the scenes’ communication between the Kubernetes API server and the identity provider. Instead, the ID token is simply verified against the certificate of the identity provider.

Webhook tokens: Tokens presented by clients are verified by an external authentication service that implements a simple REST-like API (a sketch follows after this list).

Authentication proxy: Sits between the client and the Kubernetes API server and adds information about the authenticated user, groups and extra data as configurable request headers (e.g. X-Remote-User, X-Remote-Group). Transmission of credentials from the client to the proxy and the actual authentication are totally up to the proxy.
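
To make the webhook option above more concrete, here is a minimal sketch of such an authentication service, built only on the Python standard library; the token table is a stand-in for a real credential store and TLS is omitted for brevity:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in credential store -- a real service would check a database, LDAP, etc.
TOKENS = {"s3cret-token": {"username": "jane", "groups": ["developers"]}}

class TokenReviewHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The API server POSTs a TokenReview object containing the bearer token.
        length = int(self.headers["Content-Length"])
        review = json.loads(self.rfile.read(length))
        token = review.get("spec", {}).get("token", "")
        user = TOKENS.get(token)

        # Answer with a TokenReview whose status says whether the token is valid.
        status = {"authenticated": False}
        if user is not None:
            status = {"authenticated": True, "user": user}
        body = json.dumps({
            "apiVersion": "authentication.k8s.io/v1",
            "kind": "TokenReview",
            "status": status,
        }).encode()

        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The API server is pointed at this endpoint via its
    # --authentication-token-webhook-config-file option.
    HTTPServer(("0.0.0.0", 8443), TokenReviewHandler).serve_forever()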