By Ruhan Wagener
Acting Director, Managed Services & Architecture

Qlik on Kubernetes

This document outlines the steps required to successfully deploy Qlik Sense on Kubernetes using Minikube and, as a secondary step, to set up and configure Qlik Sense Multi-Cloud to integrate with your Kubernetes deployment.

What you need

  1. A Qlik Sense Enterprise license with multi-cloud enabled, in JWT format
  2. An instance of Linux where your Kubernetes cluster will be deployed
  3. Familiarity with Linux distributions and basic commands
  4. An Identity Provider which supports OpenID Connect (OIDC)

What I used in my deployment

Linux
Operating System: Ubuntu 18.04.2 LTS
Kernel: Linux 4.15.0-1044-aws
Architecture: x86-64 
Docker
Client:
 Version:           18.06.2-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        6d37f41
 Built:             Sun Feb 10 03:47:56 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.2-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       6d37f41
  Built:            Sun Feb 10 03:46:20 2019
  OS/Arch:          linux/amd64
  Experimental:     false
Minikube
version: v1.2.0
Helm
Client: &version.Version{SemVer:"v2.14.2", GitCommit:"a8b13cc5ab6a7dbef0a58f5061bcc7c0c61598e7", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.2", GitCommit:"a8b13cc5ab6a7dbef0a58f5061bcc7c0c61598e7", GitTreeState:"clean"}
Kubectl
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:18:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Identity Provider
Auth0 (setup and configuration covered later in this document)

What are these components?

Linux is our operating system of choice and the foundation on which everything else will be built.
Docker is our container runtime and provides the foundation for the microservices architecture in which Qlik will be deployed.
Minikube is the simplest form of a Kubernetes cluster and should only be used for development (which is exactly our current use case). It lets you get up and running in the shortest time without having to struggle with more complex Kubernetes concepts.
Helm is essentially the package manager for Kubernetes applications.
Kubectl is the command line interface (CLI) for interacting with your Kubernetes cluster.

What you're here for - getting it done!

Launch your Linux instance

In my case, I'm using AWS to host my Linux machine, but you can deploy using VirtualBox, any preferred virtualization software, or a cloud provider of choice such as Azure or Google. Once launched, gather the information needed to SSH onto your newly deployed instance. I'm using a Mac and simply leveraging the native Terminal shell. Depending on your environment, the way you connect to your Linux instance may differ slightly from mine below.

SSH to my Linux instance
ssh -i "my_key_file.pem" ubuntu@ec2-3-200-100-201.compute-1.amazonaws.com

This is done from the Mac Terminal and allows me to remotely connect to my Linux instance. From here, I can run the needed commands to install and configure Qlik Sense on Kubernetes.

From here, we can start preparing our instance for QSonK8S. For the purpose of this exercise, I recommend running as root to avoid various permission-related errors and problems. Keep in mind that this is just to get you started and off to the races with QSonK8S, and not a recommended deployment for a production environment. I'll also share some useful links later on to help you further your journey in this space.

So, you'll want to run the below commands in the following sequence before moving on to the next phase.

1st
sudo su root

You may be prompted for a password depending on which user you're currently signed in with.

2nd
sudo apt-get update

When running as root, you don't need "sudo" as you're already elevated with the highest possible privileges. From here on, we'll just run the commands without sudo.

3rd
swapoff -a

We want to disable swap permanently, so modifying /etc/fstab is necessary:

sed -i.bak '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
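To see what that sed expression actually does, here is the same command run against a throwaway sample fstab (the real command above edits /etc/fstab in place, keeping a .bak backup; fstab.sample and its contents are just an illustration):

```shell
# Two sample fstab entries: a root filesystem and a swap partition.
printf '%s\n' 'UUID=abc / ext4 defaults 0 1' 'UUID=def none swap sw 0 0' > fstab.sample

# Comment out every line containing " swap " (same expression as above, different file).
sed -i.bak '/ swap / s/^\(.*\)$/#\1/g' fstab.sample

cat fstab.sample
# prints:
# UUID=abc / ext4 defaults 0 1
# #UUID=def none swap sw 0 0
```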
4th
apt-get install -y socat

Install Docker

We are now ready to install Docker, which is needed to run Qlik within containers using Minikube.

I used the steps outlined here to install the needed Docker components. Follow them in sequence: uninstall any existing Docker components (if you're recycling a Linux machine or sharing the resource), then install Docker Engine using the repository (the 1st option).

Once you've run the command below, you should be able to run the hello-world example.

apt-get install docker-ce docker-ce-cli containerd.io

Install Minikube

There are various options available to you for the installation of Minikube. I used the GitHub repository to pull the latest stable version at the time (as indicated at the start of this document). You can grab the latest version as well as previous versions here.

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.2.0/minikube-linux-amd64 && chmod +x minikube && sudo cp minikube /usr/local/bin/ && rm minikube

Install Kubectl

Since my Linux distribution supports snap (a package manager), I used that for the installation of kubectl as it was the easiest option. Various other options are available to you here.

snap install kubectl --classic
kubectl version

Running the second command above should confirm that you have successfully installed kubectl.

Container Runtimes

You need a few more configuration steps to allow the Docker daemon to function properly with your Minikube deployment. Run the commands below:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

Next, you'll need to restart docker.

systemctl daemon-reload
systemctl restart docker

In my deployment, I had instances where minikube reported the following error:

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

After investigation, it seemed that there was a mismatch between the kubelet service and the docker daemon cgroup driver specified. Make sure that these match to avoid similar errors in your deployment. You can check the respective cgroup drivers with the below commands:

cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf 
cat /etc/docker/daemon.json
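For the Docker side of the check, the configured driver can be read straight out of daemon.json. The sketch below greps a trimmed local copy so it is self-contained; on your instance, point the grep at /etc/docker/daemon.json itself:

```shell
# A trimmed local copy of the daemon.json written earlier in this guide.
cat > daemon.sample.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "storage-driver": "overlay2"
}
EOF

# Extract the configured cgroup driver; it must match the kubelet's --cgroup-driver.
grep -o 'native.cgroupdriver=[a-z]*' daemon.sample.json
# prints: native.cgroupdriver=systemd
```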

Install kubelet and kubeadm

Reference here. There is some overlap between the installation of kubectl and the components below. If you prefer, you can install them all at once instead of using snap as I did earlier on.

apt-get install -y kubelet kubeadm

Start Minikube

minikube start --vm-driver=none --memory 4096 --cpus=2

This command usually takes a while (depending on the size of your instance and the resources available). Minikube prints a success message once it has launched.

Confirm that kubectl is configured to communicate with your Minikube cluster by running the following:

kubectl config current-context

The result should be "minikube".

Install Helm

Once again, I used snap to install helm. You can find more information on helm in general here as well as alternative installation options.

snap install helm --classic

Next, we need to configure RBAC with tiller. You can read more about this here if you're curious.

kubectl create -f rbac-config.yaml

The command above references a YAML config file. Save the extract below to your directory of choice with the same filename to keep things simple.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Now, initialize your Helm configuration to use the tiller service account and RBAC config applied above.

helm init --service-account tiller --upgrade

Install Qlik

Add the qlik chart repository to helm. Since I used snap earlier on in the configuration, I need to add /snap/bin to my PATH variable first.

export PATH="/snap/bin/:$PATH"
helm repo add qlik https://qlik.bintray.com/stable

You can verify that the repo was added by running the below.

helm repo list

You should see the qlik repo in your helm repo list.

helm install -n qliksense qlik/qliksense -f values.yaml

The values.yaml contents are below. Since I'm using Auth0, the values.yaml file matches that configuration. I will explain the setup and configuration of Auth0 in the next section of this document.

# MINIKUBE SPECIFIC SETTINGS (do not use with other K8s providers)_____________
#This setting enables dev mode to include a local MongoDB install
devMode:
  enabled: true

#This setting accepts the EULA for the product
engine:
  acceptEULA: "yes"
  
elastic-infra:
 nginx-ingress:
   controller:
     service:
       type: NodePort
       nodePorts:
         https: 32443
     extraArgs.report-node-internal-ip-address: ""

hub:
 ingress:
   annotations:
     nginx.ingress.kubernetes.io/auth-signin: https://$host:32443/login?returnto=$request_uri

management-console:
 ingress:
   annotations:
     nginx.ingress.kubernetes.io/auth-signin: https://$host:32443/login?returnto=$request_uri

identity-providers:
  secrets:
    idpConfigs:
      - discoveryUrl: "https://YOUR_AUTH0_TENANT/.well-known/openid-configuration"
        clientId: "YOUR_CLIENT_ID (single page application, not M2M)"
        clientSecret : "YOUR_CLIENT_SECRET"
        realm: "auth0"
        hostname: "YOUR_HOSTNAME (example - elastic.example)"
        claimsMapping:
        #DO NOT CHANGE ANY OF THE BELOW VALUES
          client_id: [ "client_id", "azp" ]

Depending on your hostname configuration, you may need to add an entry to the hosts file. You can do this as per below.

sudo nano /etc/hosts

Add an entry for your hostname, save it and exit. Using the example hostname above, the new entry into the hosts file would look like this:
127.0.0.1 elastic.example

This simply maps the localhost to also recognise "elastic.example" and resolve the hostname.
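If you prefer not to open an editor, the same entry can be appended from the command line. The sketch below works on a copy of the file so it's safe to try anywhere; on your instance, append to /etc/hosts itself (as root):

```shell
# Work on a copy; on the real instance you would append to /etc/hosts as root.
cp /etc/hosts hosts.demo

# Append the mapping from the example above.
echo '127.0.0.1 elastic.example' >> hosts.demo

# Confirm the entry landed.
grep 'elastic.example' hosts.demo
# prints: 127.0.0.1 elastic.example
```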

You can now run the below command and watch as the various Qlik containers are created. Allow a few minutes for this to happen and all containers to be in "Running" status.

kubectl get pods

Variation
If you don't want to keep running the same command repetitively to see the current status, you could just alter it to this:

watch kubectl get pods

to see pods updating live.

How do I access my minikube deployment?

You are going to need some form of DNS resolution to access your new deployment. I deployed in AWS and used Route 53 to resolve the DNS to the public IP of my instance. My minikube IP is the same as my private IP, so no further configuration is necessary; your setup may be different. I added an A record to my routing table to resolve my friendly name (in the example above, "elastic.example") to my public IP address.

Opening a browser and navigating to https://elastic.example:32443 therefore resolves to my minikube deployment. Using the public IP as-is won't work, since it isn't part of our values.yaml file.

Configure a TLS certificate

Optional
It is recommended that you deploy a TLS certificate as part of your deployment to avoid the warning messages received in the browser stating that your connection is not secure.

Create a file called "secret.yaml" with the following contents:

apiVersion: v1
kind: Secret
metadata:
  name: bardess
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: LS0tBG1CRUdJ
  tls.key: LS0tBG1CRUdJ

The tls.crt value is obtained from your .crt file and the tls.key value from your .key file respectively. Both need to be base64 encoded. You can pull the values as follows:

cat tls.crt | base64

copy the value into your yaml file

cat tls.key | base64

copy the value into your yaml file
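As an aside, kubectl can build this secret for you with `kubectl create secret tls bardess --cert=tls.crt --key=tls.key`, handling the base64 encoding itself. If you do encode by hand, make sure each value lands on a single line. This self-contained sketch uses placeholder bytes (not a real certificate) to show the round trip:

```shell
# Placeholder content standing in for a real certificate; yours comes from your CA.
printf 'dummy-cert-data' > tls.crt

# -w 0 (GNU coreutils) disables line wrapping so the value fits on one YAML line.
CRT_B64=$(base64 -w 0 tls.crt)
echo "$CRT_B64"

# Decoding round-trips to the original bytes, confirming nothing was mangled.
echo "$CRT_B64" | base64 -d
# prints: dummy-cert-data
```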

Now, run the below to create the resource in K8S:

kubectl apply -f secret.yaml

Verify:

kubectl get secret bardess

Update the relevant section of your values.yaml file to reflect the certificate configuration:

elastic-infra:
 nginx-ingress:
   controller:
     service:
       type: NodePort
       nodePorts:
         https: 32443
     extraArgs.report-node-internal-ip-address: ""
     extraArgs:
       default-ssl-certificate: "default/bardess"

Run the below helm command to update your configuration:

helm upgrade --install qliksense qlik/qliksense -f values.yaml

Auth0 setup and configuration

If you don't already have an Auth0 tenant, you can easily create one on https://auth0.com. Once you have a tenant created, sign in.

Single Page Web Application

From the dashboard, select Applications and create a new application. The first (of two) applications will be a Single Page Web Application, as per below:

Auth0App1

Once created, go to the Settings tab (skip the Quick Start). Scroll down to Allowed Callback URLs and enter the following (adjusting for whatever your hostname is). Refer back to your values.yaml file above for the value used. I'm using the example value for continuity and simplicity.

Auth0Callback

I also added the hostname to Allowed Web Origins and Allowed Origins (CORS), although this is not an explicit requirement. It's a safe play to avoid redirect errors and assists with troubleshooting later on if needed. Scroll down to Advanced Settings and confirm that OIDC is toggled on (it is by default). OIDC is required for QSonK8S.

Auth0Advanced1

Next, navigate to Endpoints and note down the OpenID configuration endpoint. You will need this value in your values.yaml file.

Endpoint1

Scroll back up to the top of your newly created Auth0 application and copy the values for ClientID and Client secret.

ClientID

These values need to be entered in your values.yaml file. Using the new application we've just built, your values.yaml file should look as follows:

# MINIKUBE SPECIFIC SETTINGS (do not use with other K8s providers)_____________
#This setting enables dev mode to include a local MongoDB install
devMode:
  enabled: true

#This setting accepts the EULA for the product
engine:
  acceptEULA: "yes"
  
elastic-infra:
 nginx-ingress:
   controller:
     service:
       type: NodePort
       nodePorts:
         https: 32443
     extraArgs.report-node-internal-ip-address: ""

hub:
 ingress:
   annotations:
     nginx.ingress.kubernetes.io/auth-signin: https://$host:32443/login?returnto=$request_uri

management-console:
 ingress:
   annotations:
     nginx.ingress.kubernetes.io/auth-signin: https://$host:32443/login?returnto=$request_uri

identity-providers:
  secrets:
    idpConfigs:
      - discoveryUrl: "https://bardess.auth0.com/.well-known/openid-configuration"
        clientId: "sUGs5zsbQQ11snf02PbLRMHy3nW9TzSF"
        clientSecret : "4u4r78HE6VlT9kk9_CTPwEN2dMq1aTqgL3bkuJIM0JlJaj2UfVgfKkdyqfr0sj5y"
        realm: "auth0"
        hostname: "elastic.example"
        claimsMapping:
          client_id: [ "client_id", "azp" ]

If you deployed Qlik Sense with edge-auth using the default Qlik auth example earlier, you'll need to run the following helm command to update your deployment with the new Auth0 configuration. Any time you change your values.yaml file, you need to run this command again.

helm upgrade --install qliksense qlik/qliksense -f values.yaml

This should be all you need to get authentication working with your standalone Kubernetes cluster running on Minikube. For multi cloud integration, there are additional steps outlined below.

Machine to Machine application

To enable your Windows cluster to distribute applications to your Kubernetes cluster, you need programmatic access between the two. Essentially, the clusters must communicate and be authorized in an automated fashion. This is done through an M2M application, which allows QSonK8S to authenticate against Auth0 using a Client ID and Client Secret.

On your Auth0 tenant, navigate to APIs and Create API. Enter the values as per below. You can call your API anything, but the Identifier should be qlik.api.

Once created, jump to the Permissions tab. In both text boxes, enter "any" as the Scope and click Add.

From here, navigate to the Machine to Machine Applications tab. You'll notice that a new M2M test application was automatically created. Make sure to authorize this application within the "any" scope we created above.

Next, we need to test the auth process from our K8S deployment. Go to the Test tab on your API and copy the API call in the first codebox:

curl --request POST \
  --url https://bardess.auth0.com/oauth/token \
  --header 'content-type: application/json' \
  --data '{"client_id":"pC0fMfucXWJ9VKRbt2129r64marYyHrT","client_secret":"h54HLu7zaf8OalrFFpqApz8e-78Uel3gbx344Y6PWjDzDUxqzpW_CL1WAVGBethz","audience":"qlik.api","grant_type":"client_credentials"}'

Note a few things:
1. The Client ID - this is the value from your M2M application, NOT the single page application configured for interactive logon
2. The Client Secret - the matching client secret from your M2M application
3. The header content-type - this should be application/json
4. You'll need to manually add the -k flag to allow for insecure communication. I will add the steps needed to get around this later.

Run this command from your K8S cluster (our Linux instance) terminal. You should get a response with a bearer token. Copy this value and replace it in the command below to reflect your token.
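If you'd rather not copy the token by hand, you can cut it out of the JSON response with standard tools. The response below is a shortened, made-up sample (not a real token) just to show the extraction; `jq -r .access_token` works too, if jq is installed:

```shell
# Sample of the JSON shape Auth0 returns; a real response carries a much longer token.
RESPONSE='{"access_token":"eyJ0eXAiOiJKV1Qi.sample.sig","token_type":"Bearer"}'

# Pull the access_token field out with sed.
TOKEN=$(echo "$RESPONSE" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')
echo "$TOKEN"
# prints: eyJ0eXAiOiJKV1Qi.sample.sig
```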

curl -k --request GET \
  --url https://elastic.example:32443/ \
  --header 'authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6Ik1URXpSams0TVVVNFJEaEROVVZDUkRGRVF6UTVNVFEyUVRnMk1EWTBOREpGUVRKQk1qSkZSUSJ9.eyJpc3MiOiJodHRwczovL2JhcmRlc3MuYXV0aDAuY29tLyIsInN1YiI6IlVrTmNrSlhBSXpjUEFyREd5V0ZUcFV3VzhMb2xPWE9UQGNsaWVudHMiLCJhdWQiOiJxbGlrLmFwaSIsImlhdCI6MTU2MzQ5MDQ1NiwiZXhwIjoxNTYzNTc2ODU2LCJhenAiOiJVa05ja0pYQUl6Y1BBckRHeVdGVHBVd1c4TG9sT1hPVCIsInNjb3BlIjoiYW55IiwiZ3R5IjoiY2xpZW50LWNyZWRlbnRpYWxzIn0.HpOwYD7W1EO-LEwlyHu7uXPk81MDOaz9iXhZiboqHymTwOCnYjFyMo6Ery9evBhdrvfzqwvn_zwajfqJtyTm2b79HZSWmHeu7aAwlUL0Glm4d4cMY6cD33gLCIqx4RMKnXvxYODY5NMKRPdmN8Ag1eobyOR8eTn020P6GtrbzFeTvYK6iIwQBeYaKaCkalZPZ-DM34B2j-7-3AOFcViV-p7mWmUx8JQMBrw65gz_nvn1yxi-BiJMWnz9AS7wD4aRyXcqEYyVfJ_-4O2QNtfn-E2xFqWyCHCkTSKa9SHJfmLdU4vXFRGO3_yQJV-6PCR_QLWdbCZUnqRyvtHLjKV25Q'

The -k flag enables insecure curl requests, which you'll need prior to adding your TLS configuration and updating /etc/ssl/certs/ca-certificates.crt to allow trust with your custom cert.
You can use https://jwt.io/ to decode your bearer token and make sure the values returned are valid and match your values.yaml config.
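Incidentally, you don't need jwt.io to peek inside a token: a JWT payload is just base64url-encoded JSON, so you can decode the middle segment locally. The token below is a tiny hand-made illustration, not a real Auth0 token; real payload segments sometimes need '=' padding appended before base64 -d will accept them:

```shell
# A minimal illustrative JWT: header.payload.signature (signature faked).
TOKEN='eyJhbGciOiJSUzI1NiJ9.eyJhdWQiOiJxbGlrLmFwaSJ9.sig'

# Grab the payload (second dot-separated segment) and decode it.
PAYLOAD=$(echo "$TOKEN" | cut -d '.' -f2)
echo "$PAYLOAD" | base64 -d
# prints: {"aud":"qlik.api"}
```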

Multi cloud deployment

You should now be ready to add your K8S cluster deployment to your Windows cluster. In order to do this, navigate to https://YOUR_QSfWindowsHOST/api/msc


Select Deployments and Set up new.

The values required in these fields are shown using the example we've been following throughout.

Remember to use the M2M Client ID and Client Secret values here, NOT the single page app values.
You can retrieve the Token endpoint from any of your Auth0 applications by navigating to the bottom of an app, expanding Advanced settings and jumping to the "Endpoints" tab. You will need the "OAuth Token URL" value.


Click Apply. You should receive confirmation that your new deployment was successfully created.

Distribution Policy

The Qlik help site has some really good guides and resources on how you can create various policies to control distribution from your Windows cluster to your K8S cluster(s). I have added a very basic example below to get you started.

Navigate to your QMC (Windows), custom properties and create a new custom property. You can copy my example verbatim.

Next, from the QMC, navigate to Distribution Policies, Create New. Once again, copy and paste my example.

This rule is basically saying "any application which has the custom property @deployments value of 'elastic' should be distributed to the K8S cluster"

Now, navigate to your Qlik Sense hub (Windows). Right-click any application which you own (right-click context won't be available to you if you aren't the owner of the respective application) and select Manage properties.

I recommend choosing a simple, small application for the first deployment

You'll notice the custom property you previously created is now available to assign to this application. Complete the assignment and click Apply.

Finally, go to your Kubernetes cluster hub @ https://elastic.example:32443 to confirm that your application has been successfully distributed.

If you have gotten this far, congratulations, you can now crown yourself as a future master of the universe and no longer have to frown when people talk about things like Docker, Kubernetes and Helm.

Follow me on Twitter and LinkedIn.