Azure Self-managed Kubernetes High Availability for Open5gs [part 2]

[Part 2] Deploy Istio, Rancher, and Rook Ceph on a self-managed highly available Kubernetes cluster in Azure

open5gs Lab Topology


Run the following steps on the controller-0 node only.

SSH to the controller-0 node

ssh kuberoot@

Create a namespace for open5gs

kubectl create ns open5gs

Install the Istio service mesh (optional)

curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.12.2 sh -
cd istio-1.12.2/
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo -y

Add a namespace label to instruct Istio to automatically inject Envoy sidecar proxies when you deploy your application later:

kubectl label namespace open5gs istio-injection=enabled

Install the Istio addons

cd ~/istio-1.12.2/samples/addons
kubectl apply -f prometheus.yaml # Prometheus: metrics data source for monitoring
kubectl apply -f kiali.yaml      # Kiali: Istio dashboard visualization
kubectl apply -f jaeger.yaml     # Jaeger: distributed tracing
kubectl apply -f grafana.yaml    # Grafana: monitoring dashboards

Install HELM

We use Helm to automate deployments in Kubernetes.

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version

Install Rancher (optional)

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system

Install the cert-manager CRDs

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.1/cert-manager.crds.yaml

Add the Jetstack Helm repository

helm repo add jetstack https://charts.jetstack.io

Update your local Helm chart repository cache

helm repo update

Install the cert-manager Helm chart

helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.5.1
kubectl get pods --namespace cert-manager

Install Rancher with Helm

helm install rancher rancher-latest/rancher \
--namespace cattle-system \
--set hostname=<rancher-hostname> \
--set replicas=3
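The hostname passed to --set hostname= must resolve to your Rancher ingress. One hedged option for a lab (not from the original article) is a wildcard-DNS name via the sslip.io service, built from a node's external IP:

```shell
# Hypothetical lab shortcut: sslip.io resolves names like
# rancher.20.124.98.81.sslip.io straight to the embedded IP address,
# so no DNS record needs to be created for the lab.
EXTERNAL_IP=20.124.98.81   # substitute your worker/ingress external IP
RANCHER_HOSTNAME="rancher.${EXTERNAL_IP}.sslip.io"
echo "$RANCHER_HOSTNAME"
```

The resulting name can then be supplied as the hostname value in the helm install command above.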

Wait for Rancher to be rolled out

kubectl -n cattle-system rollout status deploy/rancher

Verify that the Rancher Server is Successfully Deployed

kubectl -n cattle-system get deploy rancher

Install ROOK CEPH from Controller-0

cd ~
git clone --single-branch --branch v1.7.2 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph/

Deploy the Rook Operator

kubectl create -f crds.yaml -f common.yaml -f operator.yaml

Create a Ceph Cluster

kubectl create -f cluster.yaml

Wait until all pods are running:

kubectl get pods -n rook-ceph --watch
# press Ctrl+C to stop watching once everything is Running
kubectl get pods -n rook-ceph

Deploy Rook Ceph toolbox

The Rook Ceph toolbox is a container with common tools used for rook debugging and testing.

cd ~
cd rook/cluster/examples/kubernetes/ceph
kubectl apply -f toolbox.yaml

Viewing the Dashboard External to the Cluster

Node Port

The simplest way to expose the dashboard is a NodePort service, which opens a port on each node's VM that can be reached from outside the cluster.
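For orientation, the manifest applied below is roughly a service like the following sketch (field values assumed from Rook v1.7 defaults, where the mgr serves the dashboard over HTTPS on port 8443):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
spec:
  type: NodePort            # exposes the port on every node's VM
  selector:
    app: rook-ceph-mgr      # routes traffic to the Ceph mgr pods
  ports:
  - name: dashboard
    port: 8443
    targetPort: 8443
    protocol: TCP
```

Kubernetes assigns the external NodePort automatically unless one is pinned with nodePort:.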

Now create the service:

kubectl create -f dashboard-external-https.yaml

You will see the new service rook-ceph-mgr-dashboard-external-https created:

kubectl -n rook-ceph get service

Once the rook-ceph-tools pod is running, you can connect to it with:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash

All available tools in the toolbox are ready for your troubleshooting needs. For example:


  • ceph status
  • ceph osd status
  • ceph df
  • rados df

Exit the toolbox shell to return to controller-0:

exit


Login Credentials

After you connect to the dashboard you will need to login for secure access. Rook creates a default user named admin and generates a secret called rook-ceph-dashboard-password in the namespace where the Rook Ceph cluster is running. To retrieve the generated password, you can run the following:

kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
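Kubernetes stores Secret values base64-encoded, which is why the command above pipes the jsonpath output through base64 --decode. A self-contained illustration of that decode step, with a made-up value:

```shell
# Encode a made-up password the way Kubernetes stores Secret data,
# then decode it back, mirroring the pipeline used above.
encoded=$(printf 'my-demo-pass' | base64)
echo "$encoded"
printf '%s' "$encoded" | base64 --decode && echo
```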

Get the NodePort of the ROOK CEPH dashboard service

NODE_PORT=$(kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard-external-https \
--output=jsonpath='{.spec.ports[0].nodePort}')
echo $NODE_PORT
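With the NodePort and a node's external IP, the dashboard URL can be assembled like this (the IP and port shown are this lab's example values, not universal defaults):

```shell
# Example values from this walkthrough; substitute your own
# NodePort and worker external IP.
NODE_PORT=32269
WORKER_IP=20.124.98.81
DASHBOARD_URL="https://${WORKER_IP}:${NODE_PORT}/"
echo "$DASHBOARD_URL"
```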

Create firewall rule for ROOK CEPH dashboard

This is for simulation/lab purposes only and is not recommended for production; other options are available. Use the NodePort obtained above (32269 in this example) and run the following in the Azure CLI:

az network nsg rule create -g my-resource-group \
-n kubernetes-allow-ceph-dashboard \
--access allow \
--destination-address-prefix '*' \
--destination-port-range 32269 \
--direction inbound \
--nsg-name kubernetes-nsg \
--protocol tcp \
--source-address-prefix '*' \
--source-port-range '*' \
--priority 1003

Get the worker-0 external IP

az vm list-ip-addresses -g my-resource-group -n worker-0 --query "[][*].ipAddress" -o tsv

20.124.98.81

Open the Ceph dashboard URL in a browser and log in as user admin with the password retrieved under Login Credentials.

SSH to the controller-0 node

ssh kuberoot@

Create a shared filesystem pool in Ceph (CephFS)

cd ~
cd rook/cluster/examples/kubernetes/ceph/
kubectl create -f filesystem.yaml

Create a storage class for CephFS

kubectl create -f csi/cephfs/storageclass.yaml
kubectl get sc
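To confirm the storage class provisions volumes, a minimal test claim can be created. This sketch assumes the default storage class name rook-cephfs from csi/cephfs/storageclass.yaml; the claim name is hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-test-pvc     # hypothetical name, safe to delete afterwards
  namespace: open5gs
spec:
  accessModes:
  - ReadWriteMany           # CephFS supports shared read-write access
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs
```

Apply it with kubectl apply -f, then check that the claim reaches the Bound state with kubectl -n open5gs get pvc.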

We are done!!!

In the next step we will deploy Open5GS and access the Rancher web GUI, the Open5GS GUI, Kiali, and Grafana.

Really appreciated the info from the references used for this lab. If you have any questions, just ask :)