Azure Self-Managed Kubernetes High Availability for Open5GS [Part 2]
[Part 2] Deploy Istio, Rancher and Rook Ceph on a Self-Managed Highly Available Kubernetes Cluster in Azure
Open5GS lab topology (source: https://assyafii.com/docs/5g-cloud-native-simulation-with-open5gs/)
Run the following steps on the controller-0 node only.
SSH to the controller-0 node:
ssh kuberoot@20.106.131.198
Create a Namespace for Open5GS
kubectl create ns open5gs
Install the Istio Service Mesh (optional)
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.12.2 sh -
cd istio-1.12.2/
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo -y
Add a namespace label to instruct Istio to automatically inject Envoy sidecar proxies when you deploy your application later:
kubectl label namespace open5gs istio-injection=enabled
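To confirm the label was applied, you can list the namespace with its labels:
kubectl get namespace open5gs --show-labels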
Install Add-on Packages
cd ~/istio-1.12.2/samples/addons
kubectl apply -f prometheus.yaml #Prometheus: metrics data source for monitoring
kubectl apply -f kiali.yaml #Kiali: Istio dashboard visualization
kubectl apply -f jaeger.yaml #Jaeger: distributed tracing
kubectl apply -f grafana.yaml #Grafana: monitoring dashboards (visualization)
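The add-ons are deployed into the istio-system namespace; before moving on, check that all of their pods reach the Running state:
kubectl get pods -n istio-system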
Install Helm
We use Helm to automate deployments in Kubernetes.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version
Install Rancher (optional)
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
Install the cert-manager CRDs
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.1/cert-manager.crds.yaml
Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
Update your local Helm chart repository cache
helm repo update
Install the cert-manager Helm chart
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.5.1
kubectl get pods --namespace cert-manager
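If cert-manager is healthy you should see three running pods: cert-manager, cert-manager-cainjector and cert-manager-webhook. You can also wait on the deployments explicitly (deployment names as created by the v1.5.1 chart):
kubectl -n cert-manager rollout status deploy/cert-manager
kubectl -n cert-manager rollout status deploy/cert-manager-webhook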
Install Rancher with Helm
helm install rancher rancher-latest/rancher \
--namespace cattle-system \
--set hostname=rancher.my.org \
--set replicas=3
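Note that rancher.my.org is a placeholder hostname; it must resolve to your cluster for the web UI to be reachable. For a quick lab test you could, for example, point it at a node's public IP in the hosts file of the machine you browse from (a sketch, assuming 20.124.98.81 is your worker's public IP as retrieved later in this guide):
echo "20.124.98.81 rancher.my.org" | sudo tee -a /etc/hosts #hypothetical IP, replace with your own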
Wait for Rancher to be rolled out
kubectl -n cattle-system rollout status deploy/rancher
Verify that the Rancher Server is Successfully Deployed
kubectl -n cattle-system get deploy rancher
Install Rook Ceph from Controller-0
cd ~
git clone --single-branch --branch v1.7.2 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph/
Deploy the Rook Operator
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
Create a Ceph Cluster
kubectl create -f cluster.yaml
Wait until all pods are running
kubectl get pods -n rook-ceph --watch
Press Ctrl+C to stop watching, then check again:
kubectl get pods -n rook-ceph
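If you only want to check specific components, the Rook pods carry app labels; a couple of examples:
kubectl -n rook-ceph get pods -l app=rook-ceph-mon #monitor pods
kubectl -n rook-ceph get pods -l app=rook-ceph-osd #one OSD pod per disk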
Deploy Rook Ceph toolbox
The Rook Ceph toolbox is a container with common tools used for Rook debugging and testing.
cd ~
cd rook/cluster/examples/kubernetes/ceph
kubectl apply -f toolbox.yaml
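Wait for the toolbox deployment to finish rolling out before trying to exec into it:
kubectl -n rook-ceph rollout status deploy/rook-ceph-tools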
Viewing the Dashboard External to the Cluster
Node Port
The simplest way to expose the service is with a NodePort, which opens a port on each node's VM that can be reached from outside the cluster.
Now create the service:
kubectl create -f dashboard-external-https.yaml
You will see the new service rook-ceph-mgr-dashboard-external-https created:
kubectl -n rook-ceph get service
Once the rook-ceph-tools pod is running, you can connect to it with:
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
All available tools in the toolbox are ready for your troubleshooting needs.
Example:
ceph status
ceph osd status
ceph df
rados df
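A quick way to confirm the cluster is healthy from inside the toolbox is to check that Ceph reports HEALTH_OK:
ceph health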
Exit to controller-0
exit
Login Credentials
After you connect to the dashboard you will need to log in for secure access. Rook creates a default user named admin and generates a secret called rook-ceph-dashboard-password in the namespace where the Rook Ceph cluster is running. To retrieve the generated password, run the following:
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
!Nzy~j6ZbK3.%B,"~"dC
Get the NodePort of the Rook Ceph web dashboard
NODE_PORT=$(kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard-external-https \
--output=jsonpath='{.spec.ports[0].nodePort}')
echo $NODE_PORT
32269
exit
Create firewall rule for the Rook Ceph dashboard
This is for simulation purposes only and is not recommended; other options are available. Using the NodePort from the previous step (32269), run the following in the Azure CLI:
az network nsg rule create -g my-resource-group \
-n kubernetes-allow-ceph-dashboard \
--access allow \
--destination-address-prefix '*' \
--destination-port-range 32269 \
--direction inbound \
--nsg-name kubernetes-nsg \
--protocol tcp \
--source-address-prefix '*' \
--source-port-range '*' \
--priority 1003
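Since this rule opens the dashboard to any source address, you may want to remove it once you are done testing:
az network nsg rule delete -g my-resource-group --nsg-name kubernetes-nsg -n kubernetes-allow-ceph-dashboard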
Get the worker-0 external IP
az vm list-ip-addresses -g my-resource-group -n worker-0 --query "[].virtualMachine.network.publicIpAddresses[*].ipAddress" -o tsv
20.124.98.81
Open the Ceph dashboard URL and log in with the admin user and the password from the Login Credentials step:
https://20.124.98.81:32269
SSH to the controller-0 node:
ssh kuberoot@20.106.131.198
Create a shared filesystem in Ceph (CephFS)
cd ~
cd rook/cluster/examples/kubernetes/ceph/
kubectl create -f filesystem.yaml
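The filesystem is served by MDS (metadata server) pods; confirm they are up before creating the storage class:
kubectl -n rook-ceph get pod -l app=rook-ceph-mds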
Create a storage class for CephFS
kubectl create -f csi/cephfs/storageclass.yaml
kubectl get sc
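To sanity-check the new storage class you can create a small test PVC and confirm it binds. A minimal sketch, assuming the storage class keeps the default name rook-cephfs from the Rook example csi/cephfs/storageclass.yaml (the PVC name cephfs-test-pvc is just for illustration):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-test-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: rook-cephfs
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc cephfs-test-pvc #STATUS should become Bound
kubectl delete pvc cephfs-test-pvc #clean up the test claim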
We are done!!!
In the next part we will deploy Open5GS and access the Rancher web GUI, the Open5GS GUI, Kiali and Grafana.
I really appreciate the info from the references below; if you have any questions, just ask :)
Enjoy!!!
References: