Install metrics server on Kubernetes 1.23

Metrics Server collects CPU and memory utilization from Kubernetes nodes and pods and exposes it through the metrics.k8s.io API, which backs commands such as kubectl top. In this tutorial, we will install Metrics Server on Kubernetes 1.23.


Prerequisites

A working Kubernetes cluster (this tutorial uses a seven-node v1.23.4 cluster):

[ved@arch ~]$ kubectl get nodes
NAME                  STATUS   ROLES                  AGE    VERSION
master1.example.com   Ready    control-plane,master   5d1h   v1.23.4
master2.example.com   Ready    control-plane,master   5d1h   v1.23.4
master3.example.com   Ready    control-plane,master   5d1h   v1.23.4
node1.example.com     Ready    <none>                 5d1h   v1.23.4
node2.example.com     Ready    <none>                 5d1h   v1.23.4
node3.example.com     Ready    <none>                 5d1h   v1.23.4
node4.example.com     Ready    <none>                 5d1h   v1.23.4

Let's create a YAML file containing the Metrics Server manifests and then deploy it. Refer to the Metrics Server releases page (https://github.com/kubernetes-sigs/metrics-server/releases) for available versions.
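If you prefer not to maintain a local manifest, a given release's components.yaml can be applied straight from GitHub (shown here for v0.6.1, matching the image used below); note that the upstream file does not include the --kubelet-insecure-tls flag added in this tutorial:

[ved@arch ~]$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml

We will keep a local copy instead so the container arguments can be customized.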

[ved@arch ~]$ vim metrics-server-components.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
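        # Note: --kubelet-insecure-tls skips verification of the kubelet's serving certificate.
        # This is convenient for lab clusters with self-signed kubelet certs; avoid it in production.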
        - --kubelet-insecure-tls
        - --metric-resolution=15s
        image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
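  # The aggregation layer skips TLS verification here because metrics-server serves a
  # self-signed certificate (generated under --cert-dir=/tmp).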
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
[ved@arch ~]$ kubectl apply -f metrics-server-components.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
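
Before querying metrics, it is worth confirming that the rollout finished and that the aggregated API was registered; once the deployment is ready, the AVAILABLE column of the APIService should report True:

[ved@arch ~]$ kubectl rollout status deployment/metrics-server -n kube-system
[ved@arch ~]$ kubectl get apiservice v1beta1.metrics.k8s.io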

Check the pods in the kube-system namespace for the newly deployed metrics-server pod:

[ved@arch ~]$ kubectl get pods -n kube-system -o wide
NAME                                          READY   STATUS    RESTARTS        AGE    IP                NODE                  NOMINATED NODE   READINESS GATES
coredns-64897985d-67p2p                       1/1     Running   2 (141m ago)    22h    10.38.0.1         node1.example.com     <none>           <none>
coredns-64897985d-zkcn7                       1/1     Running   0               30h    10.47.0.3         node3.example.com     <none>           <none>
kube-apiserver-master1.example.com            1/1     Running   13 (163m ago)   22h    192.168.122.113   master1.example.com   <none>           <none>
kube-apiserver-master2.example.com            1/1     Running   4 (163m ago)    23h    192.168.122.114   master2.example.com   <none>           <none>
kube-apiserver-master3.example.com            1/1     Running   4 (163m ago)    22h    192.168.122.115   master3.example.com   <none>           <none>
kube-controller-manager-master1.example.com   1/1     Running   24 (163m ago)   5d1h   192.168.122.113   master1.example.com   <none>           <none>
kube-controller-manager-master2.example.com   1/1     Running   20 (163m ago)   5d1h   192.168.122.114   master2.example.com   <none>           <none>
kube-controller-manager-master3.example.com   1/1     Running   23 (163m ago)   5d1h   192.168.122.115   master3.example.com   <none>           <none>
kube-proxy-2b84t                              1/1     Running   13 (163m ago)   5d1h   192.168.122.114   master2.example.com   <none>           <none>
kube-proxy-4nxdj                              1/1     Running   12 (163m ago)   5d1h   192.168.122.113   master1.example.com   <none>           <none>
kube-proxy-6st5h                              1/1     Running   19 (38m ago)    5d1h   192.168.122.111   node2.example.com     <none>           <none>
kube-proxy-82vnl                              1/1     Running   14 (141m ago)   5d1h   192.168.122.110   node1.example.com     <none>           <none>
kube-proxy-bzsqm                              1/1     Running   22 (163m ago)   5d1h   192.168.122.115   master3.example.com   <none>           <none>
kube-proxy-s7ttn                              1/1     Running   5 (145m ago)    5d1h   192.168.122.116   node4.example.com     <none>           <none>
kube-proxy-z2k46                              1/1     Running   0               5d1h   192.168.122.112   node3.example.com     <none>           <none>
kube-scheduler-master1.example.com            1/1     Running   23 (163m ago)   5d1h   192.168.122.113   master1.example.com   <none>           <none>
kube-scheduler-master2.example.com            1/1     Running   15 (163m ago)   5d1h   192.168.122.114   master2.example.com   <none>           <none>
kube-scheduler-master3.example.com            1/1     Running   25 (163m ago)   5d1h   192.168.122.115   master3.example.com   <none>           <none>
metrics-server-574849569f-ng7xs               1/1     Running   0               35m    10.34.0.3         node2.example.com     <none>           <none>
weave-net-bmfvp                               2/2     Running   0               21h    192.168.122.112   node3.example.com     <none>           <none>
weave-net-c9pbn                               2/2     Running   11 (145m ago)   21h    192.168.122.116   node4.example.com     <none>           <none>
weave-net-hfh5l                               2/2     Running   4 (163m ago)    21h    192.168.122.114   master2.example.com   <none>           <none>
weave-net-rmmdf                               2/2     Running   4 (163m ago)    21h    192.168.122.113   master1.example.com   <none>           <none>
weave-net-sc8vc                               2/2     Running   4 (163m ago)    21h    192.168.122.115   master3.example.com   <none>           <none>
weave-net-srpnf                               2/2     Running   19 (38m ago)    21h    192.168.122.111   node2.example.com     <none>           <none>
weave-net-xgx2b                               2/2     Running   11 (141m ago)   21h    192.168.122.110   node1.example.com     <none>           <none>
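
To list just the metrics-server pod, you can filter on the k8s-app label set in the manifest:

[ved@arch ~]$ kubectl get pods -n kube-system -l k8s-app=metrics-server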

Let's check node utilization using the kubectl top nodes command:

[ved@arch ~]$ kubectl top nodes
NAME                  CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master1.example.com   131m         6%     777Mi           44%       
master2.example.com   149m         7%     732Mi           42%       
master3.example.com   116m         5%     735Mi           42%       
node1.example.com     76m          3%     588Mi           33%       
node2.example.com     43m          2%     649Mi           37%       
node3.example.com     65m          3%     469Mi           27%       
node4.example.com     27m          1%     398Mi           22%     
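
kubectl top also supports sorting; for example, to list the busiest nodes first (--sort-by accepts cpu or memory in kubectl 1.23):

[ved@arch ~]$ kubectl top nodes --sort-by=cpu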

Let's check pod utilization in the kube-system namespace using kubectl top pods:

[ved@arch ~]$ kubectl top pods -n kube-system
NAME                                          CPU(cores)   MEMORY(bytes)   
coredns-64897985d-67p2p                       2m           16Mi            
coredns-64897985d-zkcn7                       2m           16Mi            
kube-apiserver-master1.example.com            61m          304Mi           
kube-apiserver-master2.example.com            51m          280Mi           
kube-apiserver-master3.example.com            49m          288Mi           
kube-controller-manager-master1.example.com   18m          50Mi            
kube-controller-manager-master2.example.com   2m           20Mi            
kube-controller-manager-master3.example.com   2m           25Mi            
kube-proxy-2b84t                              1m           15Mi            
kube-proxy-4nxdj                              1m           15Mi            
kube-proxy-6st5h                              1m           13Mi            
kube-proxy-82vnl                              1m           15Mi            
kube-proxy-bzsqm                              1m           16Mi            
kube-proxy-s7ttn                              1m           16Mi            
kube-proxy-z2k46                              1m           15Mi            
kube-scheduler-master1.example.com            3m           21Mi            
kube-scheduler-master2.example.com            3m           19Mi            
kube-scheduler-master3.example.com            4m           22Mi            
metrics-server-574849569f-ng7xs               4m           24Mi            
weave-net-bmfvp                               15m          90Mi            
weave-net-c9pbn                               2m           80Mi            
weave-net-hfh5l                               2m           74Mi            
weave-net-rmmdf                               2m           72Mi            
weave-net-sc8vc                               2m           71Mi            
weave-net-srpnf                               2m           81Mi            
weave-net-xgx2b                               2m           80Mi     
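
For a per-container breakdown, kubectl top pods accepts the --containers flag:

[ved@arch ~]$ kubectl top pods -n kube-system --containers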

Conclusion

We have successfully installed Metrics Server on Kubernetes 1.23 and verified node and pod utilization using the kubectl top command.
