Skipping metric because of error: Metric label not set. #417

@Trackhe

Description

kubernetesui/dashboard:v2.0.0-rc3
k8s.gcr.io/metrics-server-arm64:v0.3.6
kubernetesui/metrics-scraper:v1.0.3

kubectl top pods does not seem to be working:
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

watch kubectl get pods --all-namespaces

NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            calico-kube-controllers-5b644bc49c-mw7nq     1/1     Running   8          2d1h
kube-system            calico-node-6dn85                            1/1     Running   2          2d1h
kube-system            coredns-6955765f44-ncqtf                     1/1     Running   2          2d1h
kube-system            coredns-6955765f44-s9dn9                     1/1     Running   2          2d1h
kube-system            etcd-raspnode02                              1/1     Running   2          2d1h
kube-system            kube-apiserver-raspnode02                    1/1     Running   2          2d1h
kube-system            kube-controller-manager-raspnode02           1/1     Running   3          2d1h
kube-system            kube-proxy-ct9g5                             1/1     Running   2          2d1h
kube-system            kube-scheduler-raspnode02                    1/1     Running   3          2d1h
kube-system            metrics-server-555c48b4b7-z9wk8              1/1     Running   0          28h
kubernetes-dashboard   dashboard-metrics-scraper-7b8b58dc8b-rlrdf   1/1     Running   0          30h
kubernetes-dashboard   kubernetes-dashboard-7867cbccbb-5s6h6        1/1     Running   0          30h

metrics-server log:

I0131 18:20:35.663988       1 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0131 18:21:00.523642       1 secure_serving.go:116] Serving securely on [::]:4443

dashboard-metrics-scraper log:

169.254.219.245 - - [01/Feb/2020:22:57:17 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.17"
169.254.219.245 - - [01/Feb/2020:22:57:25 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:57:27 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.17"
169.254.219.245 - - [01/Feb/2020:22:57:37 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.17"
169.254.219.245 - - [01/Feb/2020:22:57:47 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.17"
169.254.219.245 - - [01/Feb/2020:22:57:55 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:57:57 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.17"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.17"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8,calico-kube-controllers-5b644bc49c-mw7nq,coredns-6955765f44-ncqtf,coredns-6955765f44-s9dn9/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/calico-node-6dn85,kube-proxy-ct9g5/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kubernetes-dashboard/pod-list/dashboard-metrics-scraper-7b8b58dc8b-rlrdf,kubernetes-dashboard-7867cbccbb-5s6h6/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8,calico-kube-controllers-5b644bc49c-mw7nq,calico-node-6dn85,coredns-6955765f44-ncqtf,kube-proxy-ct9g5,coredns-6955765f44-s9dn9,kube-controller-manager-raspnode02,kube-scheduler-raspnode02,etcd-raspnode02,kube-apiserver-raspnode02/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kubernetes-dashboard/pod-list/dashboard-metrics-scraper-7b8b58dc8b-rlrdf,kubernetes-dashboard-7867cbccbb-5s6h6/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kubernetes-dashboard/pod-list/dashboard-metrics-scraper-7b8b58dc8b-rlrdf,kubernetes-dashboard-7867cbccbb-5s6h6/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8,calico-kube-controllers-5b644bc49c-mw7nq,coredns-6955765f44-ncqtf,coredns-6955765f44-s9dn9/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8,calico-kube-controllers-5b644bc49c-mw7nq,calico-node-6dn85,coredns-6955765f44-ncqtf,kube-proxy-ct9g5,coredns-6955765f44-s9dn9,kube-controller-manager-raspnode02,kube-scheduler-raspnode02,etcd-raspnode02,kube-apiserver-raspnode02/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kubernetes-dashboard/pod-list/dashboard-metrics-scraper-7b8b58dc8b-rlrdf,kubernetes-dashboard-7867cbccbb-5s6h6/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8,calico-kube-controllers-5b644bc49c-mw7nq,coredns-6955765f44-ncqtf,coredns-6955765f44-s9dn9/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kubernetes-dashboard/pod-list/dashboard-metrics-scraper-7b8b58dc8b-rlrdf,kubernetes-dashboard-7867cbccbb-5s6h6/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/calico-node-6dn85,kube-proxy-ct9g5/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kubernetes-dashboard/pod-list/dashboard-metrics-scraper-7b8b58dc8b-rlrdf,kubernetes-dashboard-7867cbccbb-5s6h6/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kubernetes-dashboard/pod-list/dashboard-metrics-scraper-7b8b58dc8b-rlrdf,kubernetes-dashboard-7867cbccbb-5s6h6/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8,calico-kube-controllers-5b644bc49c-mw7nq,calico-node-6dn85,coredns-6955765f44-ncqtf,kube-proxy-ct9g5,coredns-6955765f44-s9dn9,kube-controller-manager-raspnode02,kube-scheduler-raspnode02/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8,calico-kube-controllers-5b644bc49c-mw7nq,coredns-6955765f44-ncqtf,coredns-6955765f44-s9dn9/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kubernetes-dashboard/pod-list/dashboard-metrics-scraper-7b8b58dc8b-rlrdf,kubernetes-dashboard-7867cbccbb-5s6h6/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:07 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8,calico-kube-controllers-5b644bc49c-mw7nq,calico-node-6dn85,coredns-6955765f44-ncqtf,kube-proxy-ct9g5,coredns-6955765f44-s9dn9,kube-controller-manager-raspnode02,kube-scheduler-raspnode02/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:08 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8,calico-kube-controllers-5b644bc49c-mw7nq,coredns-6955765f44-ncqtf,coredns-6955765f44-s9dn9/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:08 +0000] "GET /api/v1/dashboard/namespaces/kubernetes-dashboard/pod-list/dashboard-metrics-scraper-7b8b58dc8b-rlrdf,kubernetes-dashboard-7867cbccbb-5s6h6/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:08 +0000] "GET /api/v1/dashboard/namespaces/kubernetes-dashboard/pod-list/dashboard-metrics-scraper-7b8b58dc8b-rlrdf,kubernetes-dashboard-7867cbccbb-5s6h6/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:08 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8,calico-kube-controllers-5b644bc49c-mw7nq,coredns-6955765f44-ncqtf,coredns-6955765f44-s9dn9/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
{"level":"error","msg":"Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)","time":"2020-02-01T22:58:11Z"}
169.254.219.245 - - [01/Feb/2020:22:58:15 +0000] "GET /api/v1/dashboard/namespaces/kubernetes-dashboard/pod-list/dashboard-metrics-scraper-7b8b58dc8b-rlrdf,kubernetes-dashboard-7867cbccbb-5s6h6/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:15 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8,calico-kube-controllers-5b644bc49c-mw7nq,coredns-6955765f44-ncqtf,coredns-6955765f44-s9dn9/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:15 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8,calico-kube-controllers-5b644bc49c-mw7nq,coredns-6955765f44-ncqtf,coredns-6955765f44-s9dn9/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:15 +0000] "GET /api/v1/dashboard/namespaces/kubernetes-dashboard/pod-list/dashboard-metrics-scraper-7b8b58dc8b-rlrdf,kubernetes-dashboard-7867cbccbb-5s6h6/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:15 +0000] "GET /api/v1/dashboard/namespaces/kubernetes-dashboard/pod-list/dashboard-metrics-scraper-7b8b58dc8b-rlrdf,kubernetes-dashboard-7867cbccbb-5s6h6/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:15 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8,calico-kube-controllers-5b644bc49c-mw7nq,calico-node-6dn85,coredns-6955765f44-ncqtf,kube-proxy-ct9g5,coredns-6955765f44-s9dn9,kube-controller-manager-raspnode02,kube-scheduler-raspnode02,etcd-raspnode02,kube-apiserver-raspnode02/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:15 +0000] "GET /api/v1/dashboard/namespaces/kubernetes-dashboard/pod-list/dashboard-metrics-scraper-7b8b58dc8b-rlrdf,kubernetes-dashboard-7867cbccbb-5s6h6/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:15 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8,calico-kube-controllers-5b644bc49c-mw7nq,calico-node-6dn85,coredns-6955765f44-ncqtf,kube-proxy-ct9g5,coredns-6955765f44-s9dn9,kube-controller-manager-raspnode02,kube-scheduler-raspnode02,etcd-raspnode02,kube-apiserver-raspnode02/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:15 +0000] "GET /api/v1/dashboard/namespaces/kubernetes-dashboard/pod-list/dashboard-metrics-scraper-7b8b58dc8b-rlrdf,kubernetes-dashboard-7867cbccbb-5s6h6/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:15 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8,calico-kube-controllers-5b644bc49c-mw7nq,calico-node-6dn85,coredns-6955765f44-ncqtf,kube-proxy-ct9g5,coredns-6955765f44-s9dn9,kube-controller-manager-raspnode02,kube-scheduler-raspnode02/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:15 +0000] "GET /api/v1/dashboard/namespaces/kubernetes-dashboard/pod-list/dashboard-metrics-scraper-7b8b58dc8b-rlrdf,kubernetes-dashboard-7867cbccbb-5s6h6/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:15 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8,calico-kube-controllers-5b644bc49c-mw7nq,calico-node-6dn85,coredns-6955765f44-ncqtf,kube-proxy-ct9g5,coredns-6955765f44-s9dn9,kube-controller-manager-raspnode02,kube-scheduler-raspnode02/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:16 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8/metrics/cpu/usage_rate HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"
169.254.219.245 - - [01/Feb/2020:22:58:16 +0000] "GET /api/v1/dashboard/namespaces/kube-system/pod-list/metrics-server-555c48b4b7-z9wk8/metrics/memory/usage HTTP/1.1" 200 14 "" "dashboard/v2.0.0-rc3"

kubernetes-dashboard log:

2020/02/01 23:01:20 [2020-02-01T23:01:20Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:21 [2020-02-01T23:01:21Z] Incoming HTTP/2.0 GET /api/v1/namespace request from 169.254.219.245:36052: 
2020/02/01 23:01:21 Getting list of namespaces
2020/02/01 23:01:21 [2020-02-01T23:01:21Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:25 [2020-02-01T23:01:25Z] Incoming HTTP/2.0 GET /api/v1/pod/%!?(MISSING)itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 169.254.219.245:36052: 
2020/02/01 23:01:25 Getting list of all pods in the cluster
2020/02/01 23:01:25 received 0 resources from sidecar instead of 10
2020/02/01 23:01:25 received 0 resources from sidecar instead of 2
2020/02/01 23:01:25 received 0 resources from sidecar instead of 2
2020/02/01 23:01:25 received 0 resources from sidecar instead of 10
2020/02/01 23:01:25 Getting pod metrics
2020/02/01 23:01:25 received 0 resources from sidecar instead of 8
2020/02/01 23:01:25 received 0 resources from sidecar instead of 2
2020/02/01 23:01:25 received 0 resources from sidecar instead of 8
2020/02/01 23:01:25 received 0 resources from sidecar instead of 2
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
2020/02/01 23:01:25 [2020-02-01T23:01:25Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:26 [2020-02-01T23:01:26Z] Incoming HTTP/2.0 GET /api/v1/namespace request from 169.254.219.245:36052: 
2020/02/01 23:01:26 Getting list of namespaces
2020/02/01 23:01:26 [2020-02-01T23:01:26Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 169.254.219.245:36052: 
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/token request from 169.254.219.245:36052: 
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Incoming HTTP/2.0 GET /api/v1/pod/kubernetes-dashboard/kubernetes-dashboard-7867cbccbb-5s6h6 request from 169.254.219.245:36052: 
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Incoming HTTP/2.0 GET /api/v1/pod/kubernetes-dashboard/kubernetes-dashboard-7867cbccbb-5s6h6/event?itemsPerPage=10&page=1 request from 169.254.219.245:36052: 
2020/02/01 23:01:27 Getting events related to a pod in namespace
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Incoming HTTP/2.0 POST /api/v1/token/refresh request from 169.254.219.245:36052: { contents hidden }
2020/02/01 23:01:27 Getting details of kubernetes-dashboard-7867cbccbb-5s6h6 pod in kubernetes-dashboard namespace
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:27 received 0 resources from sidecar instead of 1
2020/02/01 23:01:27 received 0 resources from sidecar instead of 1
2020/02/01 23:01:27 No persistentvolumeclaims found related to kubernetes-dashboard-7867cbccbb-5s6h6 pod
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Incoming HTTP/2.0 GET /api/v1/pod/kubernetes-dashboard/kubernetes-dashboard-7867cbccbb-5s6h6/persistentvolumeclaim?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 169.254.219.245:36052: 
2020/02/01 23:01:27 No persistentvolumeclaims found related to kubernetes-dashboard-7867cbccbb-5s6h6 pod
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Outcoming response to 169.254.219.245:36052 with 200 status code

During the deployment I had a problem with --kubelet-insecure-tls: metrics-server failed, and the log said the flag was not supported. I removed it, restarted, then added it back, and now there is no error in the log and it seems to be working, but obviously it is not. By the way, sorry for my bad English.
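For reference, this is roughly how the flag sits in the metrics-server container spec (a minimal sketch of the args block in my deployment.yaml; the --cert-dir/--secure-port entries are assumptions based on the upstream 0.3.x manifest, though the secure port does match the "Serving securely on [::]:4443" line above):

args:
  # defaults from the upstream manifest (assumed)
  - --cert-dir=/tmp
  - --secure-port=4443
  # skip verification of the kubelet serving certificate; this is the flag that
  # was first rejected and then accepted after removing and re-adding it
  - --kubelet-insecure-tls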

Before that, I also had the problem that the amd64 image was being pulled instead of the arm64 one; I fixed that in the deployment.yaml.
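That image change amounts to pointing the container at the arm64 variant (a sketch; the tag matches the version listed at the top of this report):

containers:
  - name: metrics-server
    image: k8s.gcr.io/metrics-server-arm64:v0.3.6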
