gashirar's blog

Whisky lover / good food lover / k8s lover

kubectl get all prints "Throttling request took~" messages

I built a v1.18 Kubernetes cluster with kubeadm and ran kubectl get pod -A to check its state:

$ kubectl get pod -A
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            calico-kube-controllers-5b87c97847-4ttpm     1/1     Running   0          20m
kube-system            calico-node-4sxf7                            1/1     Running   0          20m
kube-system            calico-node-77j2m                            1/1     Running   0          19m
kube-system            calico-node-mn9wv                            1/1     Running   0          19m
kube-system            calico-node-snppb                            1/1     Running   0          19m
kube-system            coredns-66bff467f8-2hnmq                     1/1     Running   0          19m
kube-system            coredns-66bff467f8-tttfx                     1/1     Running   0          19m
kube-system            etcd-k8s-master-ad-1-0                       1/1     Running   0          21m
kube-system            kube-apiserver-k8s-master-ad-1-0             1/1     Running   0          21m
kube-system            kube-controller-manager-k8s-master-ad-1-0    1/1     Running   0          21m
kube-system            kube-proxy-pzws8                             1/1     Running   0          19m
kube-system            kube-proxy-t6zlx                             1/1     Running   0          19m
kube-system            kube-proxy-tsfbf                             1/1     Running   0          20m
kube-system            kube-proxy-vm7gq                             1/1     Running   0          19m
kube-system            kube-scheduler-k8s-master-ad-1-0             1/1     Running   0          21m
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-rkngz   1/1     Running   0          20m
kubernetes-dashboard   kubernetes-dashboard-67768d44c-v8hk6         1/1     Running   0          20m

The core components such as the API Server, Controller Manager, and kube-proxy are all up and running, so everything looks fine.

Next I ran kubectl get all to look at the other resources, but it produced the following output, which felt a bit ominous:

$ kubectl get all
I0621 08:47:27.465470    4117 request.go:621] Throttling request took 1.160347495s, request: GET:https://10.0.0.2:6443/apis/discovery.k8s.io/v1beta1?timeout=32s
I0621 08:47:37.465472    4117 request.go:621] Throttling request took 1.997902393s, request: GET:https://10.0.0.2:6443/apis/authorization.k8s.io/v1?timeout=32s
I0621 08:47:47.465490    4117 request.go:621] Throttling request took 4.797741119s, request: GET:https://10.0.0.2:6443/apis/scheduling.k8s.io/v1?timeout=32s
I0621 08:47:58.265434    4117 request.go:621] Throttling request took 1.198248957s, request: GET:https://10.0.0.2:6443/apis/apiregistration.k8s.io/v1?timeout=32s
I0621 08:48:08.465434    4117 request.go:621] Throttling request took 4.198026222s, request: GET:https://10.0.0.2:6443/apis/rbac.authorization.k8s.io/v1?timeout=32s
I0621 08:48:19.865431    4117 request.go:621] Throttling request took 1.197977955s, request: GET:https://10.0.0.2:6443/apis/apps/v1?timeout=32s
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   19m
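Before looking at the cause: kubectl get all has to discover every API group on the server, which fires off a burst of GET requests, and client-go rate-limits its own requests with a token-bucket limiter. The following sketch models why the reported waits grow over the course of the command. It is a rough illustration, not client-go's actual implementation; the default QPS 5 / burst 10 values are client-go's rest.Config defaults, and kubectl may configure different limits.

```python
def throttle_delays(n_requests, qps=5.0, burst=10):
    """Rough model of a token-bucket rate limiter like client-go's.

    Assume all n_requests are issued at t=0 (as discovery roughly does).
    The bucket starts full with `burst` tokens and refills at `qps`
    tokens per second. Returns the wait each request experiences.
    """
    delays = []
    for i in range(n_requests):
        # The first `burst` requests pass immediately; request i then
        # has to wait for token number (i - burst + 1) to be refilled.
        delays.append(max(0.0, (i - burst + 1) / qps))
    return delays

delays = throttle_delays(40)
print(delays[9])   # 0.0 -> still within the burst, no log line
print(delays[15])  # 1.2 -> would show up as "Throttling request took 1.2s"
print(delays[39])  # 6.0 -> waits keep growing as discovery continues

# A more generous limiter makes the waits (and the log lines) disappear:
print(throttle_delays(40, qps=50.0, burst=40)[39])  # 0.0
```

This matches the shape of the output above: early requests are silent, and later discovery requests report progressively longer multi-second waits.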

It turns out this is an effect of a change new in v1.18: client-side API request throttling is now reported in the logs.

API request throttling (due to a high rate of requests) is now reported in client-go logs at log level 2.

(Kubernetes v1.18 release notes, kubernetes.io)

As the note says, these messages are emitted as client-go logs.

The presence of these messages may indicate to the administrator the need to tune the cluster accordingly.

As it also says, the cluster may need tuning accordingly. As for which parameters to change, I'll set aside some time to look into that later.