failed to find task: no running task found #316
Closed
Description
I'm using cri-containerd 1.0.0-alpha.0 with Kubernetes 1.8 and I'm getting the following errors when running a pod:
kubectl errors
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
kube-dns-7797cb8758-85jjt 0/3 rpc error: code = Unknown desc = failed to find task: no running task found: task aadd31f2c23113dd9ac111eeecf3b54e5956e74f5efaaab925064438f707fa74 not found: not found 1680 41m
kube-dns-7797cb8758-bj6vd 0/3 rpc error: code = Unknown desc = failed to find task: no running task found: task 3ccdf15ed6b84f7bc0a4ffd2de36063fe4348d02dc9ad4ae77b6bb2056814c76 not found: not found 1257 41m
I'm in the process of troubleshooting this issue, but figured I'd file this and post my logs just in case others are running into problems.
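For reference, something along these lines should show whether containerd still has the containers and tasks behind these IDs. This is only a rough sketch of the checks I'm running: it assumes cri-containerd keeps its containers in containerd's "k8s.io" namespace and that ctr is installed on the worker node.
kubectl -n kube-system describe pod kube-dns-7797cb8758-85jjt   # should surface the same rpc error in the container state/events
ctr --namespace k8s.io containers ls                            # containers cri-containerd has created in containerd
ctr --namespace k8s.io tasks ls                                 # running tasks; an ID missing here is what produces "no running task found"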
kubelet logs
kubelet --version
Kubernetes v1.8.0
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.548694 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"7f0d88eb26d235e59a01357242c492511c390a8e1c578541f8bbcdcb32b442ed"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.548713 2379 pod_container_deletor.go:77] Container "7f0d88eb26d235e59a01357242c492511c390a8e1c578541f8bbcdcb32b442ed" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.548723 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"81d54a771054d36995f5649f937ff16b18af47c463b5be2bf47689595bc1a680"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.548736 2379 pod_container_deletor.go:77] Container "81d54a771054d36995f5649f937ff16b18af47c463b5be2bf47689595bc1a680" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.548754 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"b14c3161014e8ca37ec07c6f7f595bcdcd62701b485c94912c67e34ed2ebc201"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.548765 2379 pod_container_deletor.go:77] Container "b14c3161014e8ca37ec07c6f7f595bcdcd62701b485c94912c67e34ed2ebc201" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.548782 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"2cae0a143153cecc8e70c525e7b2df737358301eaccbd972a13720acfe2e2e95"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.548794 2379 pod_container_deletor.go:77] Container "2cae0a143153cecc8e70c525e7b2df737358301eaccbd972a13720acfe2e2e95" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.548803 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"8a01b6a3600f82f9b61343211e271bda917448d7be488ee652c5e47fd9f231a5"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.548813 2379 pod_container_deletor.go:77] Container "8a01b6a3600f82f9b61343211e271bda917448d7be488ee652c5e47fd9f231a5" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.548821 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"190ed6381f92d270888b6675ba93f997453b61cde847bde4f62e2e2bc2f08f50"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.548831 2379 pod_container_deletor.go:77] Container "190ed6381f92d270888b6675ba93f997453b61cde847bde4f62e2e2bc2f08f50" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.548840 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"f60c24daf9b7251df1c98d1a633d0ac9b0a1945445c5e6acca6e0ce1c04528f7"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.548849 2379 pod_container_deletor.go:77] Container "f60c24daf9b7251df1c98d1a633d0ac9b0a1945445c5e6acca6e0ce1c04528f7" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.548867 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"ed9194bb7bb8000bf64e6b06e1ad7cb62ef114bf41a58a9506f42466d9ea03a9"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.548878 2379 pod_container_deletor.go:77] Container "ed9194bb7bb8000bf64e6b06e1ad7cb62ef114bf41a58a9506f42466d9ea03a9" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.548886 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"c5249d485df38f2c05803e21b32a0eb6baa12ce368fe99c56e83cc6891a29471"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.548896 2379 pod_container_deletor.go:77] Container "c5249d485df38f2c05803e21b32a0eb6baa12ce368fe99c56e83cc6891a29471" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.548904 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"df10430e976a0ebe7b8266da9eaeed51972ea4e952af114152ab6480c5201e5e"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.548915 2379 pod_container_deletor.go:77] Container "df10430e976a0ebe7b8266da9eaeed51972ea4e952af114152ab6480c5201e5e" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.548924 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"d100ecf7dbff4542aef0a23932a002eb908365cb9fb712fa2e60998a13f8fe68"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.551745 2379 pod_container_deletor.go:77] Container "d100ecf7dbff4542aef0a23932a002eb908365cb9fb712fa2e60998a13f8fe68" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.551764 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"3daea57401248f7bc2df9f900da7758e97883d66b793b678f97620cba2c291de"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.551795 2379 pod_container_deletor.go:77] Container "3daea57401248f7bc2df9f900da7758e97883d66b793b678f97620cba2c291de" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.551807 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"5bda1bc3fbd2e1e14da66c9a20e1eeaf1a2ffe5e00f90d5b39da4ee62fb06018"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.551819 2379 pod_container_deletor.go:77] Container "5bda1bc3fbd2e1e14da66c9a20e1eeaf1a2ffe5e00f90d5b39da4ee62fb06018" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.551827 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"3e17d958c76759b999345fb5746ea17dd0d0256d8600b76d8f0363c0c6d96b49"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.551837 2379 pod_container_deletor.go:77] Container "3e17d958c76759b999345fb5746ea17dd0d0256d8600b76d8f0363c0c6d96b49" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.551846 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"ca536731e42bbf4bccd3b9aed09d88341eceb7bc7adfd3b3d368df5107462241"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.551866 2379 pod_container_deletor.go:77] Container "ca536731e42bbf4bccd3b9aed09d88341eceb7bc7adfd3b3d368df5107462241" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.551876 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"3d1b8c3fbd6e6bc3e34439a3b42be64f5a731fce7a41506d77732935bbc09043"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.551886 2379 pod_container_deletor.go:77] Container "3d1b8c3fbd6e6bc3e34439a3b42be64f5a731fce7a41506d77732935bbc09043" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.551894 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"371631b106ad8d0ccc222db6239879d49ac956f266a16b33bffeaebec8f651cc"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.551904 2379 pod_container_deletor.go:77] Container "371631b106ad8d0ccc222db6239879d49ac956f266a16b33bffeaebec8f651cc" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.551913 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"992fe6b7a1879a6fa2d8cd84b919d66e2e7628b1e995959493418c6c909a5f38"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.551922 2379 pod_container_deletor.go:77] Container "992fe6b7a1879a6fa2d8cd84b919d66e2e7628b1e995959493418c6c909a5f38" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.551940 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"d5b6145e818e65fed608b2587257409b6e54c3a30628d800d979f03660d9f85e"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.551951 2379 pod_container_deletor.go:77] Container "d5b6145e818e65fed608b2587257409b6e54c3a30628d800d979f03660d9f85e" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.551960 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"344d48beadd56bc93c3e78cce239d258400d5f76158ceb6e80dd1333956df952"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.551970 2379 pod_container_deletor.go:77] Container "344d48beadd56bc93c3e78cce239d258400d5f76158ceb6e80dd1333956df952" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.551978 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"7785ba70c8feb709772ec75e45ed9449b2382b95c32cd11e238f38d74005eb13"}
Oct 01 21:40:14 worker-0 kubelet[2379]: W1001 21:40:14.551987 2379 pod_container_deletor.go:77] Container "7785ba70c8feb709772ec75e45ed9449b2382b95c32cd11e238f38d74005eb13" not found in pod's containers
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.852751 2379 kuberuntime_manager.go:499] Container {Name:kubedns Image:gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4 Command:[] Args:[--domain=cluster.local. --dns-port=10053 --config-dir=/kube-dns-config --v=2] WorkingDir: Ports:[{Name:dns-local HostPort:0 ContainerPort:10053 Protocol:UDP HostIP:} {Name:dns-tcp-local HostPort:0 ContainerPort:10053 Protocol:TCP HostIP:} {Name:metrics HostPort:0 ContainerPort:10055 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:PROMETHEUS_PORT Value:10055 ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:178257920 scale:0} d:{Dec:<nil>} s:170Mi Format:BinarySI}] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:73400320 scale:0} d:{Dec:<nil>} s:70Mi Format:BinarySI}]} VolumeMounts:[{Name:kube-dns-config ReadOnly:false MountPath:/kube-dns-config SubPath: MountPropagation:<nil>} {Name:kube-dns-token-klbrd ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck/kubedns,Port:10054,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readiness,Port:8081,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.852829 2379 kuberuntime_manager.go:499] Container {Name:dnsmasq Image:gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4 Command:[] Args:[-v=2 -logtostderr -configDir=/etc/k8s/dns/dnsmasq-nanny -restartDnsmasq=true -- -k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] WorkingDir: Ports:[{Name:dns HostPort:0 ContainerPort:53 Protocol:UDP HostIP:} {Name:dns-tcp HostPort:0 ContainerPort:53 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:150 scale:-3} d:{Dec:<nil>} s:150m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:kube-dns-config ReadOnly:false MountPath:/etc/k8s/dns/dnsmasq-nanny SubPath: MountPropagation:<nil>} {Name:kube-dns-token-klbrd ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck/dnsmasq,Port:10054,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Oct 01 21:40:14 worker-0 kubelet[2379]: I1001 21:40:14.852877 2379 kuberuntime_manager.go:499] Container {Name:sidecar Image:gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4 Command:[] Args:[--v=2 --logtostderr --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A] WorkingDir: Ports:[{Name:metrics HostPort:0 ContainerPort:10054 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:kube-dns-token-klbrd ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:10054,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Oct 01 21:40:15 worker-0 kubelet[2379]: I1001 21:40:15.588465 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"35c19f891547385ad8601f01604111f3810c1b7e6f2acd61e453381824a45a77"}
Oct 01 21:40:15 worker-0 kubelet[2379]: W1001 21:40:15.588514 2379 pod_container_deletor.go:77] Container "35c19f891547385ad8601f01604111f3810c1b7e6f2acd61e453381824a45a77" not found in pod's containers
Oct 01 21:40:15 worker-0 kubelet[2379]: I1001 21:40:15.588529 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"3db5389c898d71e25b6e5cd9461c9c819217c4dc16444deff2525fd1081f3087"}
Oct 01 21:40:15 worker-0 kubelet[2379]: W1001 21:40:15.588547 2379 pod_container_deletor.go:77] Container "3db5389c898d71e25b6e5cd9461c9c819217c4dc16444deff2525fd1081f3087" not found in pod's containers
Oct 01 21:40:15 worker-0 kubelet[2379]: I1001 21:40:15.588567 2379 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)", event: &pleg.PodLifecycleEvent{ID:"208364ca-a6ed-11e7-9591-42010af0000c", Type:"ContainerDied", Data:"7cbee89c5c754bf06b9516fada95d982142b06cb6acd9389e65f8a3adcf1d175"}
Oct 01 21:40:15 worker-0 kubelet[2379]: W1001 21:40:15.588579 2379 pod_container_deletor.go:77] Container "7cbee89c5c754bf06b9516fada95d982142b06cb6acd9389e65f8a3adcf1d175" not found in pod's containers
Oct 01 21:40:16 worker-0 kubelet[2379]: I1001 21:40:16.522936 2379 kubelet.go:1904] SyncLoop (container unhealthy): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)"
Oct 01 21:40:19 worker-0 kubelet[2379]: I1001 21:40:19.583483 2379 kubelet.go:1904] SyncLoop (container unhealthy): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)"
Oct 01 21:40:21 worker-0 kubelet[2379]: I1001 21:40:21.493430 2379 kubelet.go:1904] SyncLoop (container unhealthy): "kube-dns-7797cb8758-85jjt_kube-system(208364ca-a6ed-11e7-9591-42010af0000c)"
Oct 01 21:40:54 worker-0 kubelet[2379]: I1001 21:40:54.751958 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:41:54 worker-0 kubelet[2379]: E1001 21:41:54.644025 2379 kubelet.go:1230] Image garbage collection failed multiple times in a row: no imagefs label for configured runtime
Oct 01 21:41:54 worker-0 kubelet[2379]: I1001 21:41:54.753178 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:42:54 worker-0 kubelet[2379]: I1001 21:42:54.754671 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:43:54 worker-0 kubelet[2379]: I1001 21:43:54.755891 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:44:54 worker-0 kubelet[2379]: I1001 21:44:54.757938 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:45:54 worker-0 kubelet[2379]: I1001 21:45:54.759340 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:46:54 worker-0 kubelet[2379]: E1001 21:46:54.644992 2379 kubelet.go:1230] Image garbage collection failed multiple times in a row: no imagefs label for configured runtime
Oct 01 21:46:54 worker-0 kubelet[2379]: I1001 21:46:54.760739 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:47:54 worker-0 kubelet[2379]: I1001 21:47:54.761902 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:48:54 worker-0 kubelet[2379]: I1001 21:48:54.763161 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:49:54 worker-0 kubelet[2379]: I1001 21:49:54.764542 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:50:54 worker-0 kubelet[2379]: I1001 21:50:54.766008 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:51:54 worker-0 kubelet[2379]: E1001 21:51:54.645941 2379 kubelet.go:1230] Image garbage collection failed multiple times in a row: no imagefs label for configured runtime
Oct 01 21:51:54 worker-0 kubelet[2379]: I1001 21:51:54.767145 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:52:54 worker-0 kubelet[2379]: I1001 21:52:54.768349 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:53:54 worker-0 kubelet[2379]: I1001 21:53:54.769474 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:54:54 worker-0 kubelet[2379]: I1001 21:54:54.770887 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:55:14 worker-0 kubelet[2379]: E1001 21:55:14.531136 2379 remote_runtime.go:246] RemoveContainer "900c597053116579658a1e60742330dbab5569896642a8f53a74de2a5aed511d" from runtime service failed: rpc error: code = Unknown desc = failed to delete containerd container "900c597053116579658a1e60742330dbab5569896642a8f53a74de2a5aed511d": context deadline exceeded: unknown
Oct 01 21:55:14 worker-0 kubelet[2379]: E1001 21:55:14.531505 2379 kuberuntime_gc.go:125] Failed to remove container "900c597053116579658a1e60742330dbab5569896642a8f53a74de2a5aed511d": rpc error: code = Unknown desc = failed to delete containerd container "900c597053116579658a1e60742330dbab5569896642a8f53a74de2a5aed511d": context deadline exceeded: unknown
Oct 01 21:55:14 worker-0 kubelet[2379]: E1001 21:55:14.861274 2379 remote_runtime.go:187] CreateContainer in sandbox "dfd986fd8c90a2ae865e898658d4d8a3626e8eca541bf4a5cdd2c677b806e3ba" from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Oct 01 21:55:14 worker-0 kubelet[2379]: E1001 21:55:14.861391 2379 kuberuntime_manager.go:714] container start failed: CreateContainerError: context deadline exceeded
Oct 01 21:55:54 worker-0 kubelet[2379]: I1001 21:55:54.771980 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:56:54 worker-0 kubelet[2379]: E1001 21:56:54.646907 2379 kubelet.go:1230] Image garbage collection failed multiple times in a row: no imagefs label for configured runtime
Oct 01 21:56:54 worker-0 kubelet[2379]: I1001 21:56:54.773208 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:57:54 worker-0 kubelet[2379]: I1001 21:57:54.774422 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:58:54 worker-0 kubelet[2379]: I1001 21:58:54.775515 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 21:59:54 worker-0 kubelet[2379]: I1001 21:59:54.776867 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 22:00:54 worker-0 kubelet[2379]: I1001 22:00:54.777973 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
Oct 01 22:01:54 worker-0 kubelet[2379]: E1001 22:01:54.647842 2379 kubelet.go:1230] Image garbage collection failed multiple times in a row: no imagefs label for configured runtime
Oct 01 22:01:54 worker-0 kubelet[2379]: I1001 22:01:54.779338 2379 qos_container_manager_linux.go:320] [ContainerManager]: Updated QoS cgroup configuration
cri-containerd logs
cri-containerd --version
1.0.0-alpha.0
Oct 01 21:39:41 worker-0 cri-containerd[2376]: E1001 21:39:41.835151 2376 instrumented_service.go:225] UpdateContainerResources for "2cae0a143153cecc8e70c525e7b2df737358301eaccbd972a13720acfe2e2e95" failed, error: failed to find task: no running task found: task 2cae0a143153cecc8e70c525e7b2df737358301eaccbd972a13720acfe2e2e95 not found: not found
Oct 01 21:39:41 worker-0 cri-containerd[2376]: E1001 21:39:41.881839 2376 instrumented_service.go:225] UpdateContainerResources for "7c65e7fde732a0435da03fa516202c1b2faca53f72d7be0ee53107fa0d2b8d87" failed, error: failed to find task: no running task found: task 7c65e7fde732a0435da03fa516202c1b2faca53f72d7be0ee53107fa0d2b8d87 not found: not found
Oct 01 21:39:41 worker-0 cri-containerd[2376]: E1001 21:39:41.932892 2376 instrumented_service.go:225] UpdateContainerResources for "c5249d485df38f2c05803e21b32a0eb6baa12ce368fe99c56e83cc6891a29471" failed, error: failed to find task: no running task found: task c5249d485df38f2c05803e21b32a0eb6baa12ce368fe99c56e83cc6891a29471 not found: not found
Oct 01 21:39:46 worker-0 cri-containerd[2376]: E1001 21:39:46.866708 2376 instrumented_service.go:225] UpdateContainerResources for "8428deae6799bbbf768f1da2ec66b5bb7d8288de74ddb1bc8bc135141198417b" failed, error: failed to find task: no running task found: task 8428deae6799bbbf768f1da2ec66b5bb7d8288de74ddb1bc8bc135141198417b not found: not found
Oct 01 21:39:46 worker-0 cri-containerd[2376]: E1001 21:39:46.906638 2376 instrumented_service.go:225] UpdateContainerResources for "cf58b80eb50b4c97c2e2c65387d1c957f800e3b812346875186d3bf1a7f92a34" failed, error: failed to find task: no running task found: task cf58b80eb50b4c97c2e2c65387d1c957f800e3b812346875186d3bf1a7f92a34 not found: not found
Oct 01 21:39:46 worker-0 cri-containerd[2376]: E1001 21:39:46.946375 2376 instrumented_service.go:225] UpdateContainerResources for "881371ef9ae8dd56fad57cb489c0e2ccb2e1cc7cb7d959b76a27e7e666674bd9" failed, error: failed to find task: no running task found: task 881371ef9ae8dd56fad57cb489c0e2ccb2e1cc7cb7d959b76a27e7e666674bd9 not found: not found
Oct 01 21:39:49 worker-0 cri-containerd[2376]: E1001 21:39:49.930043 2376 instrumented_service.go:225] UpdateContainerResources for "61b02b1309e8d1aa0efae73a5d2b82cb379468aede2e7b91cfb86d20c4079a54" failed, error: failed to find task: no running task found: task 61b02b1309e8d1aa0efae73a5d2b82cb379468aede2e7b91cfb86d20c4079a54 not found: not found
Oct 01 21:39:49 worker-0 cri-containerd[2376]: E1001 21:39:49.974671 2376 instrumented_service.go:225] UpdateContainerResources for "ded57f593438e543867ad93c27fe96cb0f25eb2b21e777406f425d2cbbe7b9a1" failed, error: failed to find task: no running task found: task ded57f593438e543867ad93c27fe96cb0f25eb2b21e777406f425d2cbbe7b9a1 not found: not found
Oct 01 21:39:50 worker-0 cri-containerd[2376]: E1001 21:39:50.011101 2376 instrumented_service.go:225] UpdateContainerResources for "344d48beadd56bc93c3e78cce239d258400d5f76158ceb6e80dd1333956df952" failed, error: failed to find task: no running task found: task 344d48beadd56bc93c3e78cce239d258400d5f76158ceb6e80dd1333956df952 not found: not found
Oct 01 21:39:51 worker-0 cri-containerd[2376]: E1001 21:39:51.835327 2376 instrumented_service.go:225] UpdateContainerResources for "3d1b8c3fbd6e6bc3e34439a3b42be64f5a731fce7a41506d77732935bbc09043" failed, error: failed to find task: no running task found: task 3d1b8c3fbd6e6bc3e34439a3b42be64f5a731fce7a41506d77732935bbc09043 not found: not found
Oct 01 21:39:51 worker-0 cri-containerd[2376]: E1001 21:39:51.873243 2376 instrumented_service.go:225] UpdateContainerResources for "37ebedf3ab85abf23c0938de94e956c47ae4b8185e06a6f88ca5e40a88abe631" failed, error: failed to find task: no running task found: task 37ebedf3ab85abf23c0938de94e956c47ae4b8185e06a6f88ca5e40a88abe631 not found: not found
Oct 01 21:39:51 worker-0 cri-containerd[2376]: E1001 21:39:51.916330 2376 instrumented_service.go:225] UpdateContainerResources for "2ed9d1f3a9d602023c4a12e860bf6d64683257a4f95af9e9c734b219980aba32" failed, error: failed to find task: no running task found: task 2ed9d1f3a9d602023c4a12e860bf6d64683257a4f95af9e9c734b219980aba32 not found: not found
Oct 01 21:39:56 worker-0 cri-containerd[2376]: E1001 21:39:56.899343 2376 instrumented_service.go:225] UpdateContainerResources for "c8465b9865b7e332da8432b3c81ae60d5ed3ca23b9a1cde647d9dd2b70519afb" failed, error: failed to find task: no running task found: task c8465b9865b7e332da8432b3c81ae60d5ed3ca23b9a1cde647d9dd2b70519afb not found: not found
Oct 01 21:39:56 worker-0 cri-containerd[2376]: E1001 21:39:56.950706 2376 instrumented_service.go:225] UpdateContainerResources for "d10f73f199286cdc87ff9b5f7d47e4bab7b0451a5d9ca113e24fbd57aee4a308" failed, error: failed to find task: no running task found: task d10f73f199286cdc87ff9b5f7d47e4bab7b0451a5d9ca113e24fbd57aee4a308 not found: not found
Oct 01 21:39:57 worker-0 cri-containerd[2376]: E1001 21:39:57.003858 2376 instrumented_service.go:225] UpdateContainerResources for "8a01b6a3600f82f9b61343211e271bda917448d7be488ee652c5e47fd9f231a5" failed, error: failed to find task: no running task found: task 8a01b6a3600f82f9b61343211e271bda917448d7be488ee652c5e47fd9f231a5 not found: not found
Oct 01 21:39:59 worker-0 cri-containerd[2376]: E1001 21:39:59.926163 2376 instrumented_service.go:225] UpdateContainerResources for "df10430e976a0ebe7b8266da9eaeed51972ea4e952af114152ab6480c5201e5e" failed, error: failed to find task: no running task found: task df10430e976a0ebe7b8266da9eaeed51972ea4e952af114152ab6480c5201e5e not found: not found
Oct 01 21:39:59 worker-0 cri-containerd[2376]: E1001 21:39:59.975650 2376 instrumented_service.go:225] UpdateContainerResources for "c66b303c1a947d79a023038434efd000f2c29ddcb262be8a6f3f4243b2ab7e84" failed, error: failed to find task: no running task found: task c66b303c1a947d79a023038434efd000f2c29ddcb262be8a6f3f4243b2ab7e84 not found: not found
Oct 01 21:40:00 worker-0 cri-containerd[2376]: E1001 21:40:00.041284 2376 instrumented_service.go:225] UpdateContainerResources for "190ed6381f92d270888b6675ba93f997453b61cde847bde4f62e2e2bc2f08f50" failed, error: failed to find task: no running task found: task 190ed6381f92d270888b6675ba93f997453b61cde847bde4f62e2e2bc2f08f50 not found: not found
Oct 01 21:40:01 worker-0 cri-containerd[2376]: E1001 21:40:01.833161 2376 instrumented_service.go:225] UpdateContainerResources for "8d5c233ef358f85174c9ce9181add3589c52d99aafe917c8fac89346e469f8d2" failed, error: failed to find task: no running task found: task 8d5c233ef358f85174c9ce9181add3589c52d99aafe917c8fac89346e469f8d2 not found: not found
Oct 01 21:40:01 worker-0 cri-containerd[2376]: E1001 21:40:01.870762 2376 instrumented_service.go:225] UpdateContainerResources for "3d6eaa4d33de657002c05be543af46ee6aa3ab26a1861df7bbaaa1e3cfdb5cff" failed, error: failed to find task: no running task found: task 3d6eaa4d33de657002c05be543af46ee6aa3ab26a1861df7bbaaa1e3cfdb5cff not found: not found
Oct 01 21:40:01 worker-0 cri-containerd[2376]: E1001 21:40:01.915003 2376 instrumented_service.go:225] UpdateContainerResources for "4cc8c3f49e206b27b735d89f031c55b0e2e2a4bdf72c3b6001d5fd1409ff6c80" failed, error: failed to find task: no running task found: task 4cc8c3f49e206b27b735d89f031c55b0e2e2a4bdf72c3b6001d5fd1409ff6c80 not found: not found
Oct 01 21:40:06 worker-0 cri-containerd[2376]: E1001 21:40:06.862601 2376 instrumented_service.go:225] UpdateContainerResources for "3e17d958c76759b999345fb5746ea17dd0d0256d8600b76d8f0363c0c6d96b49" failed, error: failed to find task: no running task found: task 3e17d958c76759b999345fb5746ea17dd0d0256d8600b76d8f0363c0c6d96b49 not found: not found
Oct 01 21:40:06 worker-0 cri-containerd[2376]: E1001 21:40:06.901281 2376 instrumented_service.go:225] UpdateContainerResources for "46e9b54f71f7b963c5d801e150af9af6c25ef00c2ca481ea797ea6afdb3a654f" failed, error: failed to find task: no running task found: task 46e9b54f71f7b963c5d801e150af9af6c25ef00c2ca481ea797ea6afdb3a654f not found: not found
Oct 01 21:40:06 worker-0 cri-containerd[2376]: E1001 21:40:06.943027 2376 instrumented_service.go:225] UpdateContainerResources for "baed0d0aede0170f8b4a3da233003334dee538c961c76ad02cfa9227d20470f8" failed, error: failed to find task: no running task found: task baed0d0aede0170f8b4a3da233003334dee538c961c76ad02cfa9227d20470f8 not found: not found
Oct 01 21:40:09 worker-0 cri-containerd[2376]: E1001 21:40:09.925104 2376 instrumented_service.go:225] UpdateContainerResources for "7f0d88eb26d235e59a01357242c492511c390a8e1c578541f8bbcdcb32b442ed" failed, error: failed to find task: no running task found: task 7f0d88eb26d235e59a01357242c492511c390a8e1c578541f8bbcdcb32b442ed not found: not found
Oct 01 21:40:09 worker-0 cri-containerd[2376]: E1001 21:40:09.964224 2376 instrumented_service.go:225] UpdateContainerResources for "736b6ec3be2e66a1d40d287c372c32de5bfc72dcc2ef1f28a3c3820b98143514" failed, error: failed to find task: no running task found: task 736b6ec3be2e66a1d40d287c372c32de5bfc72dcc2ef1f28a3c3820b98143514 not found: not found
Oct 01 21:40:10 worker-0 cri-containerd[2376]: E1001 21:40:10.011429 2376 instrumented_service.go:225] UpdateContainerResources for "35c19f891547385ad8601f01604111f3810c1b7e6f2acd61e453381824a45a77" failed, error: failed to find task: no running task found: task 35c19f891547385ad8601f01604111f3810c1b7e6f2acd61e453381824a45a77 not found: not found
Oct 01 21:40:11 worker-0 cri-containerd[2376]: E1001 21:40:11.834780 2376 instrumented_service.go:225] UpdateContainerResources for "b12989f6dda1dbdf43c4dba0cde829c3b1eef246fb7ea50241bd24164735d1aa" failed, error: failed to find task: no running task found: task b12989f6dda1dbdf43c4dba0cde829c3b1eef246fb7ea50241bd24164735d1aa not found: not found
Oct 01 21:40:11 worker-0 cri-containerd[2376]: E1001 21:40:11.873444 2376 instrumented_service.go:225] UpdateContainerResources for "aadd31f2c23113dd9ac111eeecf3b54e5956e74f5efaaab925064438f707fa74" failed, error: failed to find task: no running task found: task aadd31f2c23113dd9ac111eeecf3b54e5956e74f5efaaab925064438f707fa74 not found: not found
Oct 01 21:40:11 worker-0 cri-containerd[2376]: E1001 21:40:11.910775 2376 instrumented_service.go:225] UpdateContainerResources for "7733b51743ee749242910abba603d7aaf426d11dd969e9ae7451fa6425d70d18" failed, error: failed to find task: no running task found: task 7733b51743ee749242910abba603d7aaf426d11dd969e9ae7451fa6425d70d18 not found: not found
Oct 01 21:55:14 worker-0 cri-containerd[2376]: E1001 21:55:14.530019 2376 container_remove.go:56] failed to reset removing state for container "900c597053116579658a1e60742330dbab5569896642a8f53a74de2a5aed511d": failed to checkpoint status to "/var/lib/cri-containerd/containers/900c597053116579658a1e60742330dbab5569896642a8f53a74de2a5aed511d/status": open /var/lib/cri-containerd/containers/900c597053116579658a1e60742330dbab5569896642a8f53a74de2a5aed511d/.tmp-status399887194: no such file or directory
Oct 01 21:55:14 worker-0 cri-containerd[2376]: E1001 21:55:14.530050 2376 instrumented_service.go:174] RemoveContainer for "900c597053116579658a1e60742330dbab5569896642a8f53a74de2a5aed511d" failed, error: failed to delete containerd container "900c597053116579658a1e60742330dbab5569896642a8f53a74de2a5aed511d": context deadline exceeded: unknown
Oct 01 21:55:14 worker-0 cri-containerd[2376]: E1001 21:55:14.862694 2376 instrumented_service.go:112] CreateContainer within sandbox "dfd986fd8c90a2ae865e898658d4d8a3626e8eca541bf4a5cdd2c677b806e3ba" for &ContainerMetadata{Name:kubedns,Attempt:561,} failed, error: failed to create containerd container: context deadline exceeded: unknown